diff --git a/v1.13/snaps-kubernetes/PRODUCT.yaml b/v1.13/snaps-kubernetes/PRODUCT.yaml
new file mode 100644
index 0000000000..a12d07cd9a
--- /dev/null
+++ b/v1.13/snaps-kubernetes/PRODUCT.yaml
@@ -0,0 +1,8 @@
vendor: CableLabs
name: SNAPS-Kubernetes
version: v1.2
website_url: https://github.com/cablelabs/snaps-kubernetes
documentation_url: https://github.com/cablelabs/snaps-kubernetes/blob/master/doc/source/install/install.md
type: installer
description: 'An installation tool to install Kubernetes on Linux machines that have been initialized with SNAPS-Boot.'
product_logo_url: https://brandfolder.com/cablelabs/attachments/oozlse-fpkc80-8r1zme?dl=true&resource_key=oopwe3-ac5uj4-gf3qau&resource_type=Brandfolder
\ No newline at end of file
diff --git a/v1.13/snaps-kubernetes/README.md b/v1.13/snaps-kubernetes/README.md
new file mode 100644
index 0000000000..9e702b141a
--- /dev/null
+++ b/v1.13/snaps-kubernetes/README.md
@@ -0,0 +1,825 @@
# Installation

This document serves as a user guide specifying the steps/actions a user must
perform to bring up a Kubernetes cluster using SNAPS-Kubernetes. The document
also gives an overview of the deployment architecture and the hardware and software
requirements that must be fulfilled to bring up a Kubernetes cluster.

This document covers:

- A high-level overview of the SNAPS-Kubernetes components
- Provisioning of the various configuration YAML files
- Deployment of the SNAPS-Kubernetes environment

The intended audience of this document includes the following:

- Users involved in the deployment, maintenance and testing of SNAPS-Kubernetes
- Users interested in deploying a Kubernetes cluster with basic features

## 1 Introduction

### 1.1 Terms and Conventions

The terms and typographical conventions used in this document are listed and
explained in the table below.

| Convention | Usage |
| ---------- | ----- |
| Host Machines | Machines in data centers that SNAPS-Kubernetes prepares to serve control plane and data plane services for the Kubernetes cluster. SNAPS-Kubernetes deploys Kubernetes services on these machines. |
| Management node | Machine that runs the SNAPS-Kubernetes software. |

### 1.2 Acronyms

The acronyms expanded below are fundamental to the information in this document.

| Acronym | Explanation |
| ------- | ----------- |
| PXE | Preboot Execution Environment |
| IP | Internet Protocol |
| COTS | Commercial Off the Shelf |
| DHCP | Dynamic Host Configuration Protocol |
| TFTP | Trivial FTP |
| VLAN | Virtual Local Area Network |

## 2 Environment Prerequisites

The current release of SNAPS-Kubernetes requires the following hardware and software
components.

### 2.1 Hardware Requirements

#### Host Machines

| Hardware Required | Description | Configuration |
| ----------------- | ----------- | ------------- |
| Servers with 64-bit Intel/AMD architecture | Commodity hardware | 16 GB RAM, 80+ GB hard disk with 2 network cards. Servers should be network-boot enabled. |

#### Management Node

| Hardware Required | Description | Configuration |
| ----------------- | ----------- | ------------- |
| Server with 64-bit Intel/AMD architecture | Commodity hardware | 16 GB RAM, 80+ GB hard disk with 1 network card. |

### 2.2 Software Requirements

| Category | Software version |
| -------- | ---------------- |
| Operating System | Ubuntu 16.04 |
| Programming Language | Python 2.7.12 |
| Automation | Ansible 2.4 or later |
| Framework | Kubernetes v1.14.3 |
| Containerization | Docker 17.03 CE |

### 2.3 Network Requirements

- At least one network interface card is required in all the node machines.
- All servers should use the same naming scheme for Ethernet ports. If ports on one of the servers are named eno1, eno2, etc., then ports on the other servers should be named eno1, eno2, etc. as well.
- All host machines and the management node should have access to the same networks, one of which must be routed to the Internet.
- The management node needs http/https and ftp proxy settings if it sits behind a corporate firewall.

## 3 Deployment View and Configurations

Project SNAPS-Kubernetes is a Python-based framework leveraging
Ansible playbooks, Kubespray and a workflow engine. To provision your
bare-metal hosts, it is recommended, but not required, to leverage SNAPS-Boot.

![Deployment and Configuration Overview](https://raw.githubusercontent.com/wiki/cablelabs/snaps-kubernetes/images/install-deploy-config-overview-1.png?token=Al5dreR4VK2dsb7h6D5beMZmWnkZpNNNks5bTmfhwA%3D%3D)

![Deployment and Configuration Workflow](https://raw.githubusercontent.com/wiki/cablelabs/snaps-kubernetes/images/install-deploy-config-workflow-1.png?token=Al5drVkAVPNQfJcPFNezfl1WIVYoJLbAks5bTme3wA%3D%3D)

SNAPS-Kubernetes executes on a server that is responsible for deploying
the control and compute services on servers running Ubuntu 16.04. The
two-stage deployment is outlined below.

1. Provision nodes with Ubuntu 16.04 and configure the network (see snaps-boot)
1. Build server setup (snaps-kubernetes)
    1. Node setup - install prerequisites (i.e. docker-ce 17.03)
    1. Kubernetes cluster deployment via Kubespray
    1. Post-installation processes such as CNI, node labeling, and metrics server installation

## 4 Kubernetes Cluster Deployment

The user is required to prepare a configuration file, modeled on the sample
`snaps_k8s/k8s-deploy.yaml` shipped with the repository; the file's location
becomes the `-f` argument to the main Python script `iaas_launch.py`. The
configuration parameters are described below.

### 4.1 Project Configuration

*Required:* Yes

| Parameter | Required | Description |
| --------- | -------- | ----------- |
| Project_name | Y | Name of the project (e.g. My_project). By using different project names, a user can install multiple clusters from the same SNAPS-Kubernetes folder onto different host machines. |
| kubespray_branch | N | Branch of the CableLabs fork of kubespray to use (default: 'master'). |
| Git_branch | Y | Branch to check out for Kubespray (e.g. master) |
| Version | Y | Kubernetes version (e.g. v1.14.3) |
| enable_metrics_server | N | Flag used to enable or disable the metrics server. Value: True/False (Default: False) |
| enable_helm | N | Flag used to install Helm. Value: True/False (Default: False) |
| Exclusive_CPU_alloc_support | N | Whether the cluster should enforce exclusive CPU allocation. Value: True/False ***Currently not working*** |
| enable_logging | N | Whether the cluster should enable logging. Value: True/False |
| log_level | N | Log level (fatal/error/warn/info/debug/trace) |
| logging_port | N | Logging port (e.g. 30011) |

### 4.2 Basic Authentication

Parameters specified here define the access control mechanism for the
cluster; currently only basic HTTP authentication is supported.
*Required:* Yes

| Parameter | Required | Description |
| --------- | -------- | ----------- |
| user_name | N | User name to access the cluster |
| user_password | N | User password to access the host machine |
| user_id | N | User id to access the cluster |

Define this set of parameters for each user that requires access to the cluster.

### 4.3 Node Configuration

Parameters defined here specify the cluster nodes, their roles, SSH access
credentials and registry access. These parameters fall under the
`node_configuration` tag. A combined sketch covering sections 4.1 through 4.3
follows the table below.

*Required:* Yes
ParameterOptionalityDescription
Host		Define this set of parameters for each host machine (one such section per machine).
+ HostnameNHostname to be used for the machine. (It should be unique across the cluster)
+ ipNIP of the primary interface (Management Interface, allocated after OS provisioning).
+ registry_portNRegistry port of the host/master. Example: “2376 / 4386”
+ node_typeNNode type (master, minion).
+ label_keyNDefine the name for label key. Example: zone
+ label_valueNDefine the name for label value. Example: master
+ PasswordNPassword of host machine
+ UserNUser id to access the root user of the host machine
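To make the preceding tables concrete, below is a minimal sketch of the corresponding portion of `k8s-deploy.yaml`. The key names are taken from the tables in sections 4.1 through 4.3, but the exact nesting in the sample file shipped with the repository may differ, so treat this as illustrative rather than authoritative:

```yaml
kubernetes:
  Project_name: My_project
  Git_branch: master
  Version: v1.14.3
  enable_metrics_server: True

  basic_authentication:
    - user:
        user_name: admin
        user_password: ChangeMe
        user_id: admin

  node_configuration:
    - host:
        hostname: master1
        ip: 10.0.0.10          # management interface IP, assigned during OS provisioning
        registry_port: "2376"
        node_type: master
        label_key: zone
        label_value: master
        password: ChangeMe
        user: root
    - host:
        hostname: minion1
        ip: 10.0.0.11
        registry_port: "4386"
        node_type: minion
        label_key: zone
        label_value: minion
        password: ChangeMe
        user: root
```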
### 4.4 Docker Repository

Parameters defined here control the deployment of the private Docker repository for
the cluster.

*Required:* Yes

| Parameter | Required | Description |
| --------- | -------- | ----------- |
| Ip | N | Server IP to host the private Docker repository |
| Port | N | Define the registry port. Example: "4000" |
| password | N | Password of the Docker machine. Example: ChangeMe |
| User | N | User id to access the host machine. |

### 4.5 Proxies

Parameters defined here specify the proxies to be used for Internet access.

*Required:* Yes

| Parameter | Required | Description |
| --------- | -------- | ----------- |
| ftp_proxy | Y | Proxy to be used for FTP. (For no proxy, give the value "") |
| http_proxy | Y | Proxy to be used for HTTP traffic. (For no proxy, give the value "") |
| https_proxy | Y | Proxy to be used for HTTPS traffic. (For no proxy, give the value "") |
| no_proxy | N | Comma-separated list of the IPs of all host machines. Localhost 127.0.0.1 should be included here. |

### 4.6 Persistent Volume

SNAPS-Kubernetes supports 3 approaches to providing storage to container
workloads:

- Ceph
- HostPath
- Rook - a cloud-native implementation of Ceph

#### Ceph Volume

***Note: Ceph support is currently broken and may be removed in the near future***

Parameters specified here control the installation of Ceph processes on cluster
nodes. These nodes form a Ceph cluster, from which storage is provided to pods.
SNAPS-Kubernetes creates a PV and PVC for each set of claim parameters, which
can later be consumed by application pods. A sketch of this section follows the
table below.

*Required:* No
ParameterOptionalityDescription
hostDefine this set of parameters for each host machine.
+ hostnameYHostname to be used for the machine. (It should be unique across the cluster)
+ ipYIP of the primary interface
+ node_typeYNode type (ceph_controller/ceph_osd).
+ passwordYPassword of host machine
+ userYUser id to access the host machine
+ Ceph_claimsDefine this set only for ceph_controller nodes
+ + claim_parameteresUser can define multiple claim parameters under a host
+ + + claim_nameYDefine name of persistent volume claim. For Ex. "claim2"
+ + + storageYDefines storage capacity of persistent volume claim. For Ex. "4Gi"
+ second_storage	Y	List of OSD storage devices. Define this field only when node_type is ceph_osd.
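Purely as a shape reference (recall that Ceph support is flagged as broken above), a `Ceph_Volume` section assembled from the parameter table might look like the sketch below; the key names come from the table, and the nesting is an assumption:

```yaml
Ceph_Volume:
  - host:
      hostname: master1
      ip: 10.0.0.10
      node_type: ceph_controller
      password: ChangeMe
      user: root
      Ceph_claims:
        - claim_parameteres:        # key spelled as in the table above
            claim_name: "claim2"
            storage: "4Gi"
  - host:
      hostname: minion1
      ip: 10.0.0.11
      node_type: ceph_osd
      password: ChangeMe
      user: root
      second_storage:
        - /dev/sdb
```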
#### Host Volume

Parameters specified here are used to define the PVC and PV for the HostPath volume
type. SNAPS-Kubernetes creates a PV and PVC for each set of claim parameters,
which can later be consumed by application pods. A sketch follows the table below.

*Required:* Yes
ParameterOptionalityDescription
Host_VolumeUser can define multiple claims under this section
+ claim_parameteres		YAML tag grouping the parameters of a single claim (define one such entry per claim)
+ + Claim_nameYDefine name of persistent volume claim. For Ex. "claim4"
+ + storageYDefines storage capacity of Host volume claim. For Ex. "4Gi"
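A minimal `Host_Volume` sketch assembled from the table above (key names as documented; the nesting is assumed):

```yaml
Host_Volume:
  - claim_parameteres:              # key spelled as in the table above
      Claim_name: "claim4"
      storage: "4Gi"
  - claim_parameteres:
      Claim_name: "claim5"
      storage: "8Gi"
```

Each `claim_parameteres` entry yields one PV/PVC pair that pods can reference by claim name.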
#### Rook Volume

Parameters specified here are used to define PVs for Rook volumes.
SNAPS-Kubernetes creates a PV for each volume configured, which can later be
consumed by application pods. A sketch follows the keys table below.

*Required:* No
ParameterOptionalityDescription
Rook_VolumenoUser can define multiple volumes under this section
Each entry in the `Rook_Volume` dictionary list takes the following keys:
ParameterOptionalityDescription
name	no	PV name (must not contain '_' or other special characters; '-' is allowed)
sizenoThe volume size in GB
pathnoThe host_path value
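A minimal `Rook_Volume` sketch assembled from the keys above (values are placeholders):

```yaml
Rook_Volume:
  - name: rook-vol1        # no '_' or other special characters; '-' is allowed
    size: 4                # volume size in GB
    path: /mnt/rook-vol1   # host_path value
  - name: rook-vol2
    size: 8
    path: /mnt/rook-vol2
```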
### 4.7 Networks

SNAPS-Kubernetes supports the following 6 solutions for cluster-wide networking:

- Weave
- Flannel
- Calico
- MacVlan
- SRIOV
- DHCP

Weave, Calico and Flannel provide cluster-wide networking and can be used as the
default networking solution for the cluster. MacVlan and SRIOV, on the other hand,
are specific to individual nodes and are installed only on specified nodes.

SNAPS-Kubernetes uses CNI plug-ins to orchestrate these networking solutions.

#### Default Networks

Parameters defined here specify the default networking solution for the
cluster.

SNAPS-Kubernetes installs the CNI plugin for the network type defined by the
parameter `networking_plugin` and creates a network to be consumed by Kubernetes
pods. The user can choose Weave, Flannel or Calico as the default networking
solution. A sketch of the default network together with a Multus Flannel entry
follows the Flannel table below.

*Required:* Yes

| Parameter | Required | Description |
| --------- | -------- | ----------- |
| networking_plugin | N | Network plugin to be used for default networking. Allowed values are weave, contiv, flannel, calico, cilium (***does not work***) |
| service_subnet | N | Subnet to be used for Kubernetes service deployments (e.g. 10.241.0.0/18) |
| pod_subnet | N | Subnet for pod networking (e.g. 10.241.64.0/18) |
| network_name | N | Default network to be created by SNAPS-Kubernetes. Note: the name must not contain capital letters or "_". |
| isMaster | N | Identifies the primary network; the default route will point to it. One plugin acts as the "master" plugin and is responsible for configuring the Kubernetes network on the pod interface "eth0". isMaster should be true for exactly one plugin. Value: true/false |

#### Multus Networks

The Multus networking solution is required to support application pods with more
than one network interface. It provides a way to group multiple networking
solutions and invoke them as required by the pods.

SNAPS-Kubernetes supports Multus as a CNI plugin with the following networking
providers:

- Weave
- Flannel
- SRIOV
- MacVlan
- DHCP

#### CNI

List of network providers to be used under Multus. The user can define any
combination of Weave, Flannel, SRIOV, Macvlan and DHCP.

##### CNI Configuration

Parameters defined here specify the network subnet, gateway, range and other
network-intrinsic parameters.

> **Note:** The user must provide configuration parameters for each network provider specified under the CNI tag (mentioned above).

#### Flannel

***Flannel is currently broken and may compromise the integrity of your cluster***

Define this section when Flannel is included under Multus.

*Required:* Yes
ParameterOptionalityDescription
flannel_networks
+ network_name	N	Name of the network. SNAPS-Kubernetes creates a Flannel network for the cluster with this name. Note: the name must not contain capital letters or "_".
+ networkNNetwork range in CIDR format to be used for the entire flannel network.
+ subnetNSubnet range for each node of the cluster.
+ isMaster	N	The "masterplugin" is the only netconf option of Multus CNI; it identifies the primary network, and the default route will point to it. One plugin acts as the "master" plugin, responsible for configuring the Kubernetes network on the pod interface "eth0". Value: true/false
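As referenced from the Default Networks section above, here is a minimal sketch combining a default network with a Multus Flannel entry. The key names are drawn from the tables; the `Default_Network`/`Multus_network` grouping is an assumption about the sample file's layout (and recall that Flannel under Multus is flagged as broken):

```yaml
Default_Network:
  networking_plugin: weave
  service_subnet: 10.241.0.0/18
  pod_subnet: 10.241.64.0/18
  network_name: default-network     # no capital letters or '_'
  isMaster: "true"

Multus_network:
  - CNI:
      - flannel
  - CNI_Configuration:
      - flannel_networks:
          - network_name: flannel-net
            network: 172.16.0.0/16    # CIDR for the entire flannel network
            subnet: 172.16.1.0/24     # per-node subnet range
            isMaster: "false"
```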
#### Weave

***Weave is currently broken and may compromise the integrity of your cluster***

Define this section when Weave is included under Multus. A sketch follows the table below.

*Required:* Yes
ParameterOptionalityDescription
weave_networks
+ network_name	N	Name of the network. SNAPS-Kubernetes creates a Weave network for the cluster with this name. Note: the name must not contain capital letters or "_".
+ subnet	N	Define the subnet for the network.
+ isMaster	N	The "masterplugin" is the only netconf option of Multus CNI; it identifies the primary network, and the default route will point to it. One plugin acts as the "master" plugin, responsible for configuring the Kubernetes network on the pod interface "eth0". Value: true/false
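A minimal `weave_networks` sketch under Multus (key names from the table above; placement under `CNI_Configuration` is assumed, and Weave under Multus is flagged as broken):

```yaml
weave_networks:
  - network_name: weave-net2   # no capital letters or '_'
    subnet: 10.80.0.0/24
    isMaster: "false"
```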
#### DHCP

No configuration is required. When the DHCP CNI is given, SNAPS-Kubernetes configures
DHCP services on each node and facilitates dynamic IP allocation via an external
DHCP server.

#### Macvlan

***This CNI option is being exercised and validated in CI***

Define this section when Macvlan is included under Multus.

The user should define this set of parameters for each host where a Macvlan network is to be created. A sketch follows the table below.

*Required:* Yes
ParameterOptionalityDescription
macvlan_networksDefine this section for each node where Macvlan network is to be deployed
+ hostnameNHostname of the node where Macvlan network is to be created
+ parent_interface	N	Kubernetes creates a VLAN-tagged interface for the Macvlan network. The tagged interface is created from the interface name defined here.
+ vlanidNVLAN id of the network
+ ip	N	IP to be assigned to the VLAN-tagged interface. SNAPS-Kubernetes creates a separate VLAN-tagged interface to be used as the primary interface for the Macvlan network.
+ network_name	N	This field defines the Macvlan network name. Note: the name must not contain capital letters or "_".
+ masterNUse field parent_interface followed by vlan_id with a dot in between (parent_interface.vlanid).
+ type	N	host-local or dhcp. If dhcp is used, SNAPS-Kubernetes configures this network to request IPs from an external DHCP server. If host-local is used, SNAPS-Kubernetes configures this network to assign IPs from IPAM.
+ rangeStartNFirst IP of the network range to be used for Macvlan network (Not required in case type is dhcp).
+ rangeEndNLast IP of the network range to be used for Macvlan network (Not required in case type is dhcp).
+ gatewayNDefine the Gateway
+ routes_dst	N	Use the value 0.0.0.0/0 (not required in case type is dhcp).
+ subnetNDefine the Subnet for Network in CIDR format (Not required in case type is dhcp).
+ isMaster	N	The "masterplugin" is the only netconf option of Multus CNI; it identifies the primary network, and the default route will point to it. One plugin acts as the "master" plugin, responsible for configuring the Kubernetes network on the pod interface "eth0". Value: true/false
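A minimal `macvlan_networks` sketch assembled from the table above (key names as documented; the `macvlan_network` grouping is assumed):

```yaml
macvlan_networks:
  - macvlan_network:
      hostname: minion1
      parent_interface: eno2
      vlanid: 100
      ip: 172.20.100.5
      network_name: macvlan100-net    # no capital letters or '_'
      master: eno2.100                # parent_interface.vlanid
      type: host-local
      rangeStart: 172.20.100.10
      rangeEnd: 172.20.100.50
      gateway: 172.20.100.1
      routes_dst: 0.0.0.0/0
      subnet: 172.20.100.0/24
      isMaster: "false"
```

With `type: dhcp`, the rangeStart/rangeEnd/routes_dst/subnet keys can be omitted, per the table above.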
#### SRIOV

***SRIOV is currently untested and should be used with caution***

Define this section when SRIOV is included under Multus. A sketch follows the table below.

*Required:* Yes
ParameterOptionalityDescription
host		Define this set of parameters for each node where an SRIOV network is to be deployed
+ hostnameHostname of the node
+ networks		Define this set of parameters for each SRIOV network to be deployed on the host. The user can create multiple networks on the same host.
+ + network_nameNName of the SRIOV network.
+ + sriov_intfNName of the physical interface to be used for SRIOV network (the network adaptor should be SRIOV capable).
+ + type	N	host-local or dhcp. If dhcp is used, SNAPS-Kubernetes configures this network to request IPs from an external DHCP server. If host-local is used, SNAPS-Kubernetes configures this network to assign IPs from IPAM.
+ + rangeStart	N	First IP of the network range to be used for the SRIOV network (not required in case type is dhcp).
+ + rangeEnd	N	Last IP of the network range to be used for the SRIOV network (not required in case type is dhcp).
+ + sriov_gatewayNDefine the Gateway
+ + sriov_subnetNDefine the IP subnet for the SRIOV network.
+ + isMaster	N	The "masterplugin" is the only netconf option of Multus CNI; it identifies the primary network, and the default route will point to it. One plugin acts as the "master" plugin, responsible for configuring the Kubernetes network on the pod interface "eth0". Value: true/false
+ + dpdk_enable	Y	Enable or disable DPDK.
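A minimal SRIOV sketch assembled from the table above (key names as documented; the top-level `sriov_networks` grouping is assumed, and SRIOV is flagged as untested):

```yaml
sriov_networks:
  - host:
      hostname: minion1
      networks:
        - network_name: sriov-net1
          sriov_intf: eno3            # must be an SRIOV-capable adapter
          type: host-local
          rangeStart: 172.30.0.10
          rangeEnd: 172.30.0.50
          sriov_gateway: 172.30.0.1
          sriov_subnet: 172.30.0.0/24
          isMaster: "false"
          dpdk_enable: false
```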
## 5 Installation Steps

### 5.1 Kubernetes Cluster Deployment

#### 5.1.1 Obtain snaps-kubernetes

Clone snaps-kubernetes:
```Shell
git clone https://github.com/cablelabs/snaps-kubernetes
```

#### 5.1.2 Configuration

Go to the directory `{git directory}/snaps-kubernetes/snaps_k8s`

Modify the file `k8s-deploy.yaml` for provisioning of Kubernetes nodes on the
cluster host machines (master/etcd and minion). Modify this file according to
your environment; refer to section 4 for the parameter descriptions.

#### 5.1.3 Installation

Ensure the build server has Python 2.7 and python-pip installed. The user account
executing `iaas_launch.py` must have passwordless sudo access on the build server
and must have its `~/.ssh/id_rsa.pub` injected into the 'root' user of each host
machine.

Set up the Python runtime (note: it is recommended to leverage a virtual
Python runtime, especially if the build server also performs functions
other than simply executing snaps-kubernetes):

```Shell
pip install -r {path_to_repo}/requirements-git.txt
pip install -e {path_to_repo}
```

Ensure all host machines have Python and SSH installed
(i.e. `apt-get install -y python python-pip`); this should already be done if
snaps-boot performed the initial setup.

Run `iaas_launch.py` as shown below:

```Shell
python {path_to_repo}/iaas_launch.py -f {absolute or relative path}/k8s-deploy.yaml -k8_d
```

This will install the Kubernetes services on the host machines. The Kubernetes
installation will start and typically completes in about 60 minutes.

> Note: if installation fails due to the error "FAILED - RETRYING: container_download | Download containers if pull is required or told to always pull (all nodes) (4 retries left).", please check your Internet connection.

kubectl will also be installed on the bootstrap node.

After cluster installation, if the user needs to run kubectl commands on the
bootstrap node, run:

```Shell
export KUBECONFIG={project artifact dir}/node-kubeconfig.yaml
```

### 5.2 Cleanup Kubernetes Cluster

Use these steps to clean an existing cluster.

Go to the directory `~/snaps-kubernetes`

Clean up the previous Kubernetes deployment:

```Shell
python iaas_launch.py -f snaps_k8s/k8s-deploy.yaml -k8_c
```
+Jun 23 21:12:21.343: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Jun 23 21:12:21.358: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Jun 23 21:12:21.358: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'nodelocaldns' (0 seconds elapsed) +Jun 23 21:12:21.358: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) +Jun 23 21:12:21.358: INFO: e2e test version: v1.13.0 +Jun 23 21:12:21.359: INFO: kube-apiserver version: v1.13.5 +SS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info + should check if Kubernetes master services is included in cluster-info [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 21:12:21.359: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename kubectl +Jun 23 21:12:21.434: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[It] should check if Kubernetes master services is included in cluster-info [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: validating cluster-info +Jun 23 21:12:21.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 cluster-info' +Jun 23 21:12:22.049: INFO: stderr: "" +Jun 23 21:12:22.049: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.241.0.1:443\x1b[0m\n\x1b[0;32mcoredns\x1b[0m is running at \x1b[0;33mhttps://10.241.0.1:443/api/v1/namespaces/kube-system/services/coredns:dns/proxy\x1b[0m\n\x1b[0;32mkubernetes-dashboard\x1b[0m is running at \x1b[0;33mhttps://10.241.0.1:443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 21:12:22.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-gjwdt" for this suite. 
+Jun 23 21:12:28.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 21:12:28.101: INFO: namespace: e2e-tests-kubectl-gjwdt, resource: bindings, ignored listing per whitelist +Jun 23 21:12:28.148: INFO: namespace e2e-tests-kubectl-gjwdt deletion completed in 6.095075606s + +• [SLOW TEST:6.789 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl cluster-info + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should check if Kubernetes master services is included in cluster-info [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 21:12:28.148: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating the pod +Jun 23 21:12:28.218: INFO: PodSpec: initContainers in spec.initContainers +Jun 23 21:13:18.571: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9b0d5e24-95fb-11e9-9086-ba438756bc32", GenerateName:"", Namespace:"e2e-tests-init-container-99mnb", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-99mnb/pods/pod-init-9b0d5e24-95fb-11e9-9086-ba438756bc32", UID:"9b0fa29c-95fb-11e9-8956-98039b22fc2c", ResourceVersion:"1918", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63696921148, loc:(*time.Location)(0x7b33b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"218538167"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hncjg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(0xc001463f40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hncjg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hncjg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hncjg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00162d918), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"minion", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001972960), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00162d9a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00162d9c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00162d9c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00162d9cc)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696921148, loc:(*time.Location)(0x7b33b80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696921148, loc:(*time.Location)(0x7b33b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696921148, loc:(*time.Location)(0x7b33b80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696921148, loc:(*time.Location)(0x7b33b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.197.149.12", PodIP:"10.251.128.6", StartTime:(*v1.Time)(0xc00179aac0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00045c150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00045c310)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://e1d37663cf228b23112800b7768d2dd2f5134a69214abc7335624fd4456af4e8"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00179ab00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00179aae0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 21:13:18.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-init-container-99mnb" for this suite. +Jun 23 21:13:40.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 21:13:40.600: INFO: namespace: e2e-tests-init-container-99mnb, resource: bindings, ignored listing per whitelist +Jun 23 21:13:40.670: INFO: namespace e2e-tests-init-container-99mnb deletion completed in 22.092791157s + +• [SLOW TEST:72.522 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should not start app containers if init containers fail on a RestartAlways pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 21:13:40.671: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating 
configMap with name cm-test-opt-del-c6484e86-95fb-11e9-9086-ba438756bc32 +STEP: Creating configMap with name cm-test-opt-upd-c6484f21-95fb-11e9-9086-ba438756bc32 +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-c6484e86-95fb-11e9-9086-ba438756bc32 +STEP: Updating configmap cm-test-opt-upd-c6484f21-95fb-11e9-9086-ba438756bc32 +STEP: Creating configMap with name cm-test-opt-create-c6484f63-95fb-11e9-9086-ba438756bc32 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 21:15:07.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-cw28l" for this suite. +Jun 23 21:15:29.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 21:15:29.396: INFO: namespace: e2e-tests-projected-cw28l, resource: bindings, ignored listing per whitelist +Jun 23 21:15:29.415: INFO: namespace e2e-tests-projected-cw28l deletion completed in 22.093889378s + +• [SLOW TEST:108.745 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 21:15:29.416: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Jun 23 21:15:29.487: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 21:15:30.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-custom-resource-definition-4xb4p" for this suite. 
+Jun 23 21:15:36.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 21:15:36.601: INFO: namespace: e2e-tests-custom-resource-definition-4xb4p, resource: bindings, ignored listing per whitelist +Jun 23 21:15:36.654: INFO: namespace e2e-tests-custom-resource-definition-4xb4p deletion completed in 6.098048769s + +• [SLOW TEST:7.239 seconds] +[sig-api-machinery] CustomResourceDefinition resources +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + Simple CustomResourceDefinition + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 + creating/deleting custom resource definition objects works [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 21:15:36.655: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Jun 23 21:15:36.743: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"0b6bdaeb-95fc-11e9-8956-98039b22fc2c", Controller:(*bool)(0xc001a7ccd6), BlockOwnerDeletion:(*bool)(0xc001a7ccd7)}} +Jun 23 21:15:36.752: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"0b6aae4b-95fc-11e9-8956-98039b22fc2c", Controller:(*bool)(0xc0019d06a6), BlockOwnerDeletion:(*bool)(0xc0019d06a7)}} +Jun 23 21:15:36.757: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0b6b3371-95fc-11e9-8956-98039b22fc2c", Controller:(*bool)(0xc001a7ceda), BlockOwnerDeletion:(*bool)(0xc001a7cedb)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 21:15:41.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-gc-lnsbn" for this suite. 
+Jun 23 21:15:47.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 21:15:47.846: INFO: namespace: e2e-tests-gc-lnsbn, resource: bindings, ignored listing per whitelist +Jun 23 21:15:47.863: INFO: namespace e2e-tests-gc-lnsbn deletion completed in 6.091534064s + +• [SLOW TEST:11.209 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 21:15:47.864: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should run and stop simple daemon [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Jun 23 21:15:47.959: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:47.961: INFO: Number of nodes with available pods: 0 +Jun 23 21:15:47.961: INFO: Node minion is running more than one daemon pod +Jun 23 21:15:48.966: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:48.969: INFO: Number of nodes with available pods: 0 +Jun 23 21:15:48.969: INFO: Node minion is running more than one daemon pod +Jun 23 21:15:49.966: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:49.969: INFO: Number of nodes with available pods: 0 +Jun 23 21:15:49.969: INFO: Node minion is running more than one daemon pod +Jun 23 21:15:50.966: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:50.969: INFO: Number of nodes with available pods: 0 +Jun 23 21:15:50.969: INFO: Node minion is running more than one daemon pod +Jun 23 21:15:51.966: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:51.970: INFO: Number of nodes with available pods: 1 +Jun 23 21:15:51.970: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Stop a daemon pod, check that the daemon pod is revived. +Jun 23 21:15:51.986: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:51.989: INFO: Number of nodes with available pods: 0 +Jun 23 21:15:51.989: INFO: Node minion is running more than one daemon pod +Jun 23 21:15:52.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:52.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:15:52.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:15:53.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:53.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:15:53.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:15:54.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:54.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:15:54.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:15:55.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:55.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:15:55.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:15:56.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:56.996: INFO: Number of nodes with 
available pods: 0 +Jun 23 21:15:56.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:15:57.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:57.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:15:57.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:15:58.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:58.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:15:58.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:15:59.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:15:59.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:15:59.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:16:00.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:16:00.995: INFO: Number of nodes with available pods: 0 +Jun 23 21:16:00.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:16:01.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:16:01.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:16:01.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:16:02.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:16:02.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:16:02.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:16:03.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:16:03.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:16:03.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:16:04.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:16:04.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:16:04.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:16:05.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:16:05.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:16:05.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:16:06.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:16:06.996: INFO: Number of nodes with available pods: 0 +Jun 23 21:16:06.996: INFO: Node minion is running more than one daemon pod +Jun 23 21:16:07.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 23 21:16:07.995: INFO: 
Number of nodes with available pods: 0
+Jun 23 21:16:07.995: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:08.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:08.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:08.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:09.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:09.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:09.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:10.992: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:10.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:10.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:11.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:11.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:11.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:12.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:12.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:12.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:13.992: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:13.995: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:13.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:14.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:14.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:14.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:15.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:15.995: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:15.995: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:16.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:16.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:16.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:17.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:17.995: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:17.995: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:18.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:18.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:18.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:19.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:19.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:19.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:20.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:20.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:20.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:21.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:21.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:21.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:22.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:22.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:22.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:23.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:23.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:23.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:24.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:24.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:24.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:25.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:25.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:25.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:26.994: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:26.997: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:26.997: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:27.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:27.996: INFO: Number of nodes with available pods: 0
+Jun 23 21:16:27.996: INFO: Node minion is running more than one daemon pod
+Jun 23 21:16:28.993: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:16:28.996: INFO: Number of nodes with available pods: 1
+Jun 23 21:16:28.996: INFO: Number of running nodes: 1, number of available pods: 1
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-xhlch, will wait for the garbage collector to delete the pods
+Jun 23 21:16:29.058: INFO: Deleting DaemonSet.extensions daemon-set took: 5.896966ms
+Jun 23 21:16:29.158: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.203769ms
+Jun 23 21:17:13.861: INFO: Number of nodes with available pods: 0
+Jun 23 21:17:13.861: INFO: Number of running nodes: 0, number of available pods: 0
+Jun 23 21:17:13.868: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-xhlch/daemonsets","resourceVersion":"2388"},"items":null}
+
+Jun 23 21:17:13.871: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-xhlch/pods","resourceVersion":"2388"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:17:13.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-daemonsets-xhlch" for this suite.
+Jun 23 21:17:19.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:17:19.903: INFO: namespace: e2e-tests-daemonsets-xhlch, resource: bindings, ignored listing per whitelist
+Jun 23 21:17:19.972: INFO: namespace e2e-tests-daemonsets-xhlch deletion completed in 6.091358696s
+
+• [SLOW TEST:92.109 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should run and stop simple daemon [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSS
+------------------------------
+[sig-apps] ReplicaSet
+  should adopt matching pods on creation and release no longer matching pods [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] ReplicaSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:17:19.972: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename replicaset
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Given a Pod with a 'name' label pod-adoption-release is created
+STEP: When a replicaset with a matching selector is created
+STEP: Then the orphan pod is adopted
+STEP: When the matched label of one of its pods change
+Jun 23 21:17:27.074: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
+STEP: Then the pod is released
+[AfterEach] [sig-apps] ReplicaSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:17:28.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-replicaset-j2kg6" for this suite.
+Jun 23 21:17:50.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:17:50.143: INFO: namespace: e2e-tests-replicaset-j2kg6, resource: bindings, ignored listing per whitelist
+Jun 23 21:17:50.183: INFO: namespace e2e-tests-replicaset-j2kg6 deletion completed in 22.091745172s
+
+• [SLOW TEST:30.211 seconds]
+[sig-apps] ReplicaSet
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should adopt matching pods on creation and release no longer matching pods [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Docker Containers
+  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:17:50.183: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test override all
+Jun 23 21:17:50.262: INFO: Waiting up to 5m0s for pod "client-containers-5b0060df-95fc-11e9-9086-ba438756bc32" in namespace "e2e-tests-containers-pgksk" to be "success or failure"
+Jun 23 21:17:50.265: INFO: Pod "client-containers-5b0060df-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.668065ms
+Jun 23 21:17:52.270: INFO: Pod "client-containers-5b0060df-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007987972s
+Jun 23 21:17:54.274: INFO: Pod "client-containers-5b0060df-95fc-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011888243s
+STEP: Saw pod success
+Jun 23 21:17:54.274: INFO: Pod "client-containers-5b0060df-95fc-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:17:54.277: INFO: Trying to get logs from node minion pod client-containers-5b0060df-95fc-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 21:17:54.296: INFO: Waiting for pod client-containers-5b0060df-95fc-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:17:54.302: INFO: Pod client-containers-5b0060df-95fc-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:17:54.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-containers-pgksk" for this suite.
+Jun 23 21:18:00.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:18:00.367: INFO: namespace: e2e-tests-containers-pgksk, resource: bindings, ignored listing per whitelist
+Jun 23 21:18:00.408: INFO: namespace e2e-tests-containers-pgksk deletion completed in 6.102843226s
+
+• [SLOW TEST:10.225 seconds]
+[k8s.io] Docker Containers
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected secret
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:18:00.408: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name s-test-opt-del-6119a3a3-95fc-11e9-9086-ba438756bc32
+STEP: Creating secret with name s-test-opt-upd-6119a42a-95fc-11e9-9086-ba438756bc32
+STEP: Creating the pod
+STEP: Deleting secret s-test-opt-del-6119a3a3-95fc-11e9-9086-ba438756bc32
+STEP: Updating secret s-test-opt-upd-6119a42a-95fc-11e9-9086-ba438756bc32
+STEP: Creating secret with name s-test-opt-create-6119a468-95fc-11e9-9086-ba438756bc32
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:18:08.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-qvv5f" for this suite.
+Jun 23 21:18:30.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:18:30.653: INFO: namespace: e2e-tests-projected-qvv5f, resource: bindings, ignored listing per whitelist
+Jun 23 21:18:30.702: INFO: namespace e2e-tests-projected-qvv5f deletion completed in 22.09249861s
+
+• [SLOW TEST:30.293 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1
+  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] version v1
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:18:30.702: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename proxy
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 21:18:30.784: INFO: (0) /api/v1/nodes/minion:10250/proxy/logs/:
+alternatives.log
+apt/
+... (200; 7.343121ms)
+Jun 23 21:18:30.789: INFO: (1) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.911066ms)
+Jun 23 21:18:30.794: INFO: (2) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.391688ms)
+Jun 23 21:18:30.798: INFO: (3) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.422587ms)
+Jun 23 21:18:30.803: INFO: (4) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.387325ms)
+Jun 23 21:18:30.807: INFO: (5) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.158394ms)
+Jun 23 21:18:30.813: INFO: (6) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 6.33823ms)
+Jun 23 21:18:30.818: INFO: (7) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.711834ms)
+Jun 23 21:18:30.822: INFO: (8) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.39891ms)
+Jun 23 21:18:30.827: INFO: (9) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.332831ms)
+Jun 23 21:18:30.831: INFO: (10) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.140988ms)
+Jun 23 21:18:30.835: INFO: (11) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.213712ms)
+Jun 23 21:18:30.839: INFO: (12) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.123218ms)
+Jun 23 21:18:30.843: INFO: (13) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.07272ms)
+Jun 23 21:18:30.847: INFO: (14) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.119352ms)
+Jun 23 21:18:30.851: INFO: (15) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 3.919358ms)
+Jun 23 21:18:30.856: INFO: (16) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.474879ms)
+Jun 23 21:18:30.860: INFO: (17) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.322887ms)
+Jun 23 21:18:30.864: INFO: (18) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.109838ms)
+Jun 23 21:18:30.868: INFO: (19) /api/v1/nodes/minion:10250/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 3.976551ms)
+[AfterEach] version v1
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:18:30.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-proxy-tk8mc" for this suite.
+Jun 23 21:18:36.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:18:36.942: INFO: namespace: e2e-tests-proxy-tk8mc, resource: bindings, ignored listing per whitelist
+Jun 23 21:18:36.968: INFO: namespace e2e-tests-proxy-tk8mc deletion completed in 6.096066478s
+
+• [SLOW TEST:6.266 seconds]
+[sig-network] Proxy
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  version v1
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
+    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:18:36.968: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 21:18:37.041: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76e259b7-95fc-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-x2jw2" to be "success or failure"
+Jun 23 21:18:37.044: INFO: Pod "downwardapi-volume-76e259b7-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.889951ms
+Jun 23 21:18:39.048: INFO: Pod "downwardapi-volume-76e259b7-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00656554s
+Jun 23 21:18:41.052: INFO: Pod "downwardapi-volume-76e259b7-95fc-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010177691s
+STEP: Saw pod success
+Jun 23 21:18:41.052: INFO: Pod "downwardapi-volume-76e259b7-95fc-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:18:41.055: INFO: Trying to get logs from node minion pod downwardapi-volume-76e259b7-95fc-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 21:18:41.078: INFO: Waiting for pod downwardapi-volume-76e259b7-95fc-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:18:41.081: INFO: Pod downwardapi-volume-76e259b7-95fc-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:18:41.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-x2jw2" for this suite.
+Jun 23 21:18:47.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:18:47.118: INFO: namespace: e2e-tests-projected-x2jw2, resource: bindings, ignored listing per whitelist
+Jun 23 21:18:47.179: INFO: namespace e2e-tests-projected-x2jw2 deletion completed in 6.094325857s
+
+• [SLOW TEST:10.211 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl version 
+  should check is all data is printed  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:18:47.179: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[It] should check is all data is printed  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 21:18:47.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 version'
+Jun 23 21:18:47.392: INFO: stderr: ""
+Jun 23 21:18:47.392: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.0\", GitCommit:\"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d\", GitTreeState:\"clean\", BuildDate:\"2018-12-03T21:04:45Z\", GoVersion:\"go1.11.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.5\", GitCommit:\"2166946f41b36dea2c4626f90a77706f426cdea2\", GitTreeState:\"clean\", BuildDate:\"2019-03-25T15:19:22Z\", GoVersion:\"go1.11.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:18:47.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-x8rrw" for this suite.
+Jun 23 21:18:53.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:18:53.457: INFO: namespace: e2e-tests-kubectl-x8rrw, resource: bindings, ignored listing per whitelist
+Jun 23 21:18:53.488: INFO: namespace e2e-tests-kubectl-x8rrw deletion completed in 6.092013378s
+
+• [SLOW TEST:6.309 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl version
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should check is all data is printed  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
+  should create an rc from an image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:18:53.488: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[BeforeEach] [k8s.io] Kubectl run rc
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
+[It] should create an rc from an image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Jun 23 21:18:53.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-slczs'
+Jun 23 21:18:53.719: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun 23 21:18:53.719: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
+STEP: verifying the rc e2e-test-nginx-rc was created
+STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
+STEP: confirm that you can get logs from an rc
+Jun 23 21:18:53.726: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-b24tq]
+Jun 23 21:18:53.726: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-b24tq" in namespace "e2e-tests-kubectl-slczs" to be "running and ready"
+Jun 23 21:18:53.729: INFO: Pod "e2e-test-nginx-rc-b24tq": Phase="Pending", Reason="", readiness=false. Elapsed: 3.063158ms
+Jun 23 21:18:55.732: INFO: Pod "e2e-test-nginx-rc-b24tq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006544885s
+Jun 23 21:18:57.736: INFO: Pod "e2e-test-nginx-rc-b24tq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010010478s
+Jun 23 21:18:59.739: INFO: Pod "e2e-test-nginx-rc-b24tq": Phase="Running", Reason="", readiness=true. Elapsed: 6.013370036s
+Jun 23 21:18:59.739: INFO: Pod "e2e-test-nginx-rc-b24tq" satisfied condition "running and ready"
+Jun 23 21:18:59.739: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-b24tq]
+Jun 23 21:18:59.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 logs rc/e2e-test-nginx-rc --namespace=e2e-tests-kubectl-slczs'
+Jun 23 21:18:59.889: INFO: stderr: ""
+Jun 23 21:18:59.889: INFO: stdout: ""
+[AfterEach] [k8s.io] Kubectl run rc
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1303
+Jun 23 21:18:59.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-slczs'
+Jun 23 21:19:00.032: INFO: stderr: ""
+Jun 23 21:19:00.032: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:19:00.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-slczs" for this suite.
+Jun 23 21:19:22.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:19:22.084: INFO: namespace: e2e-tests-kubectl-slczs, resource: bindings, ignored listing per whitelist
+Jun 23 21:19:22.127: INFO: namespace e2e-tests-kubectl-slczs deletion completed in 22.091022241s
+
+• [SLOW TEST:28.639 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl run rc
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should create an rc from an image  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[sig-auth] ServiceAccounts 
+  should mount an API token into pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:19:22.127: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename svcaccounts
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should mount an API token into pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: getting the auto-created API token
+STEP: Creating a pod to test consume service account token
+Jun 23 21:19:22.709: INFO: Waiting up to 5m0s for pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-r92zj" in namespace "e2e-tests-svcaccounts-44cwk" to be "success or failure"
+Jun 23 21:19:22.712: INFO: Pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-r92zj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.708415ms
+Jun 23 21:19:24.716: INFO: Pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-r92zj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006357717s
+Jun 23 21:19:26.720: INFO: Pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-r92zj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01021898s
+STEP: Saw pod success
+Jun 23 21:19:26.720: INFO: Pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-r92zj" satisfied condition "success or failure"
+Jun 23 21:19:26.722: INFO: Trying to get logs from node minion pod pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-r92zj container token-test: 
+STEP: delete the pod
+Jun 23 21:19:26.740: INFO: Waiting for pod pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-r92zj to disappear
+Jun 23 21:19:26.746: INFO: Pod pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-r92zj no longer exists
+STEP: Creating a pod to test consume service account root CA
+Jun 23 21:19:26.749: INFO: Waiting up to 5m0s for pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-mlq8v" in namespace "e2e-tests-svcaccounts-44cwk" to be "success or failure"
+Jun 23 21:19:26.752: INFO: Pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-mlq8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.678684ms
+Jun 23 21:19:28.756: INFO: Pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-mlq8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006235898s
+Jun 23 21:19:30.759: INFO: Pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-mlq8v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009773479s
+STEP: Saw pod success
+Jun 23 21:19:30.759: INFO: Pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-mlq8v" satisfied condition "success or failure"
+Jun 23 21:19:30.762: INFO: Trying to get logs from node minion pod pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-mlq8v container root-ca-test: 
+STEP: delete the pod
+Jun 23 21:19:30.784: INFO: Waiting for pod pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-mlq8v to disappear
+Jun 23 21:19:30.789: INFO: Pod pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-mlq8v no longer exists
+STEP: Creating a pod to test consume service account namespace
+Jun 23 21:19:30.793: INFO: Waiting up to 5m0s for pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-z44rk" in namespace "e2e-tests-svcaccounts-44cwk" to be "success or failure"
+Jun 23 21:19:30.796: INFO: Pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-z44rk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.856795ms
+Jun 23 21:19:32.800: INFO: Pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-z44rk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006319579s
+Jun 23 21:19:34.803: INFO: Pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-z44rk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00982566s
+STEP: Saw pod success
+Jun 23 21:19:34.803: INFO: Pod "pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-z44rk" satisfied condition "success or failure"
+Jun 23 21:19:34.806: INFO: Trying to get logs from node minion pod pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-z44rk container namespace-test: 
+STEP: delete the pod
+Jun 23 21:19:34.827: INFO: Waiting for pod pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-z44rk to disappear
+Jun 23 21:19:34.832: INFO: Pod pod-service-account-921aaff6-95fc-11e9-9086-ba438756bc32-z44rk no longer exists
+[AfterEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:19:34.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-svcaccounts-44cwk" for this suite.
+Jun 23 21:19:40.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:19:40.932: INFO: namespace: e2e-tests-svcaccounts-44cwk, resource: bindings, ignored listing per whitelist
+Jun 23 21:19:40.939: INFO: namespace e2e-tests-svcaccounts-44cwk deletion completed in 6.10312756s
+
+• [SLOW TEST:18.812 seconds]
+[sig-auth] ServiceAccounts
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
+  should mount an API token into pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:19:40.939: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating projection with secret that has name projected-secret-test-map-9d040f4f-95fc-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 21:19:41.018: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9d048cb1-95fc-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-gjck8" to be "success or failure"
+Jun 23 21:19:41.021: INFO: Pod "pod-projected-secrets-9d048cb1-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.809619ms
+Jun 23 21:19:43.025: INFO: Pod "pod-projected-secrets-9d048cb1-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006253226s
+Jun 23 21:19:45.028: INFO: Pod "pod-projected-secrets-9d048cb1-95fc-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009782624s
+STEP: Saw pod success
+Jun 23 21:19:45.028: INFO: Pod "pod-projected-secrets-9d048cb1-95fc-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:19:45.031: INFO: Trying to get logs from node minion pod pod-projected-secrets-9d048cb1-95fc-11e9-9086-ba438756bc32 container projected-secret-volume-test: 
+STEP: delete the pod
+Jun 23 21:19:45.048: INFO: Waiting for pod pod-projected-secrets-9d048cb1-95fc-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:19:45.051: INFO: Pod pod-projected-secrets-9d048cb1-95fc-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:19:45.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-gjck8" for this suite.
+Jun 23 21:19:51.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:19:51.112: INFO: namespace: e2e-tests-projected-gjck8, resource: bindings, ignored listing per whitelist
+Jun 23 21:19:51.146: INFO: namespace e2e-tests-projected-gjck8 deletion completed in 6.091336435s
+
+• [SLOW TEST:10.206 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[k8s.io] Pods 
+  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:19:51.146: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
+[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+STEP: updating the pod
+Jun 23 21:19:55.753: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a31b32e0-95fc-11e9-9086-ba438756bc32"
+Jun 23 21:19:55.753: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a31b32e0-95fc-11e9-9086-ba438756bc32" in namespace "e2e-tests-pods-94hr5" to be "terminated due to deadline exceeded"
+Jun 23 21:19:55.756: INFO: Pod "pod-update-activedeadlineseconds-a31b32e0-95fc-11e9-9086-ba438756bc32": Phase="Running", Reason="", readiness=true. Elapsed: 2.968607ms
+Jun 23 21:19:57.759: INFO: Pod "pod-update-activedeadlineseconds-a31b32e0-95fc-11e9-9086-ba438756bc32": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.006195747s
+Jun 23 21:19:57.759: INFO: Pod "pod-update-activedeadlineseconds-a31b32e0-95fc-11e9-9086-ba438756bc32" satisfied condition "terminated due to deadline exceeded"
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:19:57.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pods-94hr5" for this suite.
+Jun 23 21:20:03.774: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:20:03.839: INFO: namespace: e2e-tests-pods-94hr5, resource: bindings, ignored listing per whitelist
+Jun 23 21:20:03.861: INFO: namespace e2e-tests-pods-94hr5 deletion completed in 6.097852147s
+
+• [SLOW TEST:12.715 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-storage] Downward API volume 
+  should set mode on item file [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:20:03.861: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should set mode on item file [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 21:20:03.938: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aaadc963-95fc-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-fqgws" to be "success or failure"
+Jun 23 21:20:03.941: INFO: Pod "downwardapi-volume-aaadc963-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.989062ms
+Jun 23 21:20:05.945: INFO: Pod "downwardapi-volume-aaadc963-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006575091s
+Jun 23 21:20:07.948: INFO: Pod "downwardapi-volume-aaadc963-95fc-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010017293s
+STEP: Saw pod success
+Jun 23 21:20:07.948: INFO: Pod "downwardapi-volume-aaadc963-95fc-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:20:07.951: INFO: Trying to get logs from node minion pod downwardapi-volume-aaadc963-95fc-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 21:20:07.968: INFO: Waiting for pod downwardapi-volume-aaadc963-95fc-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:20:07.974: INFO: Pod downwardapi-volume-aaadc963-95fc-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:20:07.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-fqgws" for this suite.
+Jun 23 21:20:13.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:20:14.025: INFO: namespace: e2e-tests-downward-api-fqgws, resource: bindings, ignored listing per whitelist
+Jun 23 21:20:14.069: INFO: namespace e2e-tests-downward-api-fqgws deletion completed in 6.091987166s
+
+• [SLOW TEST:10.208 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should set mode on item file [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:20:14.069: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0777,tmpfs) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test emptydir 0777 on tmpfs
+Jun 23 21:20:14.146: INFO: Waiting up to 5m0s for pod "pod-b0c36441-95fc-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-ppjf4" to be "success or failure"
+Jun 23 21:20:14.149: INFO: Pod "pod-b0c36441-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.866093ms
+Jun 23 21:20:16.152: INFO: Pod "pod-b0c36441-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006666938s
+Jun 23 21:20:18.156: INFO: Pod "pod-b0c36441-95fc-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010186339s
+STEP: Saw pod success
+Jun 23 21:20:18.156: INFO: Pod "pod-b0c36441-95fc-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:20:18.159: INFO: Trying to get logs from node minion pod pod-b0c36441-95fc-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 21:20:18.176: INFO: Waiting for pod pod-b0c36441-95fc-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:20:18.181: INFO: Pod pod-b0c36441-95fc-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:20:18.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-ppjf4" for this suite.
+Jun 23 21:20:24.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:20:24.271: INFO: namespace: e2e-tests-emptydir-ppjf4, resource: bindings, ignored listing per whitelist
+Jun 23 21:20:24.275: INFO: namespace e2e-tests-emptydir-ppjf4 deletion completed in 6.090570109s
+
+• [SLOW TEST:10.206 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
+  should support (root,0777,tmpfs) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSS
+------------------------------
+[sig-storage] Downward API volume 
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:20:24.276: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating the pod
+Jun 23 21:20:28.883: INFO: Successfully updated pod "annotationupdateb6d8a915-95fc-11e9-9086-ba438756bc32"
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:20:30.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-hvz2l" for this suite.
+Jun 23 21:20:52.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:20:52.961: INFO: namespace: e2e-tests-downward-api-hvz2l, resource: bindings, ignored listing per whitelist
+Jun 23 21:20:53.000: INFO: namespace e2e-tests-downward-api-hvz2l deletion completed in 22.093215476s
+
+• [SLOW TEST:28.724 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should set DefaultMode on files [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:20:53.000: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should set DefaultMode on files [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 21:20:53.081: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7f8655b-95fc-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-99gmn" to be "success or failure"
+Jun 23 21:20:53.084: INFO: Pod "downwardapi-volume-c7f8655b-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.697342ms
+Jun 23 21:20:55.087: INFO: Pod "downwardapi-volume-c7f8655b-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006460996s
+Jun 23 21:20:57.091: INFO: Pod "downwardapi-volume-c7f8655b-95fc-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010359111s
+STEP: Saw pod success
+Jun 23 21:20:57.091: INFO: Pod "downwardapi-volume-c7f8655b-95fc-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:20:57.094: INFO: Trying to get logs from node minion pod downwardapi-volume-c7f8655b-95fc-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 21:20:57.112: INFO: Waiting for pod downwardapi-volume-c7f8655b-95fc-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:20:57.115: INFO: Pod downwardapi-volume-c7f8655b-95fc-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:20:57.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-99gmn" for this suite.
+Jun 23 21:21:03.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:21:03.183: INFO: namespace: e2e-tests-projected-99gmn, resource: bindings, ignored listing per whitelist
+Jun 23 21:21:03.210: INFO: namespace e2e-tests-projected-99gmn deletion completed in 6.091742715s
+
+• [SLOW TEST:10.210 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should set DefaultMode on files [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSS
+------------------------------
+[sig-auth] ServiceAccounts 
+  should allow opting out of API token automount  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:21:03.211: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename svcaccounts
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow opting out of API token automount  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: getting the auto-created API token
+Jun 23 21:21:03.801: INFO: created pod pod-service-account-defaultsa
+Jun 23 21:21:03.801: INFO: pod pod-service-account-defaultsa service account token volume mount: true
+Jun 23 21:21:03.805: INFO: created pod pod-service-account-mountsa
+Jun 23 21:21:03.805: INFO: pod pod-service-account-mountsa service account token volume mount: true
+Jun 23 21:21:03.814: INFO: created pod pod-service-account-nomountsa
+Jun 23 21:21:03.814: INFO: pod pod-service-account-nomountsa service account token volume mount: false
+Jun 23 21:21:03.823: INFO: created pod pod-service-account-defaultsa-mountspec
+Jun 23 21:21:03.823: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
+Jun 23 21:21:03.827: INFO: created pod pod-service-account-mountsa-mountspec
+Jun 23 21:21:03.827: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
+Jun 23 21:21:03.835: INFO: created pod pod-service-account-nomountsa-mountspec
+Jun 23 21:21:03.835: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
+Jun 23 21:21:03.844: INFO: created pod pod-service-account-defaultsa-nomountspec
+Jun 23 21:21:03.844: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
+Jun 23 21:21:03.852: INFO: created pod pod-service-account-mountsa-nomountspec
+Jun 23 21:21:03.852: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
+Jun 23 21:21:03.860: INFO: created pod pod-service-account-nomountsa-nomountspec
+Jun 23 21:21:03.861: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
+[AfterEach] [sig-auth] ServiceAccounts
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:21:03.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-svcaccounts-gd75p" for this suite.
+Jun 23 21:21:25.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:21:25.914: INFO: namespace: e2e-tests-svcaccounts-gd75p, resource: bindings, ignored listing per whitelist
+Jun 23 21:21:25.963: INFO: namespace e2e-tests-svcaccounts-gd75p deletion completed in 22.098501424s
+
+• [SLOW TEST:22.753 seconds]
+[sig-auth] ServiceAccounts
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22
+  should allow opting out of API token automount  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
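+The nine pods created above cover the combinations of ServiceAccount-level and pod-level automount settings; when the pod-level field is set, it wins. A minimal sketch of the opt-out (resource names are illustrative):
+
+```yaml
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: nomount-sa
+automountServiceAccountToken: false    # default for pods using this SA
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-nomountsa
+spec:
+  serviceAccountName: nomount-sa
+  automountServiceAccountToken: false  # pod-level setting overrides the SA
+  containers:
+  - name: main
+    image: busybox
+    command: ["sleep", "3600"]
+```
+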
+SS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0666,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:21:25.964: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0666,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test emptydir 0666 on node default medium
+Jun 23 21:21:26.043: INFO: Waiting up to 5m0s for pod "pod-db9dfa40-95fc-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-gl5dn" to be "success or failure"
+Jun 23 21:21:26.046: INFO: Pod "pod-db9dfa40-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 3.045738ms
+Jun 23 21:21:28.050: INFO: Pod "pod-db9dfa40-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006671277s
+Jun 23 21:21:30.054: INFO: Pod "pod-db9dfa40-95fc-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010364719s
+STEP: Saw pod success
+Jun 23 21:21:30.054: INFO: Pod "pod-db9dfa40-95fc-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:21:30.057: INFO: Trying to get logs from node minion pod pod-db9dfa40-95fc-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 21:21:30.074: INFO: Waiting for pod pod-db9dfa40-95fc-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:21:30.080: INFO: Pod pod-db9dfa40-95fc-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:21:30.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-gl5dn" for this suite.
+Jun 23 21:21:36.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:21:36.160: INFO: namespace: e2e-tests-emptydir-gl5dn, resource: bindings, ignored listing per whitelist
+Jun 23 21:21:36.175: INFO: namespace e2e-tests-emptydir-gl5dn deletion completed in 6.091533453s
+
+• [SLOW TEST:10.211 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
+  should support (root,0666,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
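+The "(root,0666,default)" variant means the test container runs as root, expects files with mode 0666, and uses the default emptyDir medium (node disk rather than tmpfs). A hand-written equivalent might look like this sketch (image and paths are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-0666-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox
+    # create a world-writable file and show its mode
+    command: ["sh", "-c", "umask 0000 && touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
+    volumeMounts:
+    - name: test-volume
+      mountPath: /test-volume
+  volumes:
+  - name: test-volume
+    emptyDir: {}          # default medium; add `medium: Memory` for tmpfs
+```
+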
+[sig-apps] Deployment 
+  RecreateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:21:36.175: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
+[It] RecreateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 21:21:36.246: INFO: Creating deployment "test-recreate-deployment"
+Jun 23 21:21:36.250: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
+Jun 23 21:21:36.256: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created
+Jun 23 21:21:38.263: INFO: Waiting deployment "test-recreate-deployment" to complete
+Jun 23 21:21:38.266: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696921696, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696921696, loc:(*time.Location)(0x7b33b80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696921696, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696921696, loc:(*time.Location)(0x7b33b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-5dfdcc846d\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun 23 21:21:40.269: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
+Jun 23 21:21:40.276: INFO: Updating deployment test-recreate-deployment
+Jun 23 21:21:40.276: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
+Jun 23 21:21:40.336: INFO: Deployment "test-recreate-deployment":
+&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:e2e-tests-deployment-l9l9w,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l9l9w/deployments/test-recreate-deployment,UID:e1b4f4e4-95fc-11e9-8956-98039b22fc2c,ResourceVersion:3359,Generation:2,CreationTimestamp:2019-06-23 21:21:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-06-23 21:21:40 +0000 UTC 2019-06-23 21:21:40 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-06-23 21:21:40 +0000 UTC 2019-06-23 21:21:36 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-697fbf54bf" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
+
+Jun 23 21:21:40.340: INFO: New ReplicaSet "test-recreate-deployment-697fbf54bf" of Deployment "test-recreate-deployment":
+&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-697fbf54bf,GenerateName:,Namespace:e2e-tests-deployment-l9l9w,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l9l9w/replicasets/test-recreate-deployment-697fbf54bf,UID:e41f3e60-95fc-11e9-8956-98039b22fc2c,ResourceVersion:3357,Generation:1,CreationTimestamp:2019-06-23 21:21:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e1b4f4e4-95fc-11e9-8956-98039b22fc2c 0xc002442e27 0xc002442e28}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
+Jun 23 21:21:40.341: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
+Jun 23 21:21:40.341: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5dfdcc846d,GenerateName:,Namespace:e2e-tests-deployment-l9l9w,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-l9l9w/replicasets/test-recreate-deployment-5dfdcc846d,UID:e1b67236-95fc-11e9-8956-98039b22fc2c,ResourceVersion:3348,Generation:2,CreationTimestamp:2019-06-23 21:21:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5dfdcc846d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment e1b4f4e4-95fc-11e9-8956-98039b22fc2c 0xc002442d07 0xc002442d08}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5dfdcc846d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5dfdcc846d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
+Jun 23 21:21:40.344: INFO: Pod "test-recreate-deployment-697fbf54bf-rh2cn" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-697fbf54bf-rh2cn,GenerateName:test-recreate-deployment-697fbf54bf-,Namespace:e2e-tests-deployment-l9l9w,SelfLink:/api/v1/namespaces/e2e-tests-deployment-l9l9w/pods/test-recreate-deployment-697fbf54bf-rh2cn,UID:e41fc0cd-95fc-11e9-8956-98039b22fc2c,ResourceVersion:3360,Generation:0,CreationTimestamp:2019-06-23 21:21:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 697fbf54bf,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-697fbf54bf e41f3e60-95fc-11e9-8956-98039b22fc2c 0xc002443957 0xc002443958}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-jwqkc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-jwqkc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-jwqkc true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024439d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024439f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:21:40 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:21:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:21:40 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:21:40 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:,StartTime:2019-06-23 21:21:40 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:21:40.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-deployment-l9l9w" for this suite.
+Jun 23 21:21:46.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:21:46.411: INFO: namespace: e2e-tests-deployment-l9l9w, resource: bindings, ignored listing per whitelist
+Jun 23 21:21:46.443: INFO: namespace e2e-tests-deployment-l9l9w deletion completed in 6.09428935s
+
+• [SLOW TEST:10.268 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  RecreateDeployment should delete old pods and create new ones [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
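+The Recreate strategy visible in the dump above (`Strategy:DeploymentStrategy{Type:Recreate,...}`) scales the old ReplicaSet to zero before the new one starts, which is why the test watches for old and new pods never running together. A minimal Deployment using it (name is illustrative; the images match the log):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: recreate-demo
+spec:
+  replicas: 1
+  strategy:
+    type: Recreate        # terminate all old pods before creating new ones
+  selector:
+    matchLabels:
+      name: sample-pod-3
+  template:
+    metadata:
+      labels:
+        name: sample-pod-3
+    spec:
+      containers:
+      - name: nginx
+        image: docker.io/library/nginx:1.14-alpine
+```
+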
+SSSSSSSSS
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute poststart http hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:21:46.443: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute poststart http hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: create the pod with lifecycle hook
+STEP: check poststart hook
+STEP: delete the pod with lifecycle hook
+Jun 23 21:21:54.555: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Jun 23 21:21:54.558: INFO: Pod pod-with-poststart-http-hook still exists
+Jun 23 21:21:56.558: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Jun 23 21:21:56.562: INFO: Pod pod-with-poststart-http-hook still exists
+Jun 23 21:21:58.558: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
+Jun 23 21:21:58.562: INFO: Pod pod-with-poststart-http-hook no longer exists
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:21:58.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-lpc2s" for this suite.
+Jun 23 21:22:20.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:22:20.630: INFO: namespace: e2e-tests-container-lifecycle-hook-lpc2s, resource: bindings, ignored listing per whitelist
+Jun 23 21:22:20.657: INFO: namespace e2e-tests-container-lifecycle-hook-lpc2s deletion completed in 22.091802046s
+
+• [SLOW TEST:34.214 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
+    should execute poststart http hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
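+The hook pod above only passes once its postStart HTTP handler has been called. In manifest form the hook looks roughly like this sketch (the handler host and port are placeholders; the e2e test points them at a helper pod it creates first):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-with-poststart-http-hook
+spec:
+  containers:
+  - name: main
+    image: nginx:1.14-alpine
+    lifecycle:
+      postStart:
+        httpGet:            # kubelet issues this GET right after container start
+          host: 10.244.0.10 # placeholder: address of the pod handling the hook
+          port: 8080
+          path: /echo?msg=poststart
+```
+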
+SSSSSS
+------------------------------
+[k8s.io] Docker Containers 
+  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:22:20.658: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test override command
+Jun 23 21:22:20.734: INFO: Waiting up to 5m0s for pod "client-containers-fc37217f-95fc-11e9-9086-ba438756bc32" in namespace "e2e-tests-containers-9fzmv" to be "success or failure"
+Jun 23 21:22:20.737: INFO: Pod "client-containers-fc37217f-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 3.013827ms
+Jun 23 21:22:22.740: INFO: Pod "client-containers-fc37217f-95fc-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006395627s
+Jun 23 21:22:24.744: INFO: Pod "client-containers-fc37217f-95fc-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009985727s
+STEP: Saw pod success
+Jun 23 21:22:24.744: INFO: Pod "client-containers-fc37217f-95fc-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:22:24.746: INFO: Trying to get logs from node minion pod client-containers-fc37217f-95fc-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 21:22:24.764: INFO: Waiting for pod client-containers-fc37217f-95fc-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:22:24.769: INFO: Pod client-containers-fc37217f-95fc-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:22:24.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-containers-9fzmv" for this suite.
+Jun 23 21:22:30.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:22:30.853: INFO: namespace: e2e-tests-containers-9fzmv, resource: bindings, ignored listing per whitelist
+Jun 23 21:22:30.864: INFO: namespace e2e-tests-containers-9fzmv deletion completed in 6.090934861s
+
+• [SLOW TEST:10.206 seconds]
+[k8s.io] Docker Containers
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
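+Overriding "the image's default command (docker entrypoint)" is done with the pod-spec `command` field, while `args` overrides the image CMD. A sketch with illustrative names:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: override-entrypoint-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["/bin/echo"]            # replaces the image ENTRYPOINT
+    args: ["overridden", "arguments"] # replaces the image CMD
+```
+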
+SS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:22:30.864: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name secret-test-024c9a5a-95fd-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 21:22:30.944: INFO: Waiting up to 5m0s for pod "pod-secrets-024d1c91-95fd-11e9-9086-ba438756bc32" in namespace "e2e-tests-secrets-vwm84" to be "success or failure"
+Jun 23 21:22:30.947: INFO: Pod "pod-secrets-024d1c91-95fd-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.694386ms
+Jun 23 21:22:32.950: INFO: Pod "pod-secrets-024d1c91-95fd-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006355808s
+Jun 23 21:22:34.954: INFO: Pod "pod-secrets-024d1c91-95fd-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009897848s
+STEP: Saw pod success
+Jun 23 21:22:34.954: INFO: Pod "pod-secrets-024d1c91-95fd-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:22:34.957: INFO: Trying to get logs from node minion pod pod-secrets-024d1c91-95fd-11e9-9086-ba438756bc32 container secret-volume-test: 
+STEP: delete the pod
+Jun 23 21:22:34.974: INFO: Waiting for pod pod-secrets-024d1c91-95fd-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:22:34.977: INFO: Pod pod-secrets-024d1c91-95fd-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:22:34.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-secrets-vwm84" for this suite.
+Jun 23 21:22:40.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:22:41.037: INFO: namespace: e2e-tests-secrets-vwm84, resource: bindings, ignored listing per whitelist
+Jun 23 21:22:41.073: INFO: namespace e2e-tests-secrets-vwm84 deletion completed in 6.092308875s
+
+• [SLOW TEST:10.209 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
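+Consuming a secret from a volume, as the test does, takes a Secret plus a pod that mounts it (names and data below are illustrative, not the generated test values):
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: secret-test
+stringData:
+  data-1: value-1
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-secrets-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: secret-volume-test
+    image: busybox
+    command: ["cat", "/etc/secret-volume/data-1"]
+    volumeMounts:
+    - name: secret-volume
+      mountPath: /etc/secret-volume
+      readOnly: true
+  volumes:
+  - name: secret-volume
+    secret:
+      secretName: secret-test
+```
+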
+SSS
+------------------------------
+[sig-network] DNS 
+  should provide DNS for the cluster  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-network] DNS
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:22:41.073: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for the cluster  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-4plw7.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4plw7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4plw7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default;check="$$(dig +tcp +noall +answer +search kubernetes.default A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc;check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;test -n "$$(getent hosts dns-querier-1.dns-test-service.e2e-tests-dns-4plw7.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.e2e-tests-dns-4plw7.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-4plw7.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jun 23 21:22:57.240: INFO: DNS probes using e2e-tests-dns-4plw7/dns-test-086355bd-95fd-11e9-9086-ba438756bc32 succeeded
+
+STEP: deleting the pod
+[AfterEach] [sig-network] DNS
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:22:57.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-dns-4plw7" for this suite.
+Jun 23 21:23:03.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:23:03.288: INFO: namespace: e2e-tests-dns-4plw7, resource: bindings, ignored listing per whitelist
+Jun 23 21:23:03.348: INFO: namespace e2e-tests-dns-4plw7 deletion completed in 6.090462533s
+
+• [SLOW TEST:22.274 seconds]
+[sig-network] DNS
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  should provide DNS for the cluster  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
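+The long wheezy/jessie command strings above poll `kubernetes.default` and its fully qualified forms over both UDP and TCP until every lookup succeeds. A much smaller probe of the same records might look like this sketch (pod name and image are illustrative; busybox `nslookup` stands in for the test's `dig` loops):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: dns-probe-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: prober
+    image: busybox
+    command:
+    - sh
+    - -c
+    - nslookup kubernetes.default && nslookup kubernetes.default.svc.cluster.local
+```
+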
+SSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:23:03.348: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name s-test-opt-del-15aa0344-95fd-11e9-9086-ba438756bc32
+STEP: Creating secret with name s-test-opt-upd-15aa03bc-95fd-11e9-9086-ba438756bc32
+STEP: Creating the pod
+STEP: Deleting secret s-test-opt-del-15aa0344-95fd-11e9-9086-ba438756bc32
+STEP: Updating secret s-test-opt-upd-15aa03bc-95fd-11e9-9086-ba438756bc32
+STEP: Creating secret with name s-test-opt-create-15aa03f6-95fd-11e9-9086-ba438756bc32
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:23:11.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-secrets-zfk76" for this suite.
+Jun 23 21:23:33.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:23:33.609: INFO: namespace: e2e-tests-secrets-zfk76, resource: bindings, ignored listing per whitelist
+Jun 23 21:23:33.644: INFO: namespace e2e-tests-secrets-zfk76 deletion completed in 22.090725959s
+
+• [SLOW TEST:30.296 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
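+The "optional updates" test deletes one secret, updates another, and creates a third while the pod is running, then waits for the kubelet to re-sync the volume contents. The key field is `optional: true`, which lets the pod start (and keep running) even when the referenced secret is absent. A sketch with illustrative names:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: optional-secret-demo
+spec:
+  containers:
+  - name: main
+    image: busybox
+    command: ["sh", "-c", "while true; do cat /etc/opt-secret/* 2>/dev/null; sleep 5; done"]
+    volumeMounts:
+    - name: opt-secret
+      mountPath: /etc/opt-secret
+  volumes:
+  - name: opt-secret
+    secret:
+      secretName: s-test-opt-create  # may not exist yet
+      optional: true                 # volume mounts empty instead of failing
+```
+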
+SS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run default 
+  should create an rc or deployment from an image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:23:33.644: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[BeforeEach] [k8s.io] Kubectl run default
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1262
+[It] should create an rc or deployment from an image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Jun 23 21:23:33.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-mkq47'
+Jun 23 21:23:34.328: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun 23 21:23:34.328: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
+STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
+[AfterEach] [k8s.io] Kubectl run default
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1268
+Jun 23 21:23:34.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-mkq47'
+Jun 23 21:23:34.478: INFO: stderr: ""
+Jun 23 21:23:34.478: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:23:34.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-mkq47" for this suite.
+Jun 23 21:23:56.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:23:56.544: INFO: namespace: e2e-tests-kubectl-mkq47, resource: bindings, ignored listing per whitelist
+Jun 23 21:23:56.575: INFO: namespace e2e-tests-kubectl-mkq47 deletion completed in 22.093071168s
+
+• [SLOW TEST:22.931 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl run default
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should create an rc or deployment from an image  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
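+As the deprecation warning at 21:23:34 notes, `kubectl run --generator=deployment/apps.v1` creates a Deployment rather than a bare pod. The generated object is roughly equivalent to this manifest (a sketch following the generator's `run:` label convention, not a dump from the cluster):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: e2e-test-nginx-deployment
+  labels:
+    run: e2e-test-nginx-deployment
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      run: e2e-test-nginx-deployment
+  template:
+    metadata:
+      labels:
+        run: e2e-test-nginx-deployment
+    spec:
+      containers:
+      - name: e2e-test-nginx-deployment
+        image: docker.io/library/nginx:1.14-alpine
+```
+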
+SSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:23:56.575: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-test-volume-3563b511-95fd-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 21:23:56.659: INFO: Waiting up to 5m0s for pod "pod-configmaps-356437f1-95fd-11e9-9086-ba438756bc32" in namespace "e2e-tests-configmap-277mc" to be "success or failure"
+Jun 23 21:23:56.662: INFO: Pod "pod-configmaps-356437f1-95fd-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.613437ms
+Jun 23 21:23:58.666: INFO: Pod "pod-configmaps-356437f1-95fd-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006109519s
+Jun 23 21:24:00.669: INFO: Pod "pod-configmaps-356437f1-95fd-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009548211s
+STEP: Saw pod success
+Jun 23 21:24:00.669: INFO: Pod "pod-configmaps-356437f1-95fd-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:24:00.672: INFO: Trying to get logs from node minion pod pod-configmaps-356437f1-95fd-11e9-9086-ba438756bc32 container configmap-volume-test: 
+STEP: delete the pod
+Jun 23 21:24:00.688: INFO: Waiting for pod pod-configmaps-356437f1-95fd-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:24:00.694: INFO: Pod pod-configmaps-356437f1-95fd-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:24:00.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-277mc" for this suite.
+Jun 23 21:24:06.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:24:06.751: INFO: namespace: e2e-tests-configmap-277mc, resource: bindings, ignored listing per whitelist
+Jun 23 21:24:06.791: INFO: namespace e2e-tests-configmap-277mc deletion completed in 6.093501455s
+
+• [SLOW TEST:10.216 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
+  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
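+"As non-root" means the consuming container runs under a non-zero UID and must still be able to read the mounted ConfigMap files. A sketch (UID, names, and data are illustrative):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: configmap-test-volume
+data:
+  data-1: value-1
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-configmaps-demo
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1000          # non-root UID for every container in the pod
+  containers:
+  - name: configmap-volume-test
+    image: busybox
+    command: ["cat", "/etc/configmap-volume/data-1"]
+    volumeMounts:
+    - name: configmap-volume
+      mountPath: /etc/configmap-volume
+  volumes:
+  - name: configmap-volume
+    configMap:
+      name: configmap-test-volume
+```
+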
+S
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:24:06.791: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name projected-configmap-test-volume-3b7a20e7-95fd-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 21:24:06.873: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3b7aad8a-95fd-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-qqcb8" to be "success or failure"
+Jun 23 21:24:06.876: INFO: Pod "pod-projected-configmaps-3b7aad8a-95fd-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 3.175736ms
+Jun 23 21:24:08.880: INFO: Pod "pod-projected-configmaps-3b7aad8a-95fd-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007171998s
+Jun 23 21:24:10.884: INFO: Pod "pod-projected-configmaps-3b7aad8a-95fd-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010874965s
+STEP: Saw pod success
+Jun 23 21:24:10.884: INFO: Pod "pod-projected-configmaps-3b7aad8a-95fd-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:24:10.887: INFO: Trying to get logs from node minion pod pod-projected-configmaps-3b7aad8a-95fd-11e9-9086-ba438756bc32 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun 23 21:24:10.904: INFO: Waiting for pod pod-projected-configmaps-3b7aad8a-95fd-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:24:10.910: INFO: Pod pod-projected-configmaps-3b7aad8a-95fd-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:24:10.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-qqcb8" for this suite.
+Jun 23 21:24:16.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:24:16.981: INFO: namespace: e2e-tests-projected-qqcb8, resource: bindings, ignored listing per whitelist
+Jun 23 21:24:17.004: INFO: namespace e2e-tests-projected-qqcb8 deletion completed in 6.090154277s
+
+• [SLOW TEST:10.212 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
+  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
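+Like the downward-API case earlier in the log, this test sets `defaultMode` on a projected volume, this time sourcing a ConfigMap. A sketch (mode and names are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-configmap-mode-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: projected-configmap-volume-test
+    image: busybox
+    command: ["ls", "-l", "/etc/projected"]
+    volumeMounts:
+    - name: projected-volume
+      mountPath: /etc/projected
+  volumes:
+  - name: projected-volume
+    projected:
+      defaultMode: 0440      # applies to every file from every source below
+      sources:
+      - configMap:
+          name: projected-configmap-test-volume  # assumed to already exist
+```
+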
+SSSSSSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:24:17.004: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
+[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 21:24:17.100: INFO: Creating simple daemon set daemon-set
+STEP: Check that daemon pods launch on every node of the cluster.
+Jun 23 21:24:17.106: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:17.108: INFO: Number of nodes with available pods: 0
+Jun 23 21:24:17.108: INFO: Node minion is running more than one daemon pod
+Jun 23 21:24:18.113: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:18.116: INFO: Number of nodes with available pods: 0
+Jun 23 21:24:18.116: INFO: Node minion is running more than one daemon pod
+Jun 23 21:24:19.113: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:19.116: INFO: Number of nodes with available pods: 0
+Jun 23 21:24:19.116: INFO: Node minion is running more than one daemon pod
+Jun 23 21:24:20.113: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:20.116: INFO: Number of nodes with available pods: 1
+Jun 23 21:24:20.116: INFO: Number of running nodes: 1, number of available pods: 1
+STEP: Update daemon pods image.
+STEP: Check that daemon pods images are updated.
+Jun 23 21:24:20.141: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:20.144: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:21.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:21.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:22.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:22.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:23.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:23.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:24.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:24.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:25.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:25.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:26.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:26.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:27.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:27.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:28.149: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:28.154: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:29.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:29.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:30.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:30.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:31.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:31.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:32.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:32.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:33.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:33.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:34.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:34.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:35.149: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:35.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:36.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:36.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:37.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:37.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:38.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:38.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:39.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:39.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:40.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:40.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:41.149: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:41.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:42.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:42.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:43.149: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:43.153: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:44.149: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:44.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:45.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:45.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:46.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:46.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:47.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:47.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:48.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:48.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:49.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:49.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:50.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:50.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:51.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:51.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:52.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:52.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:53.148: INFO: Wrong image for pod: daemon-set-l6blv. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1.
+Jun 23 21:24:53.148: INFO: Pod daemon-set-l6blv is not available
+Jun 23 21:24:53.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:54.148: INFO: Pod daemon-set-v45qd is not available
+Jun 23 21:24:54.152: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+STEP: Check that daemon pods are still running on every node of the cluster.
+Jun 23 21:24:54.155: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:54.158: INFO: Number of nodes with available pods: 0
+Jun 23 21:24:54.158: INFO: Node minion is running more than one daemon pod
+Jun 23 21:24:55.163: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:55.166: INFO: Number of nodes with available pods: 0
+Jun 23 21:24:55.166: INFO: Node minion is running more than one daemon pod
+Jun 23 21:24:56.163: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:56.166: INFO: Number of nodes with available pods: 0
+Jun 23 21:24:56.166: INFO: Node minion is running more than one daemon pod
+Jun 23 21:24:57.163: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:24:57.166: INFO: Number of nodes with available pods: 1
+Jun 23 21:24:57.166: INFO: Number of running nodes: 1, number of available pods: 1
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-9xk5t, will wait for the garbage collector to delete the pods
+Jun 23 21:24:57.239: INFO: Deleting DaemonSet.extensions daemon-set took: 5.868137ms
+Jun 23 21:24:57.340: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.197372ms
+Jun 23 21:25:00.443: INFO: Number of nodes with available pods: 0
+Jun 23 21:25:00.443: INFO: Number of running nodes: 0, number of available pods: 0
+Jun 23 21:25:00.445: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-9xk5t/daemonsets","resourceVersion":"3960"},"items":null}
+
+Jun 23 21:25:00.448: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-9xk5t/pods","resourceVersion":"3960"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:25:00.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-daemonsets-9xk5t" for this suite.
+Jun 23 21:25:06.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:25:06.543: INFO: namespace: e2e-tests-daemonsets-9xk5t, resource: bindings, ignored listing per whitelist
+Jun 23 21:25:06.550: INFO: namespace e2e-tests-daemonsets-9xk5t deletion completed in 6.091472041s
+
+• [SLOW TEST:49.546 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
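+
+For context (not part of the recorded e2e output): the test above drives a DaemonSet whose update strategy is RollingUpdate and polls until the new image has rolled out to every eligible node. A minimal sketch of such an object, with illustrative names, is:
+
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: daemon-set               # illustrative; matches the name seen in the log
+spec:
+  selector:
+    matchLabels:
+      app: daemon-set
+  updateStrategy:
+    type: RollingUpdate          # pods are replaced in place when the template changes
+  template:
+    metadata:
+      labels:
+        app: daemon-set
+    spec:
+      containers:
+      - name: app
+        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
+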
+------------------------------
+SSSSS
+------------------------------
+[k8s.io] Probing container 
+  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:25:06.550: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
+[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-8ddf4
+Jun 23 21:25:10.634: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-8ddf4
+STEP: checking the pod's current state and verifying that restartCount is present
+Jun 23 21:25:10.637: INFO: Initial restart count of pod liveness-http is 0
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:29:11.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-probe-8ddf4" for this suite.
+Jun 23 21:29:17.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:29:17.098: INFO: namespace: e2e-tests-container-probe-8ddf4, resource: bindings, ignored listing per whitelist
+Jun 23 21:29:17.167: INFO: namespace e2e-tests-container-probe-8ddf4 deletion completed in 6.094236669s
+
+• [SLOW TEST:250.617 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
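+
+For context (not part of the recorded e2e output): the probe test above creates a pod with an HTTP liveness probe against /healthz and verifies that its restart count stays at 0 for the observation window. A sketch of such a pod (image and ports are assumptions, not taken from the log):
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: liveness-http
+spec:
+  containers:
+  - name: liveness
+    image: k8s.gcr.io/liveness           # assumed image; any server answering /healthz works
+    livenessProbe:
+      httpGet:
+        path: /healthz                   # kubelet GETs this path on the container
+        port: 8080
+      initialDelaySeconds: 15
+      periodSeconds: 5                   # probed every 5s; failures trigger a container restart
+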
+------------------------------
+SSSSSSSSSSSS
+------------------------------
+[sig-network] Proxy version v1 
+  should proxy through a service and a pod  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] version v1
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:29:17.168: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename proxy
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy through a service and a pod  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: starting an echo server on multiple ports
+STEP: creating replication controller proxy-service-282xv in namespace e2e-tests-proxy-7mccl
+I0623 21:29:17.254953      20 runners.go:184] Created replication controller with name: proxy-service-282xv, namespace: e2e-tests-proxy-7mccl, replica count: 1
+I0623 21:29:18.305407      20 runners.go:184] proxy-service-282xv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I0623 21:29:19.305662      20 runners.go:184] proxy-service-282xv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I0623 21:29:20.305875      20 runners.go:184] proxy-service-282xv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I0623 21:29:21.306083      20 runners.go:184] proxy-service-282xv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+I0623 21:29:22.306306      20 runners.go:184] proxy-service-282xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0623 21:29:23.306595      20 runners.go:184] proxy-service-282xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0623 21:29:24.306814      20 runners.go:184] proxy-service-282xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0623 21:29:25.307040      20 runners.go:184] proxy-service-282xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0623 21:29:26.307253      20 runners.go:184] proxy-service-282xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0623 21:29:27.307473      20 runners.go:184] proxy-service-282xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
+I0623 21:29:28.307714      20 runners.go:184] proxy-service-282xv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
+Jun 23 21:29:28.311: INFO: setup took 11.070723694s, starting test cases
+STEP: running 16 cases, 20 attempts per case, 320 total attempts
+Jun 23 21:29:28.319: INFO: (0) /api/v1/namespaces/e2e-tests-proxy-7mccl/pods/http:proxy-service-282xv-62vsj:1080/proxy/:
+>>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating projection with secret that has name projected-secret-test-020255e0-95fe-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 21:29:39.954: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0202cfac-95fe-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-7rcxm" to be "success or failure"
+Jun 23 21:29:39.956: INFO: Pod "pod-projected-secrets-0202cfac-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.744575ms
+Jun 23 21:29:41.960: INFO: Pod "pod-projected-secrets-0202cfac-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006479653s
+Jun 23 21:29:43.964: INFO: Pod "pod-projected-secrets-0202cfac-95fe-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010290674s
+STEP: Saw pod success
+Jun 23 21:29:43.964: INFO: Pod "pod-projected-secrets-0202cfac-95fe-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:29:43.967: INFO: Trying to get logs from node minion pod pod-projected-secrets-0202cfac-95fe-11e9-9086-ba438756bc32 container projected-secret-volume-test: 
+STEP: delete the pod
+Jun 23 21:29:43.987: INFO: Waiting for pod pod-projected-secrets-0202cfac-95fe-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:29:43.990: INFO: Pod pod-projected-secrets-0202cfac-95fe-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:29:43.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-7rcxm" for this suite.
+Jun 23 21:29:50.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:29:50.080: INFO: namespace: e2e-tests-projected-7rcxm, resource: bindings, ignored listing per whitelist
+Jun 23 21:29:50.087: INFO: namespace e2e-tests-projected-7rcxm deletion completed in 6.093267704s
+
+• [SLOW TEST:10.218 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
+  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
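+
+For context (not part of the recorded e2e output): the projected-secret test above mounts a Secret through a projected volume with an explicit defaultMode and checks the resulting file contents and permissions from inside the pod. A minimal sketch with illustrative names:
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-projected-secrets
+spec:
+  containers:
+  - name: projected-secret-volume-test
+    image: busybox
+    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"]
+    volumeMounts:
+    - name: projected
+      mountPath: /etc/projected
+  volumes:
+  - name: projected
+    projected:
+      defaultMode: 0400                  # files appear read-only to the owner
+      sources:
+      - secret:
+          name: projected-secret-test    # illustrative Secret name
+  restartPolicy: Never
+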
+------------------------------
+SSSSSSSSSSSS
+------------------------------
+[k8s.io] [sig-node] Events 
+  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] [sig-node] Events
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:29:50.087: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename events
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+STEP: retrieving the pod
+Jun 23 21:29:54.184: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-0819dc6b-95fe-11e9-9086-ba438756bc32,GenerateName:,Namespace:e2e-tests-events-nzcjn,SelfLink:/api/v1/namespaces/e2e-tests-events-nzcjn/pods/send-events-0819dc6b-95fe-11e9-9086-ba438756bc32,UID:081adbd0-95fe-11e9-8956-98039b22fc2c,ResourceVersion:4465,Generation:0,CreationTimestamp:2019-06-23 21:29:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 165524506,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-pwh2w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwh2w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pwh2w true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00263c7e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00263c800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:29:50 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:29:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:29:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:29:50 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:10.251.128.6,StartTime:2019-06-23 21:29:50 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-06-23 21:29:51 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://3e221bb6e95bb648fe79258f5f1943b31ab343d2cf91ea9bcb99e36cf6b42e0c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+
+STEP: checking for scheduler event about the pod
+Jun 23 21:29:56.193: INFO: Saw scheduler event for our pod.
+STEP: checking for kubelet event about the pod
+Jun 23 21:29:58.197: INFO: Saw kubelet event for our pod.
+STEP: deleting the pod
+[AfterEach] [k8s.io] [sig-node] Events
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:29:58.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-events-nzcjn" for this suite.
+Jun 23 21:30:36.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:30:36.233: INFO: namespace: e2e-tests-events-nzcjn, resource: bindings, ignored listing per whitelist
+Jun 23 21:30:36.297: INFO: namespace e2e-tests-events-nzcjn deletion completed in 38.091117698s
+
+• [SLOW TEST:46.210 seconds]
+[k8s.io] [sig-node] Events
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
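+
+For context (not part of the recorded e2e output): the events test above submits a pod, then confirms that both the scheduler and the kubelet emitted events referencing it. The same check can be made by hand; the pod below and the query in the comment are illustrative:
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: send-events
+  labels:
+    name: foo
+spec:
+  containers:
+  - name: p
+    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
+    ports:
+    - containerPort: 80
+# e.g.: kubectl get events --field-selector involvedObject.name=send-events
+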
+------------------------------
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run job 
+  should create a job from an image when restart is OnFailure  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:30:36.298: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[BeforeEach] [k8s.io] Kubectl run job
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
+[It] should create a job from an image when restart is OnFailure  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Jun 23 21:30:36.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-v4h59'
+Jun 23 21:30:36.517: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun 23 21:30:36.518: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
+STEP: verifying the job e2e-test-nginx-job was created
+[AfterEach] [k8s.io] Kubectl run job
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
+Jun 23 21:30:36.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete jobs e2e-test-nginx-job --namespace=e2e-tests-kubectl-v4h59'
+Jun 23 21:30:36.648: INFO: stderr: ""
+Jun 23 21:30:36.648: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:30:36.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-v4h59" for this suite.
+Jun 23 21:30:58.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:30:58.718: INFO: namespace: e2e-tests-kubectl-v4h59, resource: bindings, ignored listing per whitelist
+Jun 23 21:30:58.749: INFO: namespace e2e-tests-kubectl-v4h59 deletion completed in 22.096479887s
+
+• [SLOW TEST:22.451 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl run job
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should create a job from an image when restart is OnFailure  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
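+
+For context (not part of the recorded e2e output): the kubectl output above notes that `kubectl run --generator=job/v1` is deprecated, so the same Job can be declared directly. A sketch of the object the command creates (names illustrative):
+
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: e2e-test-nginx-job
+spec:
+  template:
+    spec:
+      containers:
+      - name: e2e-test-nginx-job
+        image: docker.io/library/nginx:1.14-alpine
+      restartPolicy: OnFailure       # failed containers are restarted in place rather than the pod being recreated
+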
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:30:58.749: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name projected-configmap-test-volume-31059c87-95fe-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 21:30:58.829: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-31062e7f-95fe-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-6zr8z" to be "success or failure"
+Jun 23 21:30:58.832: INFO: Pod "pod-projected-configmaps-31062e7f-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.901629ms
+Jun 23 21:31:00.835: INFO: Pod "pod-projected-configmaps-31062e7f-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006316925s
+Jun 23 21:31:02.839: INFO: Pod "pod-projected-configmaps-31062e7f-95fe-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009866656s
+STEP: Saw pod success
+Jun 23 21:31:02.839: INFO: Pod "pod-projected-configmaps-31062e7f-95fe-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:31:02.841: INFO: Trying to get logs from node minion pod pod-projected-configmaps-31062e7f-95fe-11e9-9086-ba438756bc32 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun 23 21:31:02.859: INFO: Waiting for pod pod-projected-configmaps-31062e7f-95fe-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:31:02.864: INFO: Pod pod-projected-configmaps-31062e7f-95fe-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:31:02.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-6zr8z" for this suite.
+Jun 23 21:31:08.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:31:08.956: INFO: namespace: e2e-tests-projected-6zr8z, resource: bindings, ignored listing per whitelist
+Jun 23 21:31:08.967: INFO: namespace e2e-tests-projected-6zr8z deletion completed in 6.099614042s
+
+• [SLOW TEST:10.219 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
+  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
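+
+For context (not part of the recorded e2e output): the test above mounts the same ConfigMap into two projected volumes of one pod and reads it back from both paths. A minimal sketch, assuming a ConfigMap with a key named data (all names illustrative):
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-projected-configmaps
+spec:
+  containers:
+  - name: projected-configmap-volume-test
+    image: busybox
+    command: ["sh", "-c", "cat /etc/cm-one/data && cat /etc/cm-two/data"]
+    volumeMounts:
+    - name: cm-one
+      mountPath: /etc/cm-one
+    - name: cm-two
+      mountPath: /etc/cm-two
+  volumes:
+  - name: cm-one
+    projected:
+      sources:
+      - configMap:
+          name: projected-configmap-test   # same ConfigMap backs both volumes
+  - name: cm-two
+    projected:
+      sources:
+      - configMap:
+          name: projected-configmap-test
+  restartPolicy: Never
+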
+------------------------------
+SSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
+  should write entries to /etc/hosts [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:31:08.968: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[It] should write entries to /etc/hosts [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:31:13.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubelet-test-f7gms" for this suite.
+Jun 23 21:31:55.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:31:55.167: INFO: namespace: e2e-tests-kubelet-test-f7gms, resource: bindings, ignored listing per whitelist
+Jun 23 21:31:55.172: INFO: namespace e2e-tests-kubelet-test-f7gms deletion completed in 42.09778671s
+
+• [SLOW TEST:46.204 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  when scheduling a busybox Pod with hostAliases
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
+    should write entries to /etc/hosts [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
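+
+For context (not part of the recorded e2e output): the kubelet test above verifies that pod-level hostAliases entries are written into the container's /etc/hosts. A sketch with illustrative hostnames and IPs:
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: busybox-host-aliases
+spec:
+  hostAliases:
+  - ip: "123.45.67.89"
+    hostnames: ["foo.local", "bar.local"]   # these entries end up in /etc/hosts
+  containers:
+  - name: busybox
+    image: busybox
+    command: ["sh", "-c", "cat /etc/hosts && sleep 3600"]
+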
+------------------------------
+SSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should run and stop complex daemon [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:31:55.172: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
+[It] should run and stop complex daemon [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 21:31:55.257: INFO: Creating daemon "daemon-set" with a node selector
+STEP: Initially, daemon pods should not be running on any nodes.
+Jun 23 21:31:55.269: INFO: Number of nodes with available pods: 0
+Jun 23 21:31:55.269: INFO: Number of running nodes: 0, number of available pods: 0
+STEP: Change node label to blue, check that daemon pod is launched.
+Jun 23 21:31:55.287: INFO: Number of nodes with available pods: 0
+Jun 23 21:31:55.288: INFO: Node minion is running more than one daemon pod
+Jun 23 21:31:56.291: INFO: Number of nodes with available pods: 0
+Jun 23 21:31:56.291: INFO: Node minion is running more than one daemon pod
+Jun 23 21:31:57.291: INFO: Number of nodes with available pods: 0
+Jun 23 21:31:57.291: INFO: Node minion is running more than one daemon pod
+Jun 23 21:31:58.291: INFO: Number of nodes with available pods: 1
+Jun 23 21:31:58.291: INFO: Number of running nodes: 1, number of available pods: 1
+STEP: Update the node label to green, and wait for daemons to be unscheduled
+Jun 23 21:31:58.306: INFO: Number of nodes with available pods: 1
+Jun 23 21:31:58.306: INFO: Number of running nodes: 0, number of available pods: 1
+Jun 23 21:31:59.309: INFO: Number of nodes with available pods: 0
+Jun 23 21:31:59.309: INFO: Number of running nodes: 0, number of available pods: 0
+STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
+Jun 23 21:31:59.317: INFO: Number of nodes with available pods: 0
+Jun 23 21:31:59.317: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:00.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:00.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:01.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:01.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:02.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:02.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:03.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:03.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:04.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:04.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:05.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:05.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:06.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:06.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:07.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:07.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:08.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:08.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:09.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:09.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:10.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:10.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:11.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:11.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:12.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:12.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:13.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:13.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:14.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:14.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:15.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:15.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:16.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:16.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:17.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:17.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:18.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:18.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:19.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:19.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:20.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:20.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:21.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:21.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:22.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:22.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:23.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:23.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:24.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:24.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:25.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:25.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:26.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:26.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:27.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:27.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:28.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:28.322: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:29.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:29.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:30.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:30.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:31.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:31.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:32.320: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:32.320: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:33.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:33.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:34.321: INFO: Number of nodes with available pods: 0
+Jun 23 21:32:34.321: INFO: Node minion is running more than one daemon pod
+Jun 23 21:32:35.321: INFO: Number of nodes with available pods: 1
+Jun 23 21:32:35.321: INFO: Number of running nodes: 1, number of available pods: 1
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-dfp8d, will wait for the garbage collector to delete the pods
+Jun 23 21:32:35.385: INFO: Deleting DaemonSet.extensions daemon-set took: 5.592224ms
+Jun 23 21:32:35.486: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.181459ms
+Jun 23 21:33:13.789: INFO: Number of nodes with available pods: 0
+Jun 23 21:33:13.789: INFO: Number of running nodes: 0, number of available pods: 0
+Jun 23 21:33:13.791: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-dfp8d/daemonsets","resourceVersion":"4865"},"items":null}
+
+Jun 23 21:33:13.794: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-dfp8d/pods","resourceVersion":"4865"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:33:13.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-daemonsets-dfp8d" for this suite.
+Jun 23 21:33:19.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:33:19.873: INFO: namespace: e2e-tests-daemonsets-dfp8d, resource: bindings, ignored listing per whitelist
+Jun 23 21:33:19.903: INFO: namespace e2e-tests-daemonsets-dfp8d deletion completed in 6.09404887s
+
+• [SLOW TEST:84.730 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should run and stop complex daemon [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
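+
+For context (not part of the recorded e2e output): the complex-daemon test above steers scheduling with a node selector, relabeling the node (blue, then green) and updating the DaemonSet to match. A sketch of the selector side (label key and values illustrative):
+
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: daemon-set
+spec:
+  selector:
+    matchLabels:
+      app: daemon-set
+  updateStrategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      labels:
+        app: daemon-set
+    spec:
+      nodeSelector:
+        color: green               # pods schedule only onto nodes labeled color=green
+      containers:
+      - name: app
+        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
+# e.g.: kubectl label node minion color=green --overwrite
+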
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:33:19.903: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name projected-configmap-test-volume-85281bff-95fe-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 21:33:19.983: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8528b64a-95fe-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-h4hb6" to be "success or failure"
+Jun 23 21:33:19.986: INFO: Pod "pod-projected-configmaps-8528b64a-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.931333ms
+Jun 23 21:33:21.990: INFO: Pod "pod-projected-configmaps-8528b64a-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006396974s
+Jun 23 21:33:23.993: INFO: Pod "pod-projected-configmaps-8528b64a-95fe-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009796821s
+STEP: Saw pod success
+Jun 23 21:33:23.993: INFO: Pod "pod-projected-configmaps-8528b64a-95fe-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:33:23.996: INFO: Trying to get logs from node minion pod pod-projected-configmaps-8528b64a-95fe-11e9-9086-ba438756bc32 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun 23 21:33:24.013: INFO: Waiting for pod pod-projected-configmaps-8528b64a-95fe-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:33:24.015: INFO: Pod pod-projected-configmaps-8528b64a-95fe-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:33:24.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-h4hb6" for this suite.
+Jun 23 21:33:30.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:33:30.090: INFO: namespace: e2e-tests-projected-h4hb6, resource: bindings, ignored listing per whitelist
+Jun 23 21:33:30.110: INFO: namespace e2e-tests-projected-h4hb6 deletion completed in 6.0914499s
+
+• [SLOW TEST:10.207 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
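+
+For context (not part of the recorded e2e output): this variant mounts a single projected ConfigMap volume; an items list can additionally remap a key to a chosen file path under the mount point. A sketch, assuming a key named data (all names illustrative):
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-projected-configmap
+spec:
+  containers:
+  - name: projected-configmap-volume-test
+    image: busybox
+    command: ["sh", "-c", "cat /etc/cm/path/to/data"]
+    volumeMounts:
+    - name: cm
+      mountPath: /etc/cm
+  volumes:
+  - name: cm
+    projected:
+      sources:
+      - configMap:
+          name: projected-configmap-test
+          items:
+          - key: data                # ConfigMap key, assumed to exist
+            path: path/to/data       # file path relative to the mount point
+  restartPolicy: Never
+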
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:33:30.111: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 21:33:30.186: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8b3d6c42-95fe-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-dgcmk" to be "success or failure"
+Jun 23 21:33:30.189: INFO: Pod "downwardapi-volume-8b3d6c42-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.875549ms
+Jun 23 21:33:32.192: INFO: Pod "downwardapi-volume-8b3d6c42-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006459282s
+Jun 23 21:33:34.196: INFO: Pod "downwardapi-volume-8b3d6c42-95fe-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00982289s
+STEP: Saw pod success
+Jun 23 21:33:34.196: INFO: Pod "downwardapi-volume-8b3d6c42-95fe-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:33:34.198: INFO: Trying to get logs from node minion pod downwardapi-volume-8b3d6c42-95fe-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 21:33:34.215: INFO: Waiting for pod downwardapi-volume-8b3d6c42-95fe-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:33:34.221: INFO: Pod downwardapi-volume-8b3d6c42-95fe-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:33:34.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-dgcmk" for this suite.
+Jun 23 21:33:40.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:33:40.291: INFO: namespace: e2e-tests-downward-api-dgcmk, resource: bindings, ignored listing per whitelist
+Jun 23 21:33:40.327: INFO: namespace e2e-tests-downward-api-dgcmk deletion completed in 6.101591025s
+
+• [SLOW TEST:10.216 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
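+
+For context (not part of the recorded e2e output): the downward-API test above reads the container's memory limit through a downwardAPI volume; when no limit is set, the reported value defaults to the node's allocatable memory, which is what the test asserts. A sketch with illustrative names:
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downwardapi-volume-test
+spec:
+  containers:
+  - name: client-container
+    image: busybox
+    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: memory_limit
+        resourceFieldRef:
+          containerName: client-container
+          resource: limits.memory    # falls back to node allocatable when no limit is set
+  restartPolicy: Never
+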
+------------------------------
+[sig-network] Services 
+  should serve a basic endpoint from pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:33:40.327: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
+[It] should serve a basic endpoint from pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating service endpoint-test2 in namespace e2e-tests-services-rsls5
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rsls5 to expose endpoints map[]
+Jun 23 21:33:40.412: INFO: Get endpoints failed (6.688393ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
+Jun 23 21:33:41.415: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rsls5 exposes endpoints map[] (1.010381912s elapsed)
+STEP: Creating pod pod1 in namespace e2e-tests-services-rsls5
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rsls5 to expose endpoints map[pod1:[80]]
+Jun 23 21:33:44.446: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rsls5 exposes endpoints map[pod1:[80]] (3.023703079s elapsed)
+STEP: Creating pod pod2 in namespace e2e-tests-services-rsls5
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rsls5 to expose endpoints map[pod1:[80] pod2:[80]]
+Jun 23 21:33:47.490: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rsls5 exposes endpoints map[pod1:[80] pod2:[80]] (3.039869922s elapsed)
+STEP: Deleting pod pod1 in namespace e2e-tests-services-rsls5
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rsls5 to expose endpoints map[pod2:[80]]
+Jun 23 21:33:48.506: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rsls5 exposes endpoints map[pod2:[80]] (1.011579183s elapsed)
+STEP: Deleting pod pod2 in namespace e2e-tests-services-rsls5
+STEP: waiting up to 3m0s for service endpoint-test2 in namespace e2e-tests-services-rsls5 to expose endpoints map[]
+Jun 23 21:33:49.517: INFO: successfully validated that service endpoint-test2 in namespace e2e-tests-services-rsls5 exposes endpoints map[] (1.006170013s elapsed)
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:33:49.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-services-rsls5" for this suite.
+Jun 23 21:33:55.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:33:55.621: INFO: namespace: e2e-tests-services-rsls5, resource: bindings, ignored listing per whitelist
+Jun 23 21:33:55.628: INFO: namespace e2e-tests-services-rsls5 deletion completed in 6.08982835s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
+
+• [SLOW TEST:15.301 seconds]
+[sig-network] Services
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  should serve a basic endpoint from pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
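+Note: the Services test above checks that the endpoints object tracks ready pods matching the service selector (map[] -> map[pod1:[80]] -> map[pod1:[80] pod2:[80]] and back as pods come and go). An illustrative service/pod pair of that shape; the selector, labels, and image are assumptions, not the test's actual values:
+
+apiVersion: v1
+kind: Service
+metadata:
+  name: endpoint-test2
+spec:
+  selector:
+    app: endpoint-demo        # hypothetical label; ready pods carrying it become endpoints
+  ports:
+  - port: 80
+    targetPort: 80
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod1
+  labels:
+    app: endpoint-demo
+spec:
+  containers:
+  - name: serve
+    image: docker.io/library/nginx:1.14-alpine   # image family seen elsewhere in this log
+    ports:
+    - containerPort: 80
+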
+SSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:33:55.628: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 21:33:55.703: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a730cf6-95fe-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-tgcrm" to be "success or failure"
+Jun 23 21:33:55.706: INFO: Pod "downwardapi-volume-9a730cf6-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 3.003526ms
+Jun 23 21:33:57.710: INFO: Pod "downwardapi-volume-9a730cf6-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006832141s
+Jun 23 21:33:59.714: INFO: Pod "downwardapi-volume-9a730cf6-95fe-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010502949s
+STEP: Saw pod success
+Jun 23 21:33:59.714: INFO: Pod "downwardapi-volume-9a730cf6-95fe-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:33:59.717: INFO: Trying to get logs from node minion pod downwardapi-volume-9a730cf6-95fe-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 21:33:59.740: INFO: Waiting for pod downwardapi-volume-9a730cf6-95fe-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:33:59.743: INFO: Pod downwardapi-volume-9a730cf6-95fe-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:33:59.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-tgcrm" for this suite.
+Jun 23 21:34:05.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:34:05.784: INFO: namespace: e2e-tests-projected-tgcrm, resource: bindings, ignored listing per whitelist
+Jun 23 21:34:05.843: INFO: namespace e2e-tests-projected-tgcrm deletion completed in 6.096153966s
+
+• [SLOW TEST:10.215 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
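+Note: the Projected downwardAPI test above exercises the same memory-default behaviour as the earlier Downward API volume test, but declares the source inside a projected volume. Relative to the sketch after that earlier test, only the volumes stanza changes (fragment, not a full manifest; the container name is hypothetical):
+
+volumes:
+- name: podinfo
+  projected:
+    sources:
+    - downwardAPI:
+        items:
+        - path: mem_limit
+          resourceFieldRef:
+            containerName: client-container
+            resource: limits.memory
+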
+SSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0666,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:34:05.843: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0666,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test emptydir 0666 on node default medium
+Jun 23 21:34:05.923: INFO: Waiting up to 5m0s for pod "pod-a08a970a-95fe-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-mkq67" to be "success or failure"
+Jun 23 21:34:05.926: INFO: Pod "pod-a08a970a-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.672245ms
+Jun 23 21:34:07.929: INFO: Pod "pod-a08a970a-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006303793s
+Jun 23 21:34:09.933: INFO: Pod "pod-a08a970a-95fe-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009897966s
+STEP: Saw pod success
+Jun 23 21:34:09.933: INFO: Pod "pod-a08a970a-95fe-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:34:09.936: INFO: Trying to get logs from node minion pod pod-a08a970a-95fe-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 21:34:09.953: INFO: Waiting for pod pod-a08a970a-95fe-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:34:09.955: INFO: Pod pod-a08a970a-95fe-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:34:09.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-mkq67" for this suite.
+Jun 23 21:34:15.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:34:15.979: INFO: namespace: e2e-tests-emptydir-mkq67, resource: bindings, ignored listing per whitelist
+Jun 23 21:34:16.049: INFO: namespace e2e-tests-emptydir-mkq67 deletion completed in 6.090354869s
+
+• [SLOW TEST:10.206 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
+  should support (non-root,0666,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
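+Note: in the EmptyDir test name above, "(non-root,0666,default)" means the pod runs as a non-root UID, writes a file with mode 0666 into the volume, and uses the default (node-disk-backed) medium. A sketch of that shape; the UID, image, and paths are illustrative, and the real test image additionally verifies the file's mode and ownership:
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-mode-example    # hypothetical name
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1001              # non-root, illustrative UID
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
+    volumeMounts:
+    - name: test-volume
+      mountPath: /test-volume
+  volumes:
+  - name: test-volume
+    emptyDir: {}                 # default medium; medium: Memory would use tmpfs instead
+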
+SSSSSSSSSSSSS
+------------------------------
+[sig-network] Services 
+  should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:34:16.049: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
+[It] should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating service multi-endpoint-test in namespace e2e-tests-services-wgm6q
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-wgm6q to expose endpoints map[]
+Jun 23 21:34:16.133: INFO: Get endpoints failed (7.432858ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
+Jun 23 21:34:17.136: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-wgm6q exposes endpoints map[] (1.01071748s elapsed)
+STEP: Creating pod pod1 in namespace e2e-tests-services-wgm6q
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-wgm6q to expose endpoints map[pod1:[100]]
+Jun 23 21:34:20.165: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-wgm6q exposes endpoints map[pod1:[100]] (3.023023047s elapsed)
+STEP: Creating pod pod2 in namespace e2e-tests-services-wgm6q
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-wgm6q to expose endpoints map[pod1:[100] pod2:[101]]
+Jun 23 21:34:23.206: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-wgm6q exposes endpoints map[pod2:[101] pod1:[100]] (3.036982832s elapsed)
+STEP: Deleting pod pod1 in namespace e2e-tests-services-wgm6q
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-wgm6q to expose endpoints map[pod2:[101]]
+Jun 23 21:34:24.222: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-wgm6q exposes endpoints map[pod2:[101]] (1.011371946s elapsed)
+STEP: Deleting pod pod2 in namespace e2e-tests-services-wgm6q
+STEP: waiting up to 3m0s for service multi-endpoint-test in namespace e2e-tests-services-wgm6q to expose endpoints map[]
+Jun 23 21:34:25.232: INFO: successfully validated that service multi-endpoint-test in namespace e2e-tests-services-wgm6q exposes endpoints map[] (1.005771261s elapsed)
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:34:25.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-services-wgm6q" for this suite.
+Jun 23 21:34:47.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:34:47.325: INFO: namespace: e2e-tests-services-wgm6q, resource: bindings, ignored listing per whitelist
+Jun 23 21:34:47.340: INFO: namespace e2e-tests-services-wgm6q deletion completed in 22.091633009s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
+
+• [SLOW TEST:31.290 seconds]
+[sig-network] Services
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  should serve multiport endpoints from pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
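+Note: the multiport variant above is the same endpoints check with two named service ports, which is why the endpoints map carries a different target port per pod (pod1:[100], pod2:[101] in this run): each targetPort is a port *name* resolved against each selected pod's named containerPort. An illustrative service of that shape; the port names and selector are assumptions:
+
+apiVersion: v1
+kind: Service
+metadata:
+  name: multi-endpoint-test
+spec:
+  selector:
+    app: multiport-demo      # hypothetical label shared by pod1 and pod2
+  ports:
+  - name: portname1
+    port: 80
+    targetPort: svc-port-1   # pod1 would name its containerPort 100 "svc-port-1"
+  - name: portname2
+    port: 81
+    targetPort: svc-port-2   # pod2 would name its containerPort 101 "svc-port-2"
+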
+S
+------------------------------
+[sig-storage] ConfigMap 
+  binary data should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:34:47.340: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] binary data should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-test-upd-b9469776-95fe-11e9-9086-ba438756bc32
+STEP: Creating the pod
+STEP: Waiting for pod with text data
+STEP: Waiting for pod with binary data
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:34:51.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-2whst" for this suite.
+Jun 23 21:35:13.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:35:13.488: INFO: namespace: e2e-tests-configmap-2whst, resource: bindings, ignored listing per whitelist
+Jun 23 21:35:13.550: INFO: namespace e2e-tests-configmap-2whst deletion completed in 22.093663228s
+
+• [SLOW TEST:26.210 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
+  binary data should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
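+Note: the ConfigMap test above stores both text and binary payloads; binaryData values are base64-encoded in the manifest and the decoded bytes appear in the mounted file. An illustrative ConfigMap (name, keys, and payload are hypothetical):
+
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: configmap-binary-example
+data:
+  text-key: "plain text value"
+binaryData:
+  binary-key: aGVsbG8gd29ybGQ=    # base64 for "hello world"
+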
+SSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:35:13.550: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-test-volume-map-c8e565ee-95fe-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 21:35:13.631: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8e5ff84-95fe-11e9-9086-ba438756bc32" in namespace "e2e-tests-configmap-c9zsm" to be "success or failure"
+Jun 23 21:35:13.635: INFO: Pod "pod-configmaps-c8e5ff84-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 3.296475ms
+Jun 23 21:35:15.638: INFO: Pod "pod-configmaps-c8e5ff84-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006787548s
+Jun 23 21:35:17.642: INFO: Pod "pod-configmaps-c8e5ff84-95fe-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010391439s
+STEP: Saw pod success
+Jun 23 21:35:17.642: INFO: Pod "pod-configmaps-c8e5ff84-95fe-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:35:17.645: INFO: Trying to get logs from node minion pod pod-configmaps-c8e5ff84-95fe-11e9-9086-ba438756bc32 container configmap-volume-test: 
+STEP: delete the pod
+Jun 23 21:35:17.662: INFO: Waiting for pod pod-configmaps-c8e5ff84-95fe-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:35:17.664: INFO: Pod pod-configmaps-c8e5ff84-95fe-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:35:17.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-c9zsm" for this suite.
+Jun 23 21:35:23.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:35:23.751: INFO: namespace: e2e-tests-configmap-c9zsm, resource: bindings, ignored listing per whitelist
+Jun 23 21:35:23.758: INFO: namespace e2e-tests-configmap-c9zsm deletion completed in 6.089656785s
+
+• [SLOW TEST:10.207 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
+  should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
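+Note: "mappings and Item mode" in the test name above refers to remapping a ConfigMap key to a different file path and setting a per-item file mode on the mounted file. A fragment of the volume stanza with that shape (ConfigMap name, key, and path are hypothetical):
+
+volumes:
+- name: configmap-volume
+  configMap:
+    name: my-configmap
+    items:
+    - key: data-1                # ConfigMap key...
+      path: path/to/data-1       # ...remapped to this file inside the mount
+      mode: 0400                 # per-item file mode (the "Item mode")
+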
+SSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:35:23.758: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 21:35:23.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cefad794-95fe-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-ls2vg" to be "success or failure"
+Jun 23 21:35:23.837: INFO: Pod "downwardapi-volume-cefad794-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.820081ms
+Jun 23 21:35:25.841: INFO: Pod "downwardapi-volume-cefad794-95fe-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006625809s
+Jun 23 21:35:27.845: INFO: Pod "downwardapi-volume-cefad794-95fe-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010239912s
+STEP: Saw pod success
+Jun 23 21:35:27.845: INFO: Pod "downwardapi-volume-cefad794-95fe-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:35:27.848: INFO: Trying to get logs from node minion pod downwardapi-volume-cefad794-95fe-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 21:35:27.865: INFO: Waiting for pod downwardapi-volume-cefad794-95fe-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:35:27.871: INFO: Pod downwardapi-volume-cefad794-95fe-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:35:27.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-ls2vg" for this suite.
+Jun 23 21:35:33.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:35:33.911: INFO: namespace: e2e-tests-downward-api-ls2vg, resource: bindings, ignored listing per whitelist
+Jun 23 21:35:33.963: INFO: namespace e2e-tests-downward-api-ls2vg deletion completed in 6.089100934s
+
+• [SLOW TEST:10.205 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
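+Note: the test above is the CPU counterpart of the earlier memory-default case; only the resource field in the downwardAPI item changes, and with no CPU limit set the mounted file reports the node's allocatable CPU. Fragment only (container name is hypothetical):
+
+items:
+- path: cpu_limit
+  resourceFieldRef:
+    containerName: client-container
+    resource: limits.cpu
+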
+SSSSSSSSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should retry creating failed daemon pods [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:35:33.964: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
+[It] should retry creating failed daemon pods [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a simple DaemonSet "daemon-set"
+STEP: Check that daemon pods launch on every node of the cluster.
+Jun 23 21:35:34.056: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:35:34.058: INFO: Number of nodes with available pods: 0
+Jun 23 21:35:34.058: INFO: Node minion is running more than one daemon pod
+Jun 23 21:35:35.063: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:35:35.066: INFO: Number of nodes with available pods: 0
+Jun 23 21:35:35.066: INFO: Node minion is running more than one daemon pod
+Jun 23 21:35:36.063: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:35:36.066: INFO: Number of nodes with available pods: 0
+Jun 23 21:35:36.066: INFO: Node minion is running more than one daemon pod
+Jun 23 21:35:37.069: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:35:37.072: INFO: Number of nodes with available pods: 1
+Jun 23 21:35:37.072: INFO: Number of running nodes: 1, number of available pods: 1
+STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
+Jun 23 21:35:37.085: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:35:37.090: INFO: Number of nodes with available pods: 0
+Jun 23 21:35:37.090: INFO: Node minion is running more than one daemon pod
+Jun 23 21:35:38.095: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:35:38.098: INFO: Number of nodes with available pods: 0
+Jun 23 21:35:38.098: INFO: Node minion is running more than one daemon pod
+Jun 23 21:35:39.095: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:35:39.098: INFO: Number of nodes with available pods: 0
+Jun 23 21:35:39.098: INFO: Node minion is running more than one daemon pod
+Jun 23 21:35:40.095: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 23 21:35:40.098: INFO: Number of nodes with available pods: 1
+Jun 23 21:35:40.098: INFO: Number of running nodes: 1, number of available pods: 1
+STEP: Wait for the failed daemon pod to be completely deleted.
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace e2e-tests-daemonsets-r56j8, will wait for the garbage collector to delete the pods
+Jun 23 21:35:40.163: INFO: Deleting DaemonSet.extensions daemon-set took: 5.989111ms
+Jun 23 21:35:40.263: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.197979ms
+Jun 23 21:36:23.866: INFO: Number of nodes with available pods: 0
+Jun 23 21:36:23.866: INFO: Number of running nodes: 0, number of available pods: 0
+Jun 23 21:36:23.869: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-r56j8/daemonsets","resourceVersion":"5470"},"items":null}
+
+Jun 23 21:36:23.871: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-r56j8/pods","resourceVersion":"5470"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:36:23.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-daemonsets-r56j8" for this suite.
+Jun 23 21:36:29.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:36:29.967: INFO: namespace: e2e-tests-daemonsets-r56j8, resource: bindings, ignored listing per whitelist
+Jun 23 21:36:29.975: INFO: namespace e2e-tests-daemonsets-r56j8 deletion completed in 6.093744076s
+
+• [SLOW TEST:56.011 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should retry creating failed daemon pods [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
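+Note: in the DaemonSet test above, the "can't tolerate node master with taints" lines are expected: a DaemonSet pod without a toleration for node-role.kubernetes.io/master:NoSchedule is never scheduled there, so only the minion node is counted. A minimal DaemonSet of the shape being retried (label key and image are assumptions):
+
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: daemon-set
+spec:
+  selector:
+    matchLabels:
+      daemonset-name: daemon-set
+  template:
+    metadata:
+      labels:
+        daemonset-name: daemon-set
+    spec:
+      # no tolerations declared, hence the tainted master node is skipped above
+      containers:
+      - name: app
+        image: docker.io/library/nginx:1.14-alpine
+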
+SSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:36:29.975: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
+[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-swx4p
+Jun 23 21:36:34.060: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-swx4p
+STEP: checking the pod's current state and verifying that restartCount is present
+Jun 23 21:36:34.063: INFO: Initial restart count of pod liveness-http is 0
+Jun 23 21:36:54.100: INFO: Restart count of pod e2e-tests-container-probe-swx4p/liveness-http is now 1 (20.03704964s elapsed)
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:36:54.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-probe-swx4p" for this suite.
+Jun 23 21:37:00.122: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:37:00.171: INFO: namespace: e2e-tests-container-probe-swx4p, resource: bindings, ignored listing per whitelist
+Jun 23 21:37:00.208: INFO: namespace e2e-tests-container-probe-swx4p deletion completed in 6.096636015s
+
+• [SLOW TEST:30.233 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
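+Note: the probe test above expects the container to start failing its /healthz HTTP check after startup so the kubelet restarts it (restartCount 0 -> 1 in roughly 20s here). A sketch of a pod with such a probe; the image and timings are illustrative assumptions, modelled on the Kubernetes docs example rather than this test's exact spec:
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: liveness-http-example    # hypothetical name
+spec:
+  containers:
+  - name: liveness
+    image: k8s.gcr.io/liveness   # assumed image; serves /healthz, then starts failing it
+    args: ["/server"]
+    livenessProbe:
+      httpGet:
+        path: /healthz
+        port: 8080
+      initialDelaySeconds: 3
+      periodSeconds: 3           # failureThreshold defaults to 3, so ~3 failed probes trigger a restart
+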
+SS
+------------------------------
+[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
+  should be submitted and removed  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:37:00.208: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods Set QOS Class
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:204
+[It] should be submitted and removed  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+STEP: verifying QOS class is set on the pod
+[AfterEach] [k8s.io] [sig-node] Pods Extended
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:37:00.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pods-qzsts" for this suite.
+Jun 23 21:37:22.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:37:22.371: INFO: namespace: e2e-tests-pods-qzsts, resource: bindings, ignored listing per whitelist
+Jun 23 21:37:22.385: INFO: namespace e2e-tests-pods-qzsts deletion completed in 22.090928049s
+
+• [SLOW TEST:22.176 seconds]
+[k8s.io] [sig-node] Pods Extended
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  [k8s.io] Pods Set QOS Class
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should be submitted and removed  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
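+Note: the QOS test above verifies that status.qosClass is derived from the pod's resource spec: equal requests and limits for every container resource yield Guaranteed, no requests or limits yield BestEffort, anything in between yields Burstable. An illustrative Guaranteed pod (name, image, and sizes are hypothetical):
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: qos-guaranteed-example
+spec:
+  containers:
+  - name: app
+    image: docker.io/library/nginx:1.14-alpine
+    resources:
+      requests:
+        cpu: 100m
+        memory: 100Mi
+      limits:                    # requests == limits => status.qosClass: Guaranteed
+        cpu: 100m
+        memory: 100Mi
+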
+SSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:37:22.385: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
+[It] should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating the pod
+STEP: setting up watch
+STEP: submitting the pod to kubernetes
+STEP: verifying the pod is in kubernetes
+STEP: verifying pod creation was observed
+Jun 23 21:37:26.478: INFO: running pod: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-submit-remove-15afb372-95ff-11e9-9086-ba438756bc32", GenerateName:"", Namespace:"e2e-tests-pods-rd5jb", SelfLink:"/api/v1/namespaces/e2e-tests-pods-rd5jb/pods/pod-submit-remove-15afb372-95ff-11e9-9086-ba438756bc32", UID:"15b1a230-95ff-11e9-8956-98039b22fc2c", ResourceVersion:"5636", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63696922642, loc:(*time.Location)(0x7b33b80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"454625571"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-2drj2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001324f80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"nginx", Image:"docker.io/library/nginx:1.14-alpine", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-2drj2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil)}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001bdc098), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"minion", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0022f5740), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001bdc110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001bdc130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001bdc138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001bdc13c)}, Status:v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696922642, loc:(*time.Location)(0x7b33b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696922644, loc:(*time.Location)(0x7b33b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696922644, loc:(*time.Location)(0x7b33b80)}}, Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696922642, loc:(*time.Location)(0x7b33b80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.197.149.12", PodIP:"10.251.128.6", StartTime:(*v1.Time)(0xc0018a4ac0), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"nginx", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0018a4b00), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"nginx:1.14-alpine", ImageID:"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7", ContainerID:"docker://58231400ebc03a06a0a8d45b31f8929701669b23256054b97aa771590e13c28c"}}, QOSClass:"BestEffort"}}
+STEP: deleting the pod gracefully
+STEP: verifying the kubelet observed the termination notice
+Jun 23 21:37:31.495: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
+STEP: verifying pod deletion was observed
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:37:31.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pods-rd5jb" for this suite.
+Jun 23 21:37:37.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:37:37.574: INFO: namespace: e2e-tests-pods-rd5jb, resource: bindings, ignored listing per whitelist
+Jun 23 21:37:37.594: INFO: namespace e2e-tests-pods-rd5jb deletion completed in 6.09236682s
+
+• [SLOW TEST:15.209 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
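+Note: the Pods test above sets up a watch scoped to the pod's labels, submits the pod, and then verifies that creation, graceful deletion, and removal are all observed as watch events; the labels in the pod dump above (name=foo plus a unique time value) are what scope that watch. An illustrative pod of that shape; only the name=foo label is taken from the log:
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-submit-remove-example   # the run above uses a generated name
+  labels:
+    name: foo                       # the watch's label selector matches labels like these
+spec:
+  containers:
+  - name: nginx
+    image: docker.io/library/nginx:1.14-alpine
+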
+SSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with projected pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:37:37.595: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
+STEP: Setting up data
+[It] should support subpaths with projected pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating pod pod-subpath-test-projected-gddh
+STEP: Creating a pod to test atomic-volume-subpath
+Jun 23 21:37:37.680: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-gddh" in namespace "e2e-tests-subpath-b7f92" to be "success or failure"
+Jun 23 21:37:37.683: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.893219ms
+Jun 23 21:37:39.686: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006506658s
+Jun 23 21:37:41.690: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Running", Reason="", readiness=false. Elapsed: 4.009958684s
+Jun 23 21:37:43.693: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Running", Reason="", readiness=false. Elapsed: 6.013274208s
+Jun 23 21:37:45.697: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Running", Reason="", readiness=false. Elapsed: 8.016923417s
+Jun 23 21:37:47.701: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Running", Reason="", readiness=false. Elapsed: 10.020747002s
+Jun 23 21:37:49.704: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Running", Reason="", readiness=false. Elapsed: 12.024398596s
+Jun 23 21:37:51.708: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Running", Reason="", readiness=false. Elapsed: 14.028323857s
+Jun 23 21:37:53.712: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Running", Reason="", readiness=false. Elapsed: 16.031830399s
+Jun 23 21:37:55.716: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Running", Reason="", readiness=false. Elapsed: 18.035830368s
+Jun 23 21:37:57.719: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Running", Reason="", readiness=false. Elapsed: 20.039498635s
+Jun 23 21:37:59.723: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Running", Reason="", readiness=false. Elapsed: 22.042958119s
+Jun 23 21:38:01.726: INFO: Pod "pod-subpath-test-projected-gddh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.046537508s
+STEP: Saw pod success
+Jun 23 21:38:01.727: INFO: Pod "pod-subpath-test-projected-gddh" satisfied condition "success or failure"
+Jun 23 21:38:01.729: INFO: Trying to get logs from node minion pod pod-subpath-test-projected-gddh container test-container-subpath-projected-gddh: 
+STEP: delete the pod
+Jun 23 21:38:01.748: INFO: Waiting for pod pod-subpath-test-projected-gddh to disappear
+Jun 23 21:38:01.751: INFO: Pod pod-subpath-test-projected-gddh no longer exists
+STEP: Deleting pod pod-subpath-test-projected-gddh
+Jun 23 21:38:01.751: INFO: Deleting pod "pod-subpath-test-projected-gddh" in namespace "e2e-tests-subpath-b7f92"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:38:01.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-subpath-b7f92" for this suite.
+Jun 23 21:38:07.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:38:07.803: INFO: namespace: e2e-tests-subpath-b7f92, resource: bindings, ignored listing per whitelist
+Jun 23 21:38:07.851: INFO: namespace e2e-tests-subpath-b7f92 deletion completed in 6.093866477s
+
+• [SLOW TEST:30.256 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  Atomic writer volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
+    should support subpaths with projected pod [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
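+Note: the Subpath test above mounts only a sub-path of an atomically-updated (projected) volume and checks that the file stays readable while the pod runs, which is why the pod above remains Running for some 20s before succeeding. A sketch of the subPath mount shape; the ConfigMap name, key, and paths are hypothetical and the real test's volume contents differ:
+
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-subpath-example
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container-subpath
+    image: busybox
+    command: ["sh", "-c", "cat /test-volume/data-1"]
+    volumeMounts:
+    - name: projected-vol
+      mountPath: /test-volume
+      subPath: subpath-dir          # mounts only this directory from the volume
+  volumes:
+  - name: projected-vol
+    projected:
+      sources:
+      - configMap:
+          name: my-configmap
+          items:
+          - key: data-1
+            path: subpath-dir/data-1   # places the key under the sub-path
+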
+SSSSS
+------------------------------
+[sig-apps] Deployment 
+  deployment should support proportional scaling [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:38:07.851: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
+[It] deployment should support proportional scaling [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 21:38:07.922: INFO: Creating deployment "nginx-deployment"
+Jun 23 21:38:07.925: INFO: Waiting for observed generation 1
+Jun 23 21:38:09.932: INFO: Waiting for all required pods to come up
+Jun 23 21:38:09.937: INFO: Pod name nginx: Found 10 pods out of 10
+STEP: ensuring each pod is running
+Jun 23 21:38:21.944: INFO: Waiting for deployment "nginx-deployment" to complete
+Jun 23 21:38:21.951: INFO: Updating deployment "nginx-deployment" with a non-existent image
+Jun 23 21:38:21.958: INFO: Updating deployment nginx-deployment
+Jun 23 21:38:21.958: INFO: Waiting for observed generation 2
+Jun 23 21:38:23.964: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
+Jun 23 21:38:23.966: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
+Jun 23 21:38:23.969: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
+Jun 23 21:38:23.977: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
+Jun 23 21:38:23.977: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
+Jun 23 21:38:23.980: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
+Jun 23 21:38:23.985: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
+Jun 23 21:38:23.985: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
+Jun 23 21:38:23.991: INFO: Updating deployment nginx-deployment
+Jun 23 21:38:23.991: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
+Jun 23 21:38:24.001: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
+Jun 23 21:38:24.004: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
+Jun 23 21:38:24.019: INFO: Deployment "nginx-deployment":
+&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-w5dp6/deployments/nginx-deployment,UID:30ca473d-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5957,Generation:3,CreationTimestamp:2019-06-23 21:38:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[{Progressing True 2019-06-23 21:38:22 +0000 UTC 2019-06-23 21:38:07 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-65bbdb5f8" is progressing.} {Available False 2019-06-23 21:38:24 +0000 UTC 2019-06-23 21:38:24 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}
+
+Jun 23 21:38:24.034: INFO: New ReplicaSet "nginx-deployment-65bbdb5f8" of Deployment "nginx-deployment":
+&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8,GenerateName:,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-w5dp6/replicasets/nginx-deployment-65bbdb5f8,UID:39280987-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5951,Generation:3,CreationTimestamp:2019-06-23 21:38:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 30ca473d-95ff-11e9-8956-98039b22fc2c 0xc001818787 0xc001818788}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
+Jun 23 21:38:24.034: INFO: All old ReplicaSets of Deployment "nginx-deployment":
+Jun 23 21:38:24.034: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965,GenerateName:,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-w5dp6/replicasets/nginx-deployment-555b55d965,UID:30cba851-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5948,Generation:3,CreationTimestamp:2019-06-23 21:38:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 30ca473d-95ff-11e9-8956-98039b22fc2c 0xc0018186b7 0xc0018186b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
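+
+The replica counts in these dumps follow the Deployment controller's proportional-scaling arithmetic, which is worth spelling out once: the rollout may run at most replicas + maxSurge = 30 + 3 = 33 pods (matching the `deployment.kubernetes.io/max-replicas: 33` annotation on both ReplicaSets), and the new ReplicaSet's desired count of 13 plus the old ReplicaSet's 20 sums exactly to that cap. Availability is judged against replicas - maxUnavailable = 30 - 2 = 28; with only 8 pods available, the Deployment's `Available` condition is `False` with reason `MinimumReplicasUnavailable`, exactly as the status above reports.
+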
+Jun 23 21:38:24.062: INFO: Pod "nginx-deployment-555b55d965-27gsh" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-27gsh,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-27gsh,UID:3a63a220-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5996,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0024ed8e0 0xc0024ed8e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024ed950} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024ed970}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.063: INFO: Pod "nginx-deployment-555b55d965-29hj5" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-29hj5,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-29hj5,UID:30cd67de-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5880,Generation:0,CreationTimestamp:2019-06-23 21:38:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0024ed9e0 0xc0024ed9e1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024eda50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024eda70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:07 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:10.251.128.6,StartTime:2019-06-23 21:38:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-23 21:38:10 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://abd81c31c5663fd38bedeba010ba51689f4a7a3c83e59459c44a03bb1c5a533c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.064: INFO: Pod "nginx-deployment-555b55d965-5bl75" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-5bl75,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-5bl75,UID:30d22ee6-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5872,Generation:0,CreationTimestamp:2019-06-23 21:38:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0024edb37 0xc0024edb38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024edbb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024edbd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:07 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:10.251.128.7,StartTime:2019-06-23 21:38:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-23 21:38:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e507272c51a5355c76a362cc47bbf9c1e92f9949f0d500b43a8cae1e1924c68d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.065: INFO: Pod "nginx-deployment-555b55d965-8b6gv" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-8b6gv,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-8b6gv,UID:3a621c24-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5979,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0024edc97 0xc0024edc98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024edd10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024edd30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.066: INFO: Pod "nginx-deployment-555b55d965-8bnm4" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-8bnm4,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-8bnm4,UID:30cf7850-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5869,Generation:0,CreationTimestamp:2019-06-23 21:38:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0024edda0 0xc0024edda1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024ede10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024ede30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:07 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:10.251.128.10,StartTime:2019-06-23 21:38:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-23 21:38:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://68caf60aa80bec447090d7b2ab9a06ab90510b827dc60c27d0bd224d1bb3e38b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.066: INFO: Pod "nginx-deployment-555b55d965-8fjqh" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-8fjqh,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-8fjqh,UID:3a603f2d-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5964,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0024edef7 0xc0024edef8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0024edf70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0024edf90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.067: INFO: Pod "nginx-deployment-555b55d965-c5sxm" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-c5sxm,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-c5sxm,UID:30d22488-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5863,Generation:0,CreationTimestamp:2019-06-23 21:38:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d0000 0xc0020d0001}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d0070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d0090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:07 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:10.251.128.11,StartTime:2019-06-23 21:38:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-23 21:38:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://7b5d575ab192070549815ff2db0cd9978b97df9a3805f520ac4e790e1d2f121d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.068: INFO: Pod "nginx-deployment-555b55d965-cx9m4" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-cx9m4,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-cx9m4,UID:3a63b207-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5995,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d0167 0xc0020d0168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d01e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d0200}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.068: INFO: Pod "nginx-deployment-555b55d965-d8qqs" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-d8qqs,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-d8qqs,UID:3a6200e7-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5973,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d0270 0xc0020d0271}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d02e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d0300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.068: INFO: Pod "nginx-deployment-555b55d965-j8xq6" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-j8xq6,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-j8xq6,UID:30cf6a11-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5846,Generation:0,CreationTimestamp:2019-06-23 21:38:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d0370 0xc0020d0371}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d03e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d0400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:07 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:10.251.128.9,StartTime:2019-06-23 21:38:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-23 21:38:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://11705b09856281980c1a7b9ae2b33957ec0e92d922792490a86e0c1bfe532cb6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.068: INFO: Pod "nginx-deployment-555b55d965-m58b7" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-m58b7,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-m58b7,UID:3a5f9c72-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5954,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d04c7 0xc0020d04c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d0540} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d0560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.068: INFO: Pod "nginx-deployment-555b55d965-mt8xk" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-mt8xk,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-mt8xk,UID:30ce15dd-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5875,Generation:0,CreationTimestamp:2019-06-23 21:38:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d05d0 0xc0020d05d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d0640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d0660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:07 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:10.251.128.13,StartTime:2019-06-23 21:38:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-23 21:38:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6053fd93f83c7caf53440c3731db4fc4d93254989d0d604c2504520c99dbfcf9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.070: INFO: Pod "nginx-deployment-555b55d965-pgllc" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-pgllc,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-pgllc,UID:3a62159f-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5977,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d0727 0xc0020d0728}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d07a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d07c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.070: INFO: Pod "nginx-deployment-555b55d965-rjgq6" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-rjgq6,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-rjgq6,UID:3a603a7f-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5959,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d0830 0xc0020d0831}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d08a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d08c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.071: INFO: Pod "nginx-deployment-555b55d965-rwmtl" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-rwmtl,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-rwmtl,UID:30ce2289-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5853,Generation:0,CreationTimestamp:2019-06-23 21:38:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d0930 0xc0020d0931}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d09a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d09c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:07 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:10.251.128.8,StartTime:2019-06-23 21:38:07 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-23 21:38:10 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://8409a25729f88b6916fbdfb8affd672792f13340f7c060f365bb2d70871fcb73}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.071: INFO: Pod "nginx-deployment-555b55d965-tr75f" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-tr75f,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-tr75f,UID:3a637f20-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5990,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d0a87 0xc0020d0a88}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d0b00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d0b20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.071: INFO: Pod "nginx-deployment-555b55d965-ttqvc" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-ttqvc,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-ttqvc,UID:3a637056-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5988,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d0b90 0xc0020d0b91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d0c00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d0c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.071: INFO: Pod "nginx-deployment-555b55d965-vpwhm" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-vpwhm,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-vpwhm,UID:30cf872d-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5860,Generation:0,CreationTimestamp:2019-06-23 21:38:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d0c90 0xc0020d0c91}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d0d00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d0d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:07 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:10.251.128.12,StartTime:2019-06-23 21:38:08 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-23 21:38:12 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://776ec1b695608dfa2c31b8f4bd55b2649237f632009f9689db7062e629b20da7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.071: INFO: Pod "nginx-deployment-555b55d965-w9ksh" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-w9ksh,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-w9ksh,UID:3a638a67-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5993,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d0de7 0xc0020d0de8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d0e60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d0e80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.072: INFO: Pod "nginx-deployment-555b55d965-zt66s" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-555b55d965-zt66s,GenerateName:nginx-deployment-555b55d965-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-555b55d965-zt66s,UID:3a622726-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5984,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 555b55d965,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-555b55d965 30cba851-95ff-11e9-8956-98039b22fc2c 0xc0020d0ef0 0xc0020d0ef1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d0f60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d0f80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.072: INFO: Pod "nginx-deployment-65bbdb5f8-5lm6q" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-5lm6q,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-5lm6q,UID:3a64f261-95ff-11e9-8956-98039b22fc2c,ResourceVersion:6003,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d0ff0 0xc0020d0ff1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d1070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d1090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.072: INFO: Pod "nginx-deployment-65bbdb5f8-bv6mm" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-bv6mm,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-bv6mm,UID:3a639145-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5998,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d1100 0xc0020d1101}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d1180} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d11a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.072: INFO: Pod "nginx-deployment-65bbdb5f8-g5crs" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-g5crs,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-g5crs,UID:392fde33-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5946,Generation:0,CreationTimestamp:2019-06-23 21:38:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d1210 0xc0020d1211}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d1290} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d12b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:22 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:,StartTime:2019-06-23 21:38:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.072: INFO: Pod "nginx-deployment-65bbdb5f8-g5p9x" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-g5p9x,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-g5p9x,UID:39293246-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5921,Generation:0,CreationTimestamp:2019-06-23 21:38:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d1370 0xc0020d1371}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d13f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d1410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:21 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:,StartTime:2019-06-23 21:38:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.072: INFO: Pod "nginx-deployment-65bbdb5f8-gchrg" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-gchrg,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-gchrg,UID:392e9653-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5943,Generation:0,CreationTimestamp:2019-06-23 21:38:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d14d0 0xc0020d14d1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d1550} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d1570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:22 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:,StartTime:2019-06-23 21:38:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.073: INFO: Pod "nginx-deployment-65bbdb5f8-gq4wj" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-gq4wj,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-gq4wj,UID:3a638538-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5994,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d1630 0xc0020d1631}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d16b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d16d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.073: INFO: Pod "nginx-deployment-65bbdb5f8-p4dzd" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-p4dzd,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-p4dzd,UID:39293bac-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5930,Generation:0,CreationTimestamp:2019-06-23 21:38:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d1740 0xc0020d1741}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d17c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d17e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:21 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:,StartTime:2019-06-23 21:38:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.073: INFO: Pod "nginx-deployment-65bbdb5f8-p8jv9" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-p8jv9,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-p8jv9,UID:3a639fcf-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5997,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d18a0 0xc0020d18a1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d1920} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d1940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.073: INFO: Pod "nginx-deployment-65bbdb5f8-q9bjq" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-q9bjq,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-q9bjq,UID:3a6201ee-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5970,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d19b0 0xc0020d19b1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d1a30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d1a50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.073: INFO: Pod "nginx-deployment-65bbdb5f8-qcfbc" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-qcfbc,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-qcfbc,UID:3a63725d-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5989,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d1ac0 0xc0020d1ac1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d1b40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d1b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.073: INFO: Pod "nginx-deployment-65bbdb5f8-rt7qm" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-rt7qm,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-rt7qm,UID:3a6076c5-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5969,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d1bd0 0xc0020d1bd1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d1c50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d1c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.073: INFO: Pod "nginx-deployment-65bbdb5f8-rt9fk" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-rt9fk,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-rt9fk,UID:39288f7c-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5915,Generation:0,CreationTimestamp:2019-06-23 21:38:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d1ce0 0xc0020d1ce1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d1d60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d1d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:21 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:21 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:,StartTime:2019-06-23 21:38:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 23 21:38:24.073: INFO: Pod "nginx-deployment-65bbdb5f8-sxbp6" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-65bbdb5f8-sxbp6,GenerateName:nginx-deployment-65bbdb5f8-,Namespace:e2e-tests-deployment-w5dp6,SelfLink:/api/v1/namespaces/e2e-tests-deployment-w5dp6/pods/nginx-deployment-65bbdb5f8-sxbp6,UID:3a6208b8-95ff-11e9-8956-98039b22fc2c,ResourceVersion:5972,Generation:0,CreationTimestamp:2019-06-23 21:38:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 65bbdb5f8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-65bbdb5f8 39280987-95ff-11e9-8956-98039b22fc2c 0xc0020d1e40 0xc0020d1e41}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-8tcd5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8tcd5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-8tcd5 true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0020d1ec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0020d1ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 21:38:24 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:38:24.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-deployment-w5dp6" for this suite.
+Jun 23 21:38:30.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:38:30.128: INFO: namespace: e2e-tests-deployment-w5dp6, resource: bindings, ignored listing per whitelist
+Jun 23 21:38:30.176: INFO: namespace e2e-tests-deployment-w5dp6 deletion completed in 6.098331687s
+
+• [SLOW TEST:22.325 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  deployment should support proportional scaling [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Update Demo 
+  should do a rolling update of a replication controller  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:38:30.176: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[BeforeEach] [k8s.io] Update Demo
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
+[It] should do a rolling update of a replication controller  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating the initial replication controller
+Jun 23 21:38:30.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:38:31.293: INFO: stderr: ""
+Jun 23 21:38:31.293: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun 23 21:38:31.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:38:31.420: INFO: stderr: ""
+Jun 23 21:38:31.420: INFO: stdout: "update-demo-nautilus-cvfx7 update-demo-nautilus-q5ms2 "
+Jun 23 21:38:31.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-cvfx7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:38:31.557: INFO: stderr: ""
+Jun 23 21:38:31.557: INFO: stdout: ""
+Jun 23 21:38:31.557: INFO: update-demo-nautilus-cvfx7 is created but not running
+Jun 23 21:38:36.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:38:36.713: INFO: stderr: ""
+Jun 23 21:38:36.713: INFO: stdout: "update-demo-nautilus-cvfx7 update-demo-nautilus-q5ms2 "
+Jun 23 21:38:36.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-cvfx7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:38:36.859: INFO: stderr: ""
+Jun 23 21:38:36.859: INFO: stdout: ""
+Jun 23 21:38:36.859: INFO: update-demo-nautilus-cvfx7 is created but not running
+Jun 23 21:38:41.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:38:42.007: INFO: stderr: ""
+Jun 23 21:38:42.007: INFO: stdout: "update-demo-nautilus-cvfx7 update-demo-nautilus-q5ms2 "
+Jun 23 21:38:42.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-cvfx7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:38:42.155: INFO: stderr: ""
+Jun 23 21:38:42.155: INFO: stdout: ""
+Jun 23 21:38:42.155: INFO: update-demo-nautilus-cvfx7 is created but not running
+Jun 23 21:38:47.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:38:47.283: INFO: stderr: ""
+Jun 23 21:38:47.283: INFO: stdout: "update-demo-nautilus-cvfx7 update-demo-nautilus-q5ms2 "
+Jun 23 21:38:47.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-cvfx7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:38:47.419: INFO: stderr: ""
+Jun 23 21:38:47.419: INFO: stdout: "true"
+Jun 23 21:38:47.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-cvfx7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:38:47.554: INFO: stderr: ""
+Jun 23 21:38:47.554: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun 23 21:38:47.554: INFO: validating pod update-demo-nautilus-cvfx7
+Jun 23 21:38:47.561: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun 23 21:38:47.561: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun 23 21:38:47.561: INFO: update-demo-nautilus-cvfx7 is verified up and running
+Jun 23 21:38:47.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-q5ms2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:38:47.704: INFO: stderr: ""
+Jun 23 21:38:47.704: INFO: stdout: "true"
+Jun 23 21:38:47.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-q5ms2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:38:47.842: INFO: stderr: ""
+Jun 23 21:38:47.842: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun 23 21:38:47.842: INFO: validating pod update-demo-nautilus-q5ms2
+Jun 23 21:38:47.849: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun 23 21:38:47.849: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun 23 21:38:47.849: INFO: update-demo-nautilus-q5ms2 is verified up and running
+STEP: rolling-update to new replication controller
+Jun 23 21:38:47.852: INFO: scanned /root for discovery docs: 
+Jun 23 21:38:47.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 rolling-update update-demo-nautilus --update-period=1s -f - --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:39:10.288: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
+Jun 23 21:39:10.288: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun 23 21:39:10.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:39:10.440: INFO: stderr: ""
+Jun 23 21:39:10.440: INFO: stdout: "update-demo-kitten-jz77l update-demo-kitten-qnpgq "
+Jun 23 21:39:10.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-kitten-jz77l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:39:10.582: INFO: stderr: ""
+Jun 23 21:39:10.582: INFO: stdout: "true"
+Jun 23 21:39:10.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-kitten-jz77l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:39:10.712: INFO: stderr: ""
+Jun 23 21:39:10.712: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
+Jun 23 21:39:10.712: INFO: validating pod update-demo-kitten-jz77l
+Jun 23 21:39:10.719: INFO: got data: {
+  "image": "kitten.jpg"
+}
+
+Jun 23 21:39:10.719: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
+Jun 23 21:39:10.719: INFO: update-demo-kitten-jz77l is verified up and running
+Jun 23 21:39:10.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-kitten-qnpgq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:39:10.868: INFO: stderr: ""
+Jun 23 21:39:10.868: INFO: stdout: "true"
+Jun 23 21:39:10.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-kitten-qnpgq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-h4vl9'
+Jun 23 21:39:11.022: INFO: stderr: ""
+Jun 23 21:39:11.022: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
+Jun 23 21:39:11.022: INFO: validating pod update-demo-kitten-qnpgq
+Jun 23 21:39:11.029: INFO: got data: {
+  "image": "kitten.jpg"
+}
+
+Jun 23 21:39:11.029: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
+Jun 23 21:39:11.029: INFO: update-demo-kitten-qnpgq is verified up and running
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:39:11.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-h4vl9" for this suite.
+Jun 23 21:39:41.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:39:41.102: INFO: namespace: e2e-tests-kubectl-h4vl9, resource: bindings, ignored listing per whitelist
+Jun 23 21:39:41.123: INFO: namespace e2e-tests-kubectl-h4vl9 deletion completed in 30.090312846s
+
+• [SLOW TEST:70.947 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Update Demo
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should do a rolling update of a replication controller  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[sig-api-machinery] Namespaces [Serial] 
+  should ensure that all services are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:39:41.124: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename namespaces
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should ensure that all services are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a test namespace
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Creating a service in the namespace
+STEP: Deleting the namespace
+STEP: Waiting for the namespace to be removed.
+STEP: Recreating the namespace
+STEP: Verifying there is no service in the namespace
+[AfterEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:39:47.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-namespaces-hdwhv" for this suite.
+Jun 23 21:39:53.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:39:53.356: INFO: namespace: e2e-tests-namespaces-hdwhv, resource: bindings, ignored listing per whitelist
+Jun 23 21:39:53.363: INFO: namespace e2e-tests-namespaces-hdwhv deletion completed in 6.090678339s
+STEP: Destroying namespace "e2e-tests-nsdeletetest-87pqd" for this suite.
+Jun 23 21:39:53.366: INFO: Namespace e2e-tests-nsdeletetest-87pqd was already deleted
+STEP: Destroying namespace "e2e-tests-nsdeletetest-zcrpd" for this suite.
+Jun 23 21:39:59.376: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:39:59.404: INFO: namespace: e2e-tests-nsdeletetest-zcrpd, resource: bindings, ignored listing per whitelist
+Jun 23 21:39:59.458: INFO: namespace e2e-tests-nsdeletetest-zcrpd deletion completed in 6.091753017s
+
+• [SLOW TEST:18.334 seconds]
+[sig-api-machinery] Namespaces [Serial]
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should ensure that all services are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should rollback without unnecessary restarts [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:39:59.458: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
+[It] should rollback without unnecessary restarts [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 21:39:59.542: INFO: Requires at least 2 nodes (not -1)
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+Jun 23 21:39:59.549: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/e2e-tests-daemonsets-6gms9/daemonsets","resourceVersion":"6481"},"items":null}
+
+Jun 23 21:39:59.551: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/e2e-tests-daemonsets-6gms9/pods","resourceVersion":"6481"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:39:59.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-daemonsets-6gms9" for this suite.
+Jun 23 21:40:05.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:40:05.650: INFO: namespace: e2e-tests-daemonsets-6gms9, resource: bindings, ignored listing per whitelist
+Jun 23 21:40:05.650: INFO: namespace e2e-tests-daemonsets-6gms9 deletion completed in 6.089396912s
+
+S [SKIPPING] [6.192 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should rollback without unnecessary restarts [Conformance] [It]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+
+  Jun 23 21:39:59.542: Requires at least 2 nodes (not -1)
+
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:292
+------------------------------
+SSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Update Demo 
+  should create and stop a replication controller  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:40:05.651: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[BeforeEach] [k8s.io] Update Demo
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295
+[It] should create and stop a replication controller  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating a replication controller
+Jun 23 21:40:05.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-rjvrn'
+Jun 23 21:40:05.979: INFO: stderr: ""
+Jun 23 21:40:05.979: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun 23 21:40:05.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rjvrn'
+Jun 23 21:40:06.110: INFO: stderr: ""
+Jun 23 21:40:06.110: INFO: stdout: "update-demo-nautilus-dzjf5 update-demo-nautilus-xztm7 "
+Jun 23 21:40:06.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-dzjf5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rjvrn'
+Jun 23 21:40:06.231: INFO: stderr: ""
+Jun 23 21:40:06.231: INFO: stdout: ""
+Jun 23 21:40:06.231: INFO: update-demo-nautilus-dzjf5 is created but not running
+Jun 23 21:40:11.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-rjvrn'
+Jun 23 21:40:11.378: INFO: stderr: ""
+Jun 23 21:40:11.378: INFO: stdout: "update-demo-nautilus-dzjf5 update-demo-nautilus-xztm7 "
+Jun 23 21:40:11.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-dzjf5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rjvrn'
+Jun 23 21:40:11.490: INFO: stderr: ""
+Jun 23 21:40:11.490: INFO: stdout: "true"
+Jun 23 21:40:11.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-dzjf5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rjvrn'
+Jun 23 21:40:11.630: INFO: stderr: ""
+Jun 23 21:40:11.630: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun 23 21:40:11.630: INFO: validating pod update-demo-nautilus-dzjf5
+Jun 23 21:40:11.636: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun 23 21:40:11.637: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun 23 21:40:11.637: INFO: update-demo-nautilus-dzjf5 is verified up and running
+Jun 23 21:40:11.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-xztm7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rjvrn'
+Jun 23 21:40:11.769: INFO: stderr: ""
+Jun 23 21:40:11.769: INFO: stdout: "true"
+Jun 23 21:40:11.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-xztm7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-rjvrn'
+Jun 23 21:40:11.922: INFO: stderr: ""
+Jun 23 21:40:11.922: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun 23 21:40:11.922: INFO: validating pod update-demo-nautilus-xztm7
+Jun 23 21:40:11.929: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun 23 21:40:11.929: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun 23 21:40:11.929: INFO: update-demo-nautilus-xztm7 is verified up and running
+STEP: using delete to clean up resources
+Jun 23 21:40:11.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rjvrn'
+Jun 23 21:40:12.040: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 23 21:40:12.040: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
+Jun 23 21:40:12.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-rjvrn'
+Jun 23 21:40:12.185: INFO: stderr: "No resources found.\n"
+Jun 23 21:40:12.185: INFO: stdout: ""
+Jun 23 21:40:12.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -l name=update-demo --namespace=e2e-tests-kubectl-rjvrn -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Jun 23 21:40:12.341: INFO: stderr: ""
+Jun 23 21:40:12.341: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:40:12.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-rjvrn" for this suite.
+Jun 23 21:40:34.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:40:34.424: INFO: namespace: e2e-tests-kubectl-rjvrn, resource: bindings, ignored listing per whitelist
+Jun 23 21:40:34.438: INFO: namespace e2e-tests-kubectl-rjvrn deletion completed in 22.092982116s
+
+• [SLOW TEST:28.787 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Update Demo
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should create and stop a replication controller  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-node] Downward API 
+  should provide host IP as an env var [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:40:34.438: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide host IP as an env var [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward api env vars
+Jun 23 21:40:34.516: INFO: Waiting up to 5m0s for pod "downward-api-88290a32-95ff-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-fsvpw" to be "success or failure"
+Jun 23 21:40:34.519: INFO: Pod "downward-api-88290a32-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.935247ms
+Jun 23 21:40:36.523: INFO: Pod "downward-api-88290a32-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006648803s
+Jun 23 21:40:38.526: INFO: Pod "downward-api-88290a32-95ff-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010446089s
+STEP: Saw pod success
+Jun 23 21:40:38.526: INFO: Pod "downward-api-88290a32-95ff-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:40:38.529: INFO: Trying to get logs from node minion pod downward-api-88290a32-95ff-11e9-9086-ba438756bc32 container dapi-container: 
+STEP: delete the pod
+Jun 23 21:40:38.550: INFO: Waiting for pod downward-api-88290a32-95ff-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:40:38.556: INFO: Pod downward-api-88290a32-95ff-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:40:38.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-fsvpw" for this suite.
+Jun 23 21:40:44.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:40:44.620: INFO: namespace: e2e-tests-downward-api-fsvpw, resource: bindings, ignored listing per whitelist
+Jun 23 21:40:44.651: INFO: namespace e2e-tests-downward-api-fsvpw deletion completed in 6.091743734s
+
+• [SLOW TEST:10.213 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
+  should provide host IP as an env var [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:40:44.651: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 21:40:44.728: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8e3f49f5-95ff-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-2h7wr" to be "success or failure"
+Jun 23 21:40:44.731: INFO: Pod "downwardapi-volume-8e3f49f5-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.966931ms
+Jun 23 21:40:46.735: INFO: Pod "downwardapi-volume-8e3f49f5-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006540333s
+Jun 23 21:40:48.738: INFO: Pod "downwardapi-volume-8e3f49f5-95ff-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010297366s
+STEP: Saw pod success
+Jun 23 21:40:48.738: INFO: Pod "downwardapi-volume-8e3f49f5-95ff-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:40:48.742: INFO: Trying to get logs from node minion pod downwardapi-volume-8e3f49f5-95ff-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 21:40:48.759: INFO: Waiting for pod downwardapi-volume-8e3f49f5-95ff-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:40:48.762: INFO: Pod downwardapi-volume-8e3f49f5-95ff-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:40:48.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-2h7wr" for this suite.
+Jun 23 21:40:54.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:40:54.829: INFO: namespace: e2e-tests-projected-2h7wr, resource: bindings, ignored listing per whitelist
+Jun 23 21:40:54.856: INFO: namespace e2e-tests-projected-2h7wr deletion completed in 6.090545074s
+
+• [SLOW TEST:10.204 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:40:54.856: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating projection with secret that has name projected-secret-test-map-94552e5a-95ff-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 21:40:54.940: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9455aa0a-95ff-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-bvxz9" to be "success or failure"
+Jun 23 21:40:54.943: INFO: Pod "pod-projected-secrets-9455aa0a-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.701141ms
+Jun 23 21:40:56.947: INFO: Pod "pod-projected-secrets-9455aa0a-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00638157s
+Jun 23 21:40:58.951: INFO: Pod "pod-projected-secrets-9455aa0a-95ff-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010135397s
+STEP: Saw pod success
+Jun 23 21:40:58.951: INFO: Pod "pod-projected-secrets-9455aa0a-95ff-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:40:58.954: INFO: Trying to get logs from node minion pod pod-projected-secrets-9455aa0a-95ff-11e9-9086-ba438756bc32 container projected-secret-volume-test: 
+STEP: delete the pod
+Jun 23 21:40:58.979: INFO: Waiting for pod pod-projected-secrets-9455aa0a-95ff-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:40:58.985: INFO: Pod pod-projected-secrets-9455aa0a-95ff-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:40:58.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-bvxz9" for this suite.
+Jun 23 21:41:04.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:41:05.055: INFO: namespace: e2e-tests-projected-bvxz9, resource: bindings, ignored listing per whitelist
+Jun 23 21:41:05.078: INFO: namespace e2e-tests-projected-bvxz9 deletion completed in 6.089735655s
+
+• [SLOW TEST:10.222 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
+  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[k8s.io] Variable Expansion 
+  should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:41:05.078: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename var-expansion
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test env composition
+Jun 23 21:41:05.155: INFO: Waiting up to 5m0s for pod "var-expansion-9a6c42a4-95ff-11e9-9086-ba438756bc32" in namespace "e2e-tests-var-expansion-vn9kq" to be "success or failure"
+Jun 23 21:41:05.158: INFO: Pod "var-expansion-9a6c42a4-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.564199ms
+Jun 23 21:41:07.161: INFO: Pod "var-expansion-9a6c42a4-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006185551s
+Jun 23 21:41:09.165: INFO: Pod "var-expansion-9a6c42a4-95ff-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009721223s
+STEP: Saw pod success
+Jun 23 21:41:09.165: INFO: Pod "var-expansion-9a6c42a4-95ff-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:41:09.168: INFO: Trying to get logs from node minion pod var-expansion-9a6c42a4-95ff-11e9-9086-ba438756bc32 container dapi-container: 
+STEP: delete the pod
+Jun 23 21:41:09.185: INFO: Waiting for pod var-expansion-9a6c42a4-95ff-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:41:09.188: INFO: Pod var-expansion-9a6c42a4-95ff-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:41:09.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-var-expansion-vn9kq" for this suite.
+Jun 23 21:41:15.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:41:15.247: INFO: namespace: e2e-tests-var-expansion-vn9kq, resource: bindings, ignored listing per whitelist
+Jun 23 21:41:15.292: INFO: namespace e2e-tests-var-expansion-vn9kq deletion completed in 6.100378788s
+
+• [SLOW TEST:10.214 seconds]
+[k8s.io] Variable Expansion
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should allow composing env vars into new env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-apps] ReplicaSet 
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] ReplicaSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:41:15.292: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename replicaset
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 21:41:15.362: INFO: Creating ReplicaSet my-hostname-basic-a082a540-95ff-11e9-9086-ba438756bc32
+Jun 23 21:41:15.368: INFO: Pod name my-hostname-basic-a082a540-95ff-11e9-9086-ba438756bc32: Found 0 pods out of 1
+Jun 23 21:41:20.372: INFO: Pod name my-hostname-basic-a082a540-95ff-11e9-9086-ba438756bc32: Found 1 pods out of 1
+Jun 23 21:41:20.373: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-a082a540-95ff-11e9-9086-ba438756bc32" is running
+Jun 23 21:41:20.376: INFO: Pod "my-hostname-basic-a082a540-95ff-11e9-9086-ba438756bc32-7dcsb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-23 21:41:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-23 21:41:17 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-23 21:41:17 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-23 21:41:15 +0000 UTC Reason: Message:}])
+Jun 23 21:41:20.376: INFO: Trying to dial the pod
+Jun 23 21:41:25.389: INFO: Controller my-hostname-basic-a082a540-95ff-11e9-9086-ba438756bc32: Got expected result from replica 1 [my-hostname-basic-a082a540-95ff-11e9-9086-ba438756bc32-7dcsb]: "my-hostname-basic-a082a540-95ff-11e9-9086-ba438756bc32-7dcsb", 1 of 1 required successes so far
+[AfterEach] [sig-apps] ReplicaSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:41:25.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-replicaset-wbqpg" for this suite.
+Jun 23 21:41:31.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:41:31.445: INFO: namespace: e2e-tests-replicaset-wbqpg, resource: bindings, ignored listing per whitelist
+Jun 23 21:41:31.490: INFO: namespace e2e-tests-replicaset-wbqpg deletion completed in 6.097289132s
+
+• [SLOW TEST:16.198 seconds]
+[sig-apps] ReplicaSet
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:41:31.490: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: create the deployment
+STEP: Wait for the Deployment to create new ReplicaSet
+STEP: delete the deployment
+STEP: wait for all rs to be garbage collected
+STEP: expected 0 rs, got 1 rs
+STEP: expected 0 pods, got 2 pods
+STEP: Gathering metrics
+Jun 23 21:41:32.630: INFO: For apiserver_request_count:
+For apiserver_request_latencies_summary:
+For etcd_helper_cache_entry_count:
+For etcd_helper_cache_hit_count:
+For etcd_helper_cache_miss_count:
+For etcd_request_cache_add_latencies_summary:
+For etcd_request_cache_get_latencies_summary:
+For etcd_request_latencies_summary:
+For garbage_collector_attempt_to_delete_queue_latency:
+	[quantile=0.5] = 248
+	[quantile=0.9] = 142485
+	[quantile=0.99] = 235192
+For garbage_collector_attempt_to_delete_work_duration:
+	[quantile=0.5] = 28663
+	[quantile=0.9] = 915897
+	[quantile=0.99] = 999189
+For garbage_collector_attempt_to_orphan_queue_latency:
+	[quantile=0.5] = 11
+	[quantile=0.9] = 11
+	[quantile=0.99] = 11
+For garbage_collector_attempt_to_orphan_work_duration:
+	[quantile=0.5] = 26624
+	[quantile=0.9] = 26624
+	[quantile=0.99] = 26624
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+	[quantile=0.5] = 6
+	[quantile=0.9] = 9
+	[quantile=0.99] = 35
+For garbage_collector_graph_changes_work_duration:
+	[quantile=0.5] = 19
+	[quantile=0.9] = 32
+	[quantile=0.99] = 52
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+	[quantile=0.5] = 20
+	[quantile=0.9] = 26
+	[quantile=0.99] = 32
+For namespace_queue_latency_sum:
+	[] = 3290
+For namespace_queue_latency_count:
+	[] = 161
+For namespace_retries:
+	[] = 164
+For namespace_work_duration:
+	[quantile=0.5] = 183027
+	[quantile=0.9] = 234284
+	[quantile=0.99] = 309089
+For namespace_work_duration_sum:
+	[] = 20488431
+For namespace_work_duration_count:
+	[] = 161
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:41:32.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-gc-7p2cq" for this suite.
+Jun 23 21:41:38.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:41:38.698: INFO: namespace: e2e-tests-gc-7p2cq, resource: bindings, ignored listing per whitelist
+Jun 23 21:41:38.725: INFO: namespace e2e-tests-gc-7p2cq deletion completed in 6.091620511s
+
+• [SLOW TEST:7.235 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:41:38.726: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-test-volume-map-ae7967e1-95ff-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 21:41:38.799: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae79e515-95ff-11e9-9086-ba438756bc32" in namespace "e2e-tests-configmap-4kbxw" to be "success or failure"
+Jun 23 21:41:38.802: INFO: Pod "pod-configmaps-ae79e515-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.834158ms
+Jun 23 21:41:40.805: INFO: Pod "pod-configmaps-ae79e515-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006477204s
+Jun 23 21:41:42.809: INFO: Pod "pod-configmaps-ae79e515-95ff-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010197334s
+STEP: Saw pod success
+Jun 23 21:41:42.809: INFO: Pod "pod-configmaps-ae79e515-95ff-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:41:42.812: INFO: Trying to get logs from node minion pod pod-configmaps-ae79e515-95ff-11e9-9086-ba438756bc32 container configmap-volume-test: 
+STEP: delete the pod
+Jun 23 21:41:42.830: INFO: Waiting for pod pod-configmaps-ae79e515-95ff-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:41:42.832: INFO: Pod pod-configmaps-ae79e515-95ff-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:41:42.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-4kbxw" for this suite.
+Jun 23 21:41:48.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:41:48.904: INFO: namespace: e2e-tests-configmap-4kbxw, resource: bindings, ignored listing per whitelist
+Jun 23 21:41:48.926: INFO: namespace e2e-tests-configmap-4kbxw deletion completed in 6.090194866s
+
+• [SLOW TEST:10.200 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:41:48.926: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-test-volume-b48eebf2-95ff-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 21:41:49.006: INFO: Waiting up to 5m0s for pod "pod-configmaps-b48f715d-95ff-11e9-9086-ba438756bc32" in namespace "e2e-tests-configmap-n9h7v" to be "success or failure"
+Jun 23 21:41:49.009: INFO: Pod "pod-configmaps-b48f715d-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.590609ms
+Jun 23 21:41:51.013: INFO: Pod "pod-configmaps-b48f715d-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006235592s
+Jun 23 21:41:53.016: INFO: Pod "pod-configmaps-b48f715d-95ff-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009842827s
+STEP: Saw pod success
+Jun 23 21:41:53.016: INFO: Pod "pod-configmaps-b48f715d-95ff-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:41:53.019: INFO: Trying to get logs from node minion pod pod-configmaps-b48f715d-95ff-11e9-9086-ba438756bc32 container configmap-volume-test: 
+STEP: delete the pod
+Jun 23 21:41:53.036: INFO: Waiting for pod pod-configmaps-b48f715d-95ff-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:41:53.042: INFO: Pod pod-configmaps-b48f715d-95ff-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:41:53.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-n9h7v" for this suite.
+Jun 23 21:41:59.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:41:59.127: INFO: namespace: e2e-tests-configmap-n9h7v, resource: bindings, ignored listing per whitelist
+Jun 23 21:41:59.136: INFO: namespace e2e-tests-configmap-n9h7v deletion completed in 6.090606614s
+
+• [SLOW TEST:10.210 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
+  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-storage] EmptyDir volumes 
+  volume on default medium should have the correct mode [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:41:59.136: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] volume on default medium should have the correct mode [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test emptydir volume type on node default medium
+Jun 23 21:41:59.212: INFO: Waiting up to 5m0s for pod "pod-baa4a732-95ff-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-8h56n" to be "success or failure"
+Jun 23 21:41:59.214: INFO: Pod "pod-baa4a732-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.649918ms
+Jun 23 21:42:01.218: INFO: Pod "pod-baa4a732-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005961661s
+Jun 23 21:42:03.221: INFO: Pod "pod-baa4a732-95ff-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009431989s
+STEP: Saw pod success
+Jun 23 21:42:03.221: INFO: Pod "pod-baa4a732-95ff-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:42:03.224: INFO: Trying to get logs from node minion pod pod-baa4a732-95ff-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 21:42:03.241: INFO: Waiting for pod pod-baa4a732-95ff-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:42:03.247: INFO: Pod pod-baa4a732-95ff-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:42:03.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-8h56n" for this suite.
+Jun 23 21:42:09.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:42:09.322: INFO: namespace: e2e-tests-emptydir-8h56n, resource: bindings, ignored listing per whitelist
+Jun 23 21:42:09.346: INFO: namespace e2e-tests-emptydir-8h56n deletion completed in 6.095464269s
+
+• [SLOW TEST:10.210 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
+  volume on default medium should have the correct mode [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:42:09.346: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-test-volume-c0baf69c-95ff-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 21:42:09.428: INFO: Waiting up to 5m0s for pod "pod-configmaps-c0bb88a4-95ff-11e9-9086-ba438756bc32" in namespace "e2e-tests-configmap-746bz" to be "success or failure"
+Jun 23 21:42:09.431: INFO: Pod "pod-configmaps-c0bb88a4-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.920475ms
+Jun 23 21:42:11.435: INFO: Pod "pod-configmaps-c0bb88a4-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00672559s
+Jun 23 21:42:13.439: INFO: Pod "pod-configmaps-c0bb88a4-95ff-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010321854s
+STEP: Saw pod success
+Jun 23 21:42:13.439: INFO: Pod "pod-configmaps-c0bb88a4-95ff-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:42:13.441: INFO: Trying to get logs from node minion pod pod-configmaps-c0bb88a4-95ff-11e9-9086-ba438756bc32 container configmap-volume-test: 
+STEP: delete the pod
+Jun 23 21:42:13.458: INFO: Waiting for pod pod-configmaps-c0bb88a4-95ff-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:42:13.464: INFO: Pod pod-configmaps-c0bb88a4-95ff-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:42:13.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-746bz" for this suite.
+Jun 23 21:42:19.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:42:19.494: INFO: namespace: e2e-tests-configmap-746bz, resource: bindings, ignored listing per whitelist
+Jun 23 21:42:19.559: INFO: namespace e2e-tests-configmap-746bz deletion completed in 6.091699075s
+
+• [SLOW TEST:10.213 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:42:19.560: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 21:42:19.639: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6d11133-95ff-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-gjfms" to be "success or failure"
+Jun 23 21:42:19.642: INFO: Pod "downwardapi-volume-c6d11133-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.819343ms
+Jun 23 21:42:21.646: INFO: Pod "downwardapi-volume-c6d11133-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006333822s
+Jun 23 21:42:23.649: INFO: Pod "downwardapi-volume-c6d11133-95ff-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009910378s
+STEP: Saw pod success
+Jun 23 21:42:23.649: INFO: Pod "downwardapi-volume-c6d11133-95ff-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:42:23.653: INFO: Trying to get logs from node minion pod downwardapi-volume-c6d11133-95ff-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 21:42:23.669: INFO: Waiting for pod downwardapi-volume-c6d11133-95ff-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:42:23.672: INFO: Pod downwardapi-volume-c6d11133-95ff-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:42:23.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-gjfms" for this suite.
+Jun 23 21:42:29.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:42:29.756: INFO: namespace: e2e-tests-projected-gjfms, resource: bindings, ignored listing per whitelist
+Jun 23 21:42:29.767: INFO: namespace e2e-tests-projected-gjfms deletion completed in 6.091540687s
+
+• [SLOW TEST:10.207 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSS
+------------------------------
+[k8s.io] Docker Containers 
+  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:42:29.767: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test override arguments
+Jun 23 21:42:29.845: INFO: Waiting up to 5m0s for pod "client-containers-cce6f822-95ff-11e9-9086-ba438756bc32" in namespace "e2e-tests-containers-c7jrd" to be "success or failure"
+Jun 23 21:42:29.848: INFO: Pod "client-containers-cce6f822-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.643024ms
+Jun 23 21:42:31.852: INFO: Pod "client-containers-cce6f822-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006084887s
+Jun 23 21:42:33.855: INFO: Pod "client-containers-cce6f822-95ff-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009504528s
+STEP: Saw pod success
+Jun 23 21:42:33.855: INFO: Pod "client-containers-cce6f822-95ff-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:42:33.858: INFO: Trying to get logs from node minion pod client-containers-cce6f822-95ff-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 21:42:33.875: INFO: Waiting for pod client-containers-cce6f822-95ff-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:42:33.877: INFO: Pod client-containers-cce6f822-95ff-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:42:33.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-containers-c7jrd" for this suite.
+Jun 23 21:42:39.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:42:39.971: INFO: namespace: e2e-tests-containers-c7jrd, resource: bindings, ignored listing per whitelist
+Jun 23 21:42:39.991: INFO: namespace e2e-tests-containers-c7jrd deletion completed in 6.110258022s
+
+• [SLOW TEST:10.224 seconds]
+[k8s.io] Docker Containers
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:42:39.992: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name projected-configmap-test-volume-map-d2ff3ec2-95ff-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 21:42:40.074: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d2ffc382-95ff-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-x6q6s" to be "success or failure"
+Jun 23 21:42:40.077: INFO: Pod "pod-projected-configmaps-d2ffc382-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.909733ms
+Jun 23 21:42:42.081: INFO: Pod "pod-projected-configmaps-d2ffc382-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006404451s
+Jun 23 21:42:44.084: INFO: Pod "pod-projected-configmaps-d2ffc382-95ff-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009714643s
+STEP: Saw pod success
+Jun 23 21:42:44.084: INFO: Pod "pod-projected-configmaps-d2ffc382-95ff-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:42:44.087: INFO: Trying to get logs from node minion pod pod-projected-configmaps-d2ffc382-95ff-11e9-9086-ba438756bc32 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun 23 21:42:44.103: INFO: Waiting for pod pod-projected-configmaps-d2ffc382-95ff-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:42:44.106: INFO: Pod pod-projected-configmaps-d2ffc382-95ff-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:42:44.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-x6q6s" for this suite.
+Jun 23 21:42:50.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:42:50.128: INFO: namespace: e2e-tests-projected-x6q6s, resource: bindings, ignored listing per whitelist
+Jun 23 21:42:50.201: INFO: namespace e2e-tests-projected-x6q6s deletion completed in 6.091503698s
+
+• [SLOW TEST:10.209 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSS
+------------------------------
+[sig-network] DNS 
+  should provide DNS for services  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-network] DNS
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:42:50.201: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename dns
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide DNS for services  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a test headless service
+STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8gdwx A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-8gdwx;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8gdwx A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-8gdwx;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8gdwx.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.e2e-tests-dns-8gdwx.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8gdwx.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.e2e-tests-dns-8gdwx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8gdwx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.e2e-tests-dns-8gdwx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8gdwx.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.e2e-tests-dns-8gdwx.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-8gdwx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 254.6.241.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.241.6.254_udp@PTR;check="$$(dig +tcp +noall +answer +search 254.6.241.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.241.6.254_tcp@PTR;sleep 1; done
+
+STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8gdwx A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-8gdwx;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8gdwx A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-8gdwx;check="$$(dig +notcp +noall +answer +search dns-test-service.e2e-tests-dns-8gdwx.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.e2e-tests-dns-8gdwx.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.e2e-tests-dns-8gdwx.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.e2e-tests-dns-8gdwx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8gdwx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.e2e-tests-dns-8gdwx.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.e2e-tests-dns-8gdwx.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.e2e-tests-dns-8gdwx.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".e2e-tests-dns-8gdwx.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 254.6.241.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.241.6.254_udp@PTR;check="$$(dig +tcp +noall +answer +search 254.6.241.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.241.6.254_tcp@PTR;sleep 1; done
+
+STEP: creating a pod to probe DNS
+STEP: submitting the pod to kubernetes
+STEP: retrieving the pod
+STEP: looking for the results for each expected name from probers
+Jun 23 21:42:54.317: INFO: Unable to read wheezy_udp@dns-test-service from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.320: INFO: Unable to read wheezy_tcp@dns-test-service from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.328: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-8gdwx from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.337: INFO: Unable to read wheezy_tcp@dns-test-service.e2e-tests-dns-8gdwx.svc from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.342: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.346: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.371: INFO: Unable to read jessie_udp@dns-test-service from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.374: INFO: Unable to read jessie_tcp@dns-test-service from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.377: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8gdwx from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.381: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8gdwx from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.384: INFO: Unable to read jessie_udp@dns-test-service.e2e-tests-dns-8gdwx.svc from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.388: INFO: Unable to read jessie_tcp@dns-test-service.e2e-tests-dns-8gdwx.svc from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.391: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.395: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc from pod e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32: the server could not find the requested resource (get pods dns-test-d9168f79-95ff-11e9-9086-ba438756bc32)
+Jun 23 21:42:54.415: INFO: Lookups using e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_tcp@dns-test-service.e2e-tests-dns-8gdwx wheezy_tcp@dns-test-service.e2e-tests-dns-8gdwx.svc wheezy_udp@_http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc wheezy_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.e2e-tests-dns-8gdwx jessie_tcp@dns-test-service.e2e-tests-dns-8gdwx jessie_udp@dns-test-service.e2e-tests-dns-8gdwx.svc jessie_tcp@dns-test-service.e2e-tests-dns-8gdwx.svc jessie_udp@_http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc jessie_tcp@_http._tcp.dns-test-service.e2e-tests-dns-8gdwx.svc]
+
+Jun 23 21:42:59.513: INFO: DNS probes using e2e-tests-dns-8gdwx/dns-test-d9168f79-95ff-11e9-9086-ba438756bc32 succeeded
+
+STEP: deleting the pod
+STEP: deleting the test service
+STEP: deleting the test headless service
+[AfterEach] [sig-network] DNS
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:42:59.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-dns-8gdwx" for this suite.
+Jun 23 21:43:05.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:43:05.646: INFO: namespace: e2e-tests-dns-8gdwx, resource: bindings, ignored listing per whitelist
+Jun 23 21:43:05.653: INFO: namespace e2e-tests-dns-8gdwx deletion completed in 6.099491408s
+
+• [SLOW TEST:15.452 seconds]
+[sig-network] DNS
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  should provide DNS for services  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:43:05.653: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 21:43:05.731: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e24a981b-95ff-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-hfd5p" to be "success or failure"
+Jun 23 21:43:05.734: INFO: Pod "downwardapi-volume-e24a981b-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.863158ms
+Jun 23 21:43:07.738: INFO: Pod "downwardapi-volume-e24a981b-95ff-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006527923s
+Jun 23 21:43:09.742: INFO: Pod "downwardapi-volume-e24a981b-95ff-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010122941s
+STEP: Saw pod success
+Jun 23 21:43:09.742: INFO: Pod "downwardapi-volume-e24a981b-95ff-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:43:09.745: INFO: Trying to get logs from node minion pod downwardapi-volume-e24a981b-95ff-11e9-9086-ba438756bc32 container client-container: <nil>
+STEP: delete the pod
+Jun 23 21:43:09.762: INFO: Waiting for pod downwardapi-volume-e24a981b-95ff-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:43:09.767: INFO: Pod downwardapi-volume-e24a981b-95ff-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:43:09.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-hfd5p" for this suite.
+Jun 23 21:43:15.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:43:15.810: INFO: namespace: e2e-tests-projected-hfd5p, resource: bindings, ignored listing per whitelist
+Jun 23 21:43:15.866: INFO: namespace e2e-tests-projected-hfd5p deletion completed in 6.095520777s
+
+• [SLOW TEST:10.213 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Deployment 
+  deployment should delete old replica sets [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:43:15.867: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
+[It] deployment should delete old replica sets [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 21:43:15.944: INFO: Pod name cleanup-pod: Found 0 pods out of 1
+Jun 23 21:43:20.947: INFO: Pod name cleanup-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running
+Jun 23 21:43:20.948: INFO: Creating deployment test-cleanup-deployment
+STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
+Jun 23 21:43:20.965: INFO: Deployment "test-cleanup-deployment":
+&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:e2e-tests-deployment-9xmct,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-9xmct/deployments/test-cleanup-deployment,UID:eb5f0230-95ff-11e9-8956-98039b22fc2c,ResourceVersion:7234,Generation:1,CreationTimestamp:2019-06-23 21:43:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}
+
+Jun 23 21:43:20.968: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:43:20.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-deployment-9xmct" for this suite.
+Jun 23 21:43:26.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:43:27.038: INFO: namespace: e2e-tests-deployment-9xmct, resource: bindings, ignored listing per whitelist
+Jun 23 21:43:27.077: INFO: namespace e2e-tests-deployment-9xmct deletion completed in 6.102040833s
+
+• [SLOW TEST:11.210 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  deployment should delete old replica sets [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:43:27.077: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: create the rc1
+STEP: create the rc2
+STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
+STEP: delete the rc simpletest-rc-to-be-deleted
+STEP: wait for the rc to be deleted
+STEP: Gathering metrics
+Jun 23 21:43:37.238: INFO: For apiserver_request_count:
+For apiserver_request_latencies_summary:
+For etcd_helper_cache_entry_count:
+For etcd_helper_cache_hit_count:
+For etcd_helper_cache_miss_count:
+For etcd_request_cache_add_latencies_summary:
+For etcd_request_cache_get_latencies_summary:
+For etcd_request_latencies_summary:
+For garbage_collector_attempt_to_delete_queue_latency:
+	[quantile=0.5] = 252
+	[quantile=0.9] = 192092
+	[quantile=0.99] = 392322
+For garbage_collector_attempt_to_delete_work_duration:
+	[quantile=0.5] = 29349
+	[quantile=0.9] = 815869
+	[quantile=0.99] = 999189
+For garbage_collector_attempt_to_orphan_queue_latency:
+	[quantile=0.5] = 11
+	[quantile=0.9] = 11
+	[quantile=0.99] = 11
+For garbage_collector_attempt_to_orphan_work_duration:
+	[quantile=0.5] = 26624
+	[quantile=0.9] = 26624
+	[quantile=0.99] = 26624
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+	[quantile=0.5] = 6
+	[quantile=0.9] = 9
+	[quantile=0.99] = 38
+For garbage_collector_graph_changes_work_duration:
+	[quantile=0.5] = 18
+	[quantile=0.9] = 34
+	[quantile=0.99] = 54
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+	[quantile=0.5] = 17
+	[quantile=0.9] = 25
+	[quantile=0.99] = 34
+For namespace_queue_latency_sum:
+	[] = 3708
+For namespace_queue_latency_count:
+	[] = 184
+For namespace_retries:
+	[] = 186
+For namespace_work_duration:
+	[quantile=0.5] = 183027
+	[quantile=0.9] = 228731
+	[quantile=0.99] = 309089
+For namespace_work_duration_sum:
+	[] = 22874330
+For namespace_work_duration_count:
+	[] = 184
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:43:37.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-gc-tvfhw" for this suite.
+Jun 23 21:43:43.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:43:43.318: INFO: namespace: e2e-tests-gc-tvfhw, resource: bindings, ignored listing per whitelist
+Jun 23 21:43:43.335: INFO: namespace e2e-tests-gc-tvfhw deletion completed in 6.093151389s
+
+• [SLOW TEST:16.258 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:43:43.336: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
+[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:44:43.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-probe-52lx7" for this suite.
+Jun 23 21:45:05.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:45:05.474: INFO: namespace: e2e-tests-container-probe-52lx7, resource: bindings, ignored listing per whitelist
+Jun 23 21:45:05.517: INFO: namespace e2e-tests-container-probe-52lx7 deletion completed in 22.097441941s
+
+• [SLOW TEST:82.182 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for intra-pod communication: udp [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:45:05.518: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6xvbv
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jun 23 21:45:05.590: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Jun 23 21:45:27.639: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.251.128.7:8080/dial?request=hostName&protocol=udp&host=10.251.128.6&port=8081&tries=1'] Namespace:e2e-tests-pod-network-test-6xvbv PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 21:45:27.639: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 21:45:27.877: INFO: Waiting for endpoints: map[]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:45:27.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pod-network-test-6xvbv" for this suite.
+Jun 23 21:45:49.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:45:49.928: INFO: namespace: e2e-tests-pod-network-test-6xvbv, resource: bindings, ignored listing per whitelist
+Jun 23 21:45:49.974: INFO: namespace e2e-tests-pod-network-test-6xvbv deletion completed in 22.092051554s
+
+• [SLOW TEST:44.456 seconds]
+[sig-network] Networking
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for intra-pod communication: udp [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[sig-node] ConfigMap 
+  should be consumable via environment variable [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-node] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:45:49.974: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable via environment variable [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap e2e-tests-configmap-9p6v8/configmap-test-443c048d-9600-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 21:45:50.056: INFO: Waiting up to 5m0s for pod "pod-configmaps-443c96ce-9600-11e9-9086-ba438756bc32" in namespace "e2e-tests-configmap-9p6v8" to be "success or failure"
+Jun 23 21:45:50.058: INFO: Pod "pod-configmaps-443c96ce-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.82641ms
+Jun 23 21:45:52.062: INFO: Pod "pod-configmaps-443c96ce-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006464506s
+Jun 23 21:45:54.065: INFO: Pod "pod-configmaps-443c96ce-9600-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009739569s
+STEP: Saw pod success
+Jun 23 21:45:54.065: INFO: Pod "pod-configmaps-443c96ce-9600-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:45:54.068: INFO: Trying to get logs from node minion pod pod-configmaps-443c96ce-9600-11e9-9086-ba438756bc32 container env-test: <nil>
+STEP: delete the pod
+Jun 23 21:45:54.088: INFO: Waiting for pod pod-configmaps-443c96ce-9600-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:45:54.091: INFO: Pod pod-configmaps-443c96ce-9600-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-node] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:45:54.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-9p6v8" for this suite.
+Jun 23 21:46:00.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:46:00.131: INFO: namespace: e2e-tests-configmap-9p6v8, resource: bindings, ignored listing per whitelist
+Jun 23 21:46:00.188: INFO: namespace e2e-tests-configmap-9p6v8 deletion completed in 6.092891967s
+
+• [SLOW TEST:10.214 seconds]
+[sig-node] ConfigMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
+  should be consumable via environment variable [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:46:00.188: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-test-volume-map-4a5263d7-9600-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 21:46:00.268: INFO: Waiting up to 5m0s for pod "pod-configmaps-4a52f7cb-9600-11e9-9086-ba438756bc32" in namespace "e2e-tests-configmap-dwvv6" to be "success or failure"
+Jun 23 21:46:00.271: INFO: Pod "pod-configmaps-4a52f7cb-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.57344ms
+Jun 23 21:46:02.274: INFO: Pod "pod-configmaps-4a52f7cb-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006114671s
+Jun 23 21:46:04.278: INFO: Pod "pod-configmaps-4a52f7cb-9600-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009741982s
+STEP: Saw pod success
+Jun 23 21:46:04.278: INFO: Pod "pod-configmaps-4a52f7cb-9600-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:46:04.281: INFO: Trying to get logs from node minion pod pod-configmaps-4a52f7cb-9600-11e9-9086-ba438756bc32 container configmap-volume-test: <nil>
+STEP: delete the pod
+Jun 23 21:46:04.302: INFO: Waiting for pod pod-configmaps-4a52f7cb-9600-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:46:04.308: INFO: Pod pod-configmaps-4a52f7cb-9600-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:46:04.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-dwvv6" for this suite.
+Jun 23 21:46:10.321: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:46:10.407: INFO: namespace: e2e-tests-configmap-dwvv6, resource: bindings, ignored listing per whitelist
+Jun 23 21:46:10.407: INFO: namespace e2e-tests-configmap-dwvv6 deletion completed in 6.095451223s
+
+• [SLOW TEST:10.219 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
+  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:46:10.407: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
+[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 21:46:10.478: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:46:14.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pods-2xt2l" for this suite.
+Jun 23 21:46:54.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:46:54.598: INFO: namespace: e2e-tests-pods-2xt2l, resource: bindings, ignored listing per whitelist
+Jun 23 21:46:54.611: INFO: namespace e2e-tests-pods-2xt2l deletion completed in 40.091890851s
+
+• [SLOW TEST:44.204 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:46:54.612: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name secret-test-6ac2edff-9600-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 21:46:54.693: INFO: Waiting up to 5m0s for pod "pod-secrets-6ac3870a-9600-11e9-9086-ba438756bc32" in namespace "e2e-tests-secrets-gcbhn" to be "success or failure"
+Jun 23 21:46:54.696: INFO: Pod "pod-secrets-6ac3870a-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637644ms
+Jun 23 21:46:56.699: INFO: Pod "pod-secrets-6ac3870a-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00627845s
+Jun 23 21:46:58.703: INFO: Pod "pod-secrets-6ac3870a-9600-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009866653s
+STEP: Saw pod success
+Jun 23 21:46:58.703: INFO: Pod "pod-secrets-6ac3870a-9600-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:46:58.706: INFO: Trying to get logs from node minion pod pod-secrets-6ac3870a-9600-11e9-9086-ba438756bc32 container secret-volume-test: <nil>
+STEP: delete the pod
+Jun 23 21:46:58.723: INFO: Waiting for pod pod-secrets-6ac3870a-9600-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:46:58.726: INFO: Pod pod-secrets-6ac3870a-9600-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:46:58.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-secrets-gcbhn" for this suite.
+Jun 23 21:47:04.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:47:04.749: INFO: namespace: e2e-tests-secrets-gcbhn, resource: bindings, ignored listing per whitelist
+Jun 23 21:47:04.831: INFO: namespace e2e-tests-secrets-gcbhn deletion completed in 6.101571154s
+
+• [SLOW TEST:10.219 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
+  should be consumable from pods in volume with defaultMode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:47:04.831: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name secret-test-70da45b2-9600-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 21:47:04.912: INFO: Waiting up to 5m0s for pod "pod-secrets-70dac2d9-9600-11e9-9086-ba438756bc32" in namespace "e2e-tests-secrets-nk6st" to be "success or failure"
+Jun 23 21:47:04.916: INFO: Pod "pod-secrets-70dac2d9-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 3.564564ms
+Jun 23 21:47:06.919: INFO: Pod "pod-secrets-70dac2d9-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007166193s
+Jun 23 21:47:08.923: INFO: Pod "pod-secrets-70dac2d9-9600-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011092223s
+STEP: Saw pod success
+Jun 23 21:47:08.923: INFO: Pod "pod-secrets-70dac2d9-9600-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:47:08.926: INFO: Trying to get logs from node minion pod pod-secrets-70dac2d9-9600-11e9-9086-ba438756bc32 container secret-volume-test: <nil>
+STEP: delete the pod
+Jun 23 21:47:08.944: INFO: Waiting for pod pod-secrets-70dac2d9-9600-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:47:08.950: INFO: Pod pod-secrets-70dac2d9-9600-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:47:08.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-secrets-nk6st" for this suite.
+Jun 23 21:47:14.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:47:15.041: INFO: namespace: e2e-tests-secrets-nk6st, resource: bindings, ignored listing per whitelist
+Jun 23 21:47:15.049: INFO: namespace e2e-tests-secrets-nk6st deletion completed in 6.095349679s
+
+• [SLOW TEST:10.218 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:47:15.049: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating projection with secret that has name projected-secret-test-76f19383-9600-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 21:47:15.132: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-76f22e33-9600-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-2rjf2" to be "success or failure"
+Jun 23 21:47:15.135: INFO: Pod "pod-projected-secrets-76f22e33-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.710401ms
+Jun 23 21:47:17.138: INFO: Pod "pod-projected-secrets-76f22e33-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006308351s
+Jun 23 21:47:19.142: INFO: Pod "pod-projected-secrets-76f22e33-9600-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009894868s
+STEP: Saw pod success
+Jun 23 21:47:19.142: INFO: Pod "pod-projected-secrets-76f22e33-9600-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:47:19.145: INFO: Trying to get logs from node minion pod pod-projected-secrets-76f22e33-9600-11e9-9086-ba438756bc32 container projected-secret-volume-test: <nil>
+STEP: delete the pod
+Jun 23 21:47:19.163: INFO: Waiting for pod pod-projected-secrets-76f22e33-9600-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:47:19.168: INFO: Pod pod-projected-secrets-76f22e33-9600-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:47:19.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-2rjf2" for this suite.
+Jun 23 21:47:25.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:47:25.217: INFO: namespace: e2e-tests-projected-2rjf2, resource: bindings, ignored listing per whitelist
+Jun 23 21:47:25.262: INFO: namespace e2e-tests-projected-2rjf2 deletion completed in 6.089836042s
+
+• [SLOW TEST:10.213 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:47:25.262: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating projection with secret that has name projected-secret-test-7d07de1b-9600-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 21:47:25.344: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7d087724-9600-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-v2vbf" to be "success or failure"
+Jun 23 21:47:25.347: INFO: Pod "pod-projected-secrets-7d087724-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.826076ms
+Jun 23 21:47:27.350: INFO: Pod "pod-projected-secrets-7d087724-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006597436s
+Jun 23 21:47:29.354: INFO: Pod "pod-projected-secrets-7d087724-9600-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009840404s
+STEP: Saw pod success
+Jun 23 21:47:29.354: INFO: Pod "pod-projected-secrets-7d087724-9600-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:47:29.356: INFO: Trying to get logs from node minion pod pod-projected-secrets-7d087724-9600-11e9-9086-ba438756bc32 container projected-secret-volume-test: <nil>
+STEP: delete the pod
+Jun 23 21:47:29.374: INFO: Waiting for pod pod-projected-secrets-7d087724-9600-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:47:29.376: INFO: Pod pod-projected-secrets-7d087724-9600-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:47:29.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-v2vbf" for this suite.
+Jun 23 21:47:35.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:47:35.431: INFO: namespace: e2e-tests-projected-v2vbf, resource: bindings, ignored listing per whitelist
+Jun 23 21:47:35.472: INFO: namespace e2e-tests-projected-v2vbf deletion completed in 6.091908972s
+
+• [SLOW TEST:10.210 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[sig-network] Services 
+  should provide secure master service  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:47:35.472: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename services
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:85
+[It] should provide secure master service  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:47:35.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-services-8cphj" for this suite.
+Jun 23 21:47:41.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:47:41.627: INFO: namespace: e2e-tests-services-8cphj, resource: bindings, ignored listing per whitelist
+Jun 23 21:47:41.639: INFO: namespace e2e-tests-services-8cphj deletion completed in 6.092044182s
+[AfterEach] [sig-network] Services
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:90
+
+• [SLOW TEST:6.167 seconds]
+[sig-network] Services
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  should provide secure master service  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
+  should be possible to delete [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:47:41.640: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[BeforeEach] when scheduling a busybox command that always fails in a pod
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
+[It] should be possible to delete [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:47:41.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubelet-test-xfhs2" for this suite.
+Jun 23 21:47:47.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:47:47.760: INFO: namespace: e2e-tests-kubelet-test-xfhs2, resource: bindings, ignored listing per whitelist
+Jun 23 21:47:47.825: INFO: namespace e2e-tests-kubelet-test-xfhs2 deletion completed in 6.093893138s
+
+• [SLOW TEST:6.185 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  when scheduling a busybox command that always fails in a pod
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
+    should be possible to delete [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[k8s.io] Probing container 
+  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:47:47.825: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
+[It] should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating pod liveness-http in namespace e2e-tests-container-probe-lhz4s
+Jun 23 21:47:51.912: INFO: Started pod liveness-http in namespace e2e-tests-container-probe-lhz4s
+STEP: checking the pod's current state and verifying that restartCount is present
+Jun 23 21:47:51.915: INFO: Initial restart count of pod liveness-http is 0
+Jun 23 21:48:09.949: INFO: Restart count of pod e2e-tests-container-probe-lhz4s/liveness-http is now 1 (18.033645943s elapsed)
+Jun 23 21:48:29.989: INFO: Restart count of pod e2e-tests-container-probe-lhz4s/liveness-http is now 2 (38.073806363s elapsed)
+Jun 23 21:48:50.025: INFO: Restart count of pod e2e-tests-container-probe-lhz4s/liveness-http is now 3 (58.109549098s elapsed)
+Jun 23 21:49:10.060: INFO: Restart count of pod e2e-tests-container-probe-lhz4s/liveness-http is now 4 (1m18.14465597s elapsed)
+Jun 23 21:50:20.192: INFO: Restart count of pod e2e-tests-container-probe-lhz4s/liveness-http is now 5 (2m28.277298685s elapsed)
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:50:20.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-probe-lhz4s" for this suite.
+Jun 23 21:50:26.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:50:26.295: INFO: namespace: e2e-tests-container-probe-lhz4s, resource: bindings, ignored listing per whitelist
+Jun 23 21:50:26.299: INFO: namespace e2e-tests-container-probe-lhz4s deletion completed in 6.091706077s
+
+• [SLOW TEST:158.475 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should have monotonically increasing restart count [Slow][NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SS
+------------------------------
+[k8s.io] InitContainer [NodeConformance] 
+  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:50:26.300: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename init-container
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
+[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating the pod
+Jun 23 21:50:26.373: INFO: PodSpec: initContainers in spec.initContainers
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:50:31.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-init-container-5w7xb" for this suite.
+Jun 23 21:50:37.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:50:37.205: INFO: namespace: e2e-tests-init-container-5w7xb, resource: bindings, ignored listing per whitelist
+Jun 23 21:50:37.247: INFO: namespace e2e-tests-init-container-5w7xb deletion completed in 6.094389496s
+
+• [SLOW TEST:10.947 seconds]
+[k8s.io] InitContainer [NodeConformance]
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:50:37.248: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test emptydir volume type on tmpfs
+Jun 23 21:50:37.324: INFO: Waiting up to 5m0s for pod "pod-ef765593-9600-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-nqtwf" to be "success or failure"
+Jun 23 21:50:37.327: INFO: Pod "pod-ef765593-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.868895ms
+Jun 23 21:50:39.331: INFO: Pod "pod-ef765593-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006515858s
+Jun 23 21:50:41.334: INFO: Pod "pod-ef765593-9600-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009963859s
+STEP: Saw pod success
+Jun 23 21:50:41.334: INFO: Pod "pod-ef765593-9600-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:50:41.337: INFO: Trying to get logs from node minion pod pod-ef765593-9600-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 21:50:41.355: INFO: Waiting for pod pod-ef765593-9600-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:50:41.357: INFO: Pod pod-ef765593-9600-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:50:41.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-nqtwf" for this suite.
+Jun 23 21:50:47.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:50:47.415: INFO: namespace: e2e-tests-emptydir-nqtwf, resource: bindings, ignored listing per whitelist
+Jun 23 21:50:47.457: INFO: namespace e2e-tests-emptydir-nqtwf deletion completed in 6.095685481s
+
+• [SLOW TEST:10.209 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
+  volume on tmpfs should have the correct mode [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
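+
+The EmptyDir test above mounts a tmpfs-backed emptyDir and verifies the
+mount's file mode. Roughly, the pod under test looks like this sketch
+(image, command, and paths are assumptions, not taken from the log):
+
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: emptydir-tmpfs-demo
+  spec:
+    restartPolicy: Never
+    containers:
+    - name: test-container
+      image: busybox
+      command: ["sh", "-c", "ls -ld /test-volume"]   # prints the mount's mode
+      volumeMounts:
+      - name: cache
+        mountPath: /test-volume
+    volumes:
+    - name: cache
+      emptyDir:
+        medium: Memory            # tmpfs instead of node disk
+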
+SSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:50:47.457: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name secret-test-map-f58ca315-9600-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 21:50:47.540: INFO: Waiting up to 5m0s for pod "pod-secrets-f58d2dea-9600-11e9-9086-ba438756bc32" in namespace "e2e-tests-secrets-lc6cc" to be "success or failure"
+Jun 23 21:50:47.543: INFO: Pod "pod-secrets-f58d2dea-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 3.07079ms
+Jun 23 21:50:49.547: INFO: Pod "pod-secrets-f58d2dea-9600-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006619431s
+Jun 23 21:50:51.550: INFO: Pod "pod-secrets-f58d2dea-9600-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010140414s
+STEP: Saw pod success
+Jun 23 21:50:51.550: INFO: Pod "pod-secrets-f58d2dea-9600-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:50:51.553: INFO: Trying to get logs from node minion pod pod-secrets-f58d2dea-9600-11e9-9086-ba438756bc32 container secret-volume-test: 
+STEP: delete the pod
+Jun 23 21:50:51.570: INFO: Waiting for pod pod-secrets-f58d2dea-9600-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:50:51.576: INFO: Pod pod-secrets-f58d2dea-9600-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:50:51.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-secrets-lc6cc" for this suite.
+Jun 23 21:50:57.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:50:57.659: INFO: namespace: e2e-tests-secrets-lc6cc, resource: bindings, ignored listing per whitelist
+Jun 23 21:50:57.675: INFO: namespace e2e-tests-secrets-lc6cc deletion completed in 6.09421615s
+
+• [SLOW TEST:10.218 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
+  should be consumable from pods in volume with mappings and Item Mode set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
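+
+The Secrets volume test above remaps a secret key to a new path and sets a
+per-item file mode. A sketch of that mounting pattern (secret name, key,
+path, and mode are illustrative):
+
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: secret-items-demo
+  spec:
+    restartPolicy: Never
+    containers:
+    - name: secret-volume-test
+      image: busybox
+      command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/new-path-data-1"]
+      volumeMounts:
+      - name: secret-volume
+        mountPath: /etc/secret-volume
+    volumes:
+    - name: secret-volume
+      secret:
+        secretName: my-secret
+        items:
+        - key: data-1             # remapped from its original name
+          path: new-path-data-1
+          mode: 0400              # per-item mode, the "Item Mode" under test
+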
+SSSSSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:50:57.675: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating a watch on configmaps with a certain label
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: changing the label value of the configmap
+STEP: Expecting to observe a delete notification for the watched object
+Jun 23 21:50:57.763: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jgfrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-jgfrk/configmaps/e2e-watch-test-label-changed,UID:fba48a4a-9600-11e9-8956-98039b22fc2c,ResourceVersion:8482,Generation:0,CreationTimestamp:2019-06-23 21:50:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
+Jun 23 21:50:57.763: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jgfrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-jgfrk/configmaps/e2e-watch-test-label-changed,UID:fba48a4a-9600-11e9-8956-98039b22fc2c,ResourceVersion:8483,Generation:0,CreationTimestamp:2019-06-23 21:50:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+Jun 23 21:50:57.763: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jgfrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-jgfrk/configmaps/e2e-watch-test-label-changed,UID:fba48a4a-9600-11e9-8956-98039b22fc2c,ResourceVersion:8484,Generation:0,CreationTimestamp:2019-06-23 21:50:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+STEP: modifying the configmap a second time
+STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
+STEP: changing the label value of the configmap back
+STEP: modifying the configmap a third time
+STEP: deleting the configmap
+STEP: Expecting to observe an add notification for the watched object when the label value was restored
+Jun 23 21:51:07.787: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jgfrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-jgfrk/configmaps/e2e-watch-test-label-changed,UID:fba48a4a-9600-11e9-8956-98039b22fc2c,ResourceVersion:8499,Generation:0,CreationTimestamp:2019-06-23 21:50:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+Jun 23 21:51:07.787: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jgfrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-jgfrk/configmaps/e2e-watch-test-label-changed,UID:fba48a4a-9600-11e9-8956-98039b22fc2c,ResourceVersion:8500,Generation:0,CreationTimestamp:2019-06-23 21:50:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
+Jun 23 21:51:07.787: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:e2e-tests-watch-jgfrk,SelfLink:/api/v1/namespaces/e2e-tests-watch-jgfrk/configmaps/e2e-watch-test-label-changed,UID:fba48a4a-9600-11e9-8956-98039b22fc2c,ResourceVersion:8501,Generation:0,CreationTimestamp:2019-06-23 21:50:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:51:07.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-watch-jgfrk" for this suite.
+Jun 23 21:51:13.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:51:13.839: INFO: namespace: e2e-tests-watch-jgfrk, resource: bindings, ignored listing per whitelist
+Jun 23 21:51:13.885: INFO: namespace e2e-tests-watch-jgfrk deletion completed in 6.093306992s
+
+• [SLOW TEST:16.210 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
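+
+In the Watchers case above, the watch carries a label selector, so changing
+the label on the watched ConfigMap makes it drop out of (and later rejoin)
+the watch, which surfaces as DELETED and ADDED events even though the object
+itself is not deleted at those points. The watched object from the log,
+sketched as a manifest (data shown as of the first mutation):
+
+  apiVersion: v1
+  kind: ConfigMap
+  metadata:
+    name: e2e-watch-test-label-changed
+    labels:
+      watch-this-configmap: label-changed-and-restored   # the selector's label
+  data:
+    mutation: "1"
+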
+[k8s.io] Probing container 
+  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:51:13.885: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
+[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-r6sdm
+Jun 23 21:51:17.975: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-r6sdm
+STEP: checking the pod's current state and verifying that restartCount is present
+Jun 23 21:51:17.978: INFO: Initial restart count of pod liveness-exec is 0
+Jun 23 21:52:12.079: INFO: Restart count of pod e2e-tests-container-probe-r6sdm/liveness-exec is now 1 (54.100790747s elapsed)
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:52:12.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-probe-r6sdm" for this suite.
+Jun 23 21:52:18.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:52:18.157: INFO: namespace: e2e-tests-container-probe-r6sdm, resource: bindings, ignored listing per whitelist
+Jun 23 21:52:18.184: INFO: namespace e2e-tests-container-probe-r6sdm deletion completed in 6.092794341s
+
+• [SLOW TEST:64.299 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
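+
+The probing test above is the classic exec liveness probe: the container
+creates /tmp/health, later removes it, and the kubelet restarts the container
+once "cat /tmp/health" starts failing (the restart at ~54s in the log). A
+sketch in the spirit of that pod (timings and image are assumptions):
+
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: liveness-exec
+  spec:
+    containers:
+    - name: liveness
+      image: busybox
+      args: ["sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
+      livenessProbe:
+        exec:
+          command: ["cat", "/tmp/health"]   # fails after the file is removed
+        initialDelaySeconds: 15
+        periodSeconds: 5
+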
+SSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  should perform rolling updates and roll backs of template modifications [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:52:18.184: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
+STEP: Creating service test in namespace e2e-tests-statefulset-qmsz5
+[It] should perform rolling updates and roll backs of template modifications [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a new StatefulSet
+Jun 23 21:52:18.267: INFO: Found 0 stateful pods, waiting for 3
+Jun 23 21:52:28.271: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Jun 23 21:52:28.271: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Jun 23 21:52:28.271: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+Jun 23 21:52:28.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-qmsz5 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Jun 23 21:52:28.656: INFO: stderr: ""
+Jun 23 21:52:28.656: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Jun 23 21:52:28.656: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
+Jun 23 21:52:38.698: INFO: Updating stateful set ss2
+STEP: Creating a new revision
+STEP: Updating Pods in reverse ordinal order
+Jun 23 21:52:48.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-qmsz5 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 21:52:49.082: INFO: stderr: ""
+Jun 23 21:52:49.082: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Jun 23 21:52:49.082: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Jun 23 21:52:59.102: INFO: Waiting for StatefulSet e2e-tests-statefulset-qmsz5/ss2 to complete update
+Jun 23 21:52:59.102: INFO: Waiting for Pod e2e-tests-statefulset-qmsz5/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666
+Jun 23 21:52:59.102: INFO: Waiting for Pod e2e-tests-statefulset-qmsz5/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666
+Jun 23 21:53:09.110: INFO: Waiting for StatefulSet e2e-tests-statefulset-qmsz5/ss2 to complete update
+Jun 23 21:53:09.110: INFO: Waiting for Pod e2e-tests-statefulset-qmsz5/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666
+STEP: Rolling back to a previous revision
+Jun 23 21:53:19.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-qmsz5 ss2-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Jun 23 21:53:19.471: INFO: stderr: ""
+Jun 23 21:53:19.471: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Jun 23 21:53:19.471: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Jun 23 21:53:19.495: INFO: Updating stateful set ss2
+STEP: Rolling back update in reverse ordinal order
+Jun 23 21:53:29.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-qmsz5 ss2-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 21:53:29.876: INFO: stderr: ""
+Jun 23 21:53:29.876: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Jun 23 21:53:29.876: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Jun 23 21:53:49.896: INFO: Waiting for StatefulSet e2e-tests-statefulset-qmsz5/ss2 to complete update
+Jun 23 21:53:49.896: INFO: Waiting for Pod e2e-tests-statefulset-qmsz5/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
+Jun 23 21:53:59.903: INFO: Deleting all statefulset in ns e2e-tests-statefulset-qmsz5
+Jun 23 21:53:59.906: INFO: Scaling statefulset ss2 to 0
+Jun 23 21:54:09.920: INFO: Waiting for statefulset status.replicas updated to 0
+Jun 23 21:54:09.923: INFO: Deleting statefulset ss2
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:54:09.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-statefulset-qmsz5" for this suite.
+Jun 23 21:54:15.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:54:16.034: INFO: namespace: e2e-tests-statefulset-qmsz5, resource: bindings, ignored listing per whitelist
+Jun 23 21:54:16.043: INFO: namespace e2e-tests-statefulset-qmsz5 deletion completed in 6.104717466s
+
+• [SLOW TEST:117.859 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should perform rolling updates and roll backs of template modifications [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
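+
+The StatefulSet rolling-update test above bumps the pod template image from
+nginx:1.14-alpine to nginx:1.15-alpine, waits for the pods to converge on the
+new controller revision in reverse ordinal order, then rolls back by
+restoring the old template. A minimal sketch of such a StatefulSet (labels
+are assumptions; the name, service, replica count, and images come from the
+log):
+
+  apiVersion: apps/v1
+  kind: StatefulSet
+  metadata:
+    name: ss2
+  spec:
+    serviceName: test
+    replicas: 3
+    selector:
+      matchLabels:
+        app: ss2-demo
+    updateStrategy:
+      type: RollingUpdate         # pods replaced from highest ordinal down
+    template:
+      metadata:
+        labels:
+          app: ss2-demo
+      spec:
+        containers:
+        - name: nginx
+          image: docker.io/library/nginx:1.14-alpine   # updated to 1.15-alpine mid-test
+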
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:54:16.044: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0666,tmpfs) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test emptydir 0666 on tmpfs
+Jun 23 21:54:16.122: INFO: Waiting up to 5m0s for pod "pod-71e02b74-9601-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-kcknp" to be "success or failure"
+Jun 23 21:54:16.125: INFO: Pod "pod-71e02b74-9601-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.774259ms
+Jun 23 21:54:18.128: INFO: Pod "pod-71e02b74-9601-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006224096s
+Jun 23 21:54:20.132: INFO: Pod "pod-71e02b74-9601-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00954629s
+STEP: Saw pod success
+Jun 23 21:54:20.132: INFO: Pod "pod-71e02b74-9601-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:54:20.135: INFO: Trying to get logs from node minion pod pod-71e02b74-9601-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 21:54:20.154: INFO: Waiting for pod pod-71e02b74-9601-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:54:20.160: INFO: Pod pod-71e02b74-9601-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:54:20.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-kcknp" for this suite.
+Jun 23 21:54:26.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:54:26.234: INFO: namespace: e2e-tests-emptydir-kcknp, resource: bindings, ignored listing per whitelist
+Jun 23 21:54:26.254: INFO: namespace e2e-tests-emptydir-kcknp deletion completed in 6.09111619s
+
+• [SLOW TEST:10.211 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
+  should support (root,0666,tmpfs) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
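+
+Unlike the earlier tmpfs mode check, the (root,0666,tmpfs) variant above
+writes a file into the tmpfs emptyDir as root and verifies its contents and
+0666 permissions. A sketch of the idea (image and commands are illustrative):
+
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: emptydir-0666-demo
+  spec:
+    restartPolicy: Never
+    containers:
+    - name: test-container
+      image: busybox
+      command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
+      volumeMounts:
+      - name: test-volume
+        mountPath: /test-volume
+    volumes:
+    - name: test-volume
+      emptyDir:
+        medium: Memory
+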
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:54:26.255: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-test-volume-77f5cb5a-9601-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 21:54:26.334: INFO: Waiting up to 5m0s for pod "pod-configmaps-77f66dc2-9601-11e9-9086-ba438756bc32" in namespace "e2e-tests-configmap-zmw55" to be "success or failure"
+Jun 23 21:54:26.337: INFO: Pod "pod-configmaps-77f66dc2-9601-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.740278ms
+Jun 23 21:54:28.340: INFO: Pod "pod-configmaps-77f66dc2-9601-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006228876s
+Jun 23 21:54:30.343: INFO: Pod "pod-configmaps-77f66dc2-9601-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009689404s
+STEP: Saw pod success
+Jun 23 21:54:30.344: INFO: Pod "pod-configmaps-77f66dc2-9601-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:54:30.346: INFO: Trying to get logs from node minion pod pod-configmaps-77f66dc2-9601-11e9-9086-ba438756bc32 container configmap-volume-test: 
+STEP: delete the pod
+Jun 23 21:54:30.363: INFO: Waiting for pod pod-configmaps-77f66dc2-9601-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:54:30.365: INFO: Pod pod-configmaps-77f66dc2-9601-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:54:30.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-zmw55" for this suite.
+Jun 23 21:54:36.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:54:36.443: INFO: namespace: e2e-tests-configmap-zmw55, resource: bindings, ignored listing per whitelist
+Jun 23 21:54:36.463: INFO: namespace e2e-tests-configmap-zmw55 deletion completed in 6.093742697s
+
+• [SLOW TEST:10.208 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
+  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
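+
+The ConfigMap test above mounts the same ConfigMap at two different paths in
+one pod and reads it through both mounts. Sketch (ConfigMap name, key, and
+paths are illustrative):
+
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: configmap-multi-volume-demo
+  spec:
+    restartPolicy: Never
+    containers:
+    - name: configmap-volume-test
+      image: busybox
+      command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
+      volumeMounts:
+      - name: cm-one
+        mountPath: /etc/cm-one
+      - name: cm-two
+        mountPath: /etc/cm-two
+    volumes:                      # same ConfigMap, two volume entries
+    - name: cm-one
+      configMap:
+        name: my-configmap
+    - name: cm-two
+      configMap:
+        name: my-configmap
+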
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir wrapper volumes 
+  should not conflict [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:54:36.463: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename emptydir-wrapper
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not conflict [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Cleaning up the secret
+STEP: Cleaning up the configmap
+STEP: Cleaning up the pod
+[AfterEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:54:40.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-wrapper-kmc45" for this suite.
+Jun 23 21:54:46.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:54:46.617: INFO: namespace: e2e-tests-emptydir-wrapper-kmc45, resource: bindings, ignored listing per whitelist
+Jun 23 21:54:46.678: INFO: namespace e2e-tests-emptydir-wrapper-kmc45 deletion completed in 6.09238112s
+
+• [SLOW TEST:10.215 seconds]
+[sig-storage] EmptyDir wrapper volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  should not conflict [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
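+
+"Wrapper" volumes (secret, configMap, and similar types that are backed by an
+emptyDir on the node) are exercised above by mounting a secret and a
+ConfigMap side by side in one pod and checking that the wrappers do not
+collide. Sketch (object names are illustrative):
+
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: wrapper-volume-demo
+  spec:
+    restartPolicy: Never
+    containers:
+    - name: test
+      image: busybox
+      command: ["sleep", "60"]
+      volumeMounts:
+      - name: secret-vol
+        mountPath: /etc/secret
+      - name: cm-vol
+        mountPath: /etc/config
+    volumes:
+    - name: secret-vol
+      secret:
+        secretName: my-secret
+    - name: cm-vol
+      configMap:
+        name: my-configmap
+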
+SSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  Should recreate evicted statefulset [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:54:46.678: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
+STEP: Creating service test in namespace e2e-tests-statefulset-rv6zb
+[It] Should recreate evicted statefulset [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Looking for a node to schedule stateful set and pod
+STEP: Creating pod with conflicting port in namespace e2e-tests-statefulset-rv6zb
+STEP: Creating statefulset with conflicting port in namespace e2e-tests-statefulset-rv6zb
+STEP: Waiting until pod test-pod starts running in namespace e2e-tests-statefulset-rv6zb
+STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace e2e-tests-statefulset-rv6zb
+Jun 23 21:54:50.786: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-rv6zb, name: ss-0, uid: 85d55f0a-9601-11e9-8956-98039b22fc2c, status phase: Pending. Waiting for statefulset controller to delete.
+Jun 23 21:54:50.984: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-rv6zb, name: ss-0, uid: 85d55f0a-9601-11e9-8956-98039b22fc2c, status phase: Failed. Waiting for statefulset controller to delete.
+Jun 23 21:54:50.990: INFO: Observed stateful pod in namespace: e2e-tests-statefulset-rv6zb, name: ss-0, uid: 85d55f0a-9601-11e9-8956-98039b22fc2c, status phase: Failed. Waiting for statefulset controller to delete.
+Jun 23 21:54:50.994: INFO: Observed delete event for stateful pod ss-0 in namespace e2e-tests-statefulset-rv6zb
+STEP: Removing pod with conflicting port in namespace e2e-tests-statefulset-rv6zb
+STEP: Waiting until stateful pod ss-0 is recreated in namespace e2e-tests-statefulset-rv6zb and enters the running state
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
+Jun 23 21:54:55.017: INFO: Deleting all statefulset in ns e2e-tests-statefulset-rv6zb
+Jun 23 21:54:55.020: INFO: Scaling statefulset ss to 0
+Jun 23 21:55:05.034: INFO: Waiting for statefulset status.replicas updated to 0
+Jun 23 21:55:05.037: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:55:05.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-statefulset-rv6zb" for this suite.
+Jun 23 21:55:11.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:55:11.105: INFO: namespace: e2e-tests-statefulset-rv6zb, resource: bindings, ignored listing per whitelist
+Jun 23 21:55:11.145: INFO: namespace e2e-tests-statefulset-rv6zb deletion completed in 6.093147248s
+
+• [SLOW TEST:24.468 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    Should recreate evicted statefulset [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
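+
+The eviction test above schedules a bare pod and a single-replica StatefulSet
+pod onto the same node with a conflicting hostPort; the stateful pod fails,
+and the controller keeps recreating it until the conflicting pod is removed.
+A sketch of the colliding StatefulSet (port number, image, and labels are
+assumptions; the node name "minion" does appear elsewhere in the log):
+
+  apiVersion: apps/v1
+  kind: StatefulSet
+  metadata:
+    name: ss
+  spec:
+    serviceName: test
+    replicas: 1
+    selector:
+      matchLabels:
+        app: ss-demo
+    template:
+      metadata:
+        labels:
+          app: ss-demo
+      spec:
+        nodeName: minion          # pin to the node running the conflicting pod
+        containers:
+        - name: nginx
+          image: nginx:1.14-alpine
+          ports:
+          - containerPort: 80
+            hostPort: 21017       # same hostPort as the pre-existing test-pod
+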
+[sig-storage] Projected downwardAPI 
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:55:11.146: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 21:55:11.231: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92b90b06-9601-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-vj4t8" to be "success or failure"
+Jun 23 21:55:11.233: INFO: Pod "downwardapi-volume-92b90b06-9601-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.693572ms
+Jun 23 21:55:13.237: INFO: Pod "downwardapi-volume-92b90b06-9601-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006334166s
+Jun 23 21:55:15.241: INFO: Pod "downwardapi-volume-92b90b06-9601-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009838017s
+STEP: Saw pod success
+Jun 23 21:55:15.241: INFO: Pod "downwardapi-volume-92b90b06-9601-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 21:55:15.243: INFO: Trying to get logs from node minion pod downwardapi-volume-92b90b06-9601-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 21:55:15.264: INFO: Waiting for pod downwardapi-volume-92b90b06-9601-11e9-9086-ba438756bc32 to disappear
+Jun 23 21:55:15.268: INFO: Pod downwardapi-volume-92b90b06-9601-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:55:15.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-vj4t8" for this suite.
+Jun 23 21:55:21.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:55:21.298: INFO: namespace: e2e-tests-projected-vj4t8, resource: bindings, ignored listing per whitelist
+Jun 23 21:55:21.363: INFO: namespace e2e-tests-projected-vj4t8 deletion completed in 6.091238014s
+
+• [SLOW TEST:10.217 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
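+
+The projected downwardAPI test above exposes the container's own memory limit
+as a file in a projected volume and reads it back. Sketch (limit value,
+image, and paths are illustrative):
+
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: projected-downward-demo
+  spec:
+    restartPolicy: Never
+    containers:
+    - name: client-container
+      image: busybox
+      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
+      resources:
+        limits:
+          memory: 64Mi
+      volumeMounts:
+      - name: podinfo
+        mountPath: /etc/podinfo
+    volumes:
+    - name: podinfo
+      projected:
+        sources:
+        - downwardAPI:
+            items:
+            - path: memory_limit
+              resourceFieldRef:
+                containerName: client-container
+                resource: limits.memory    # rendered in bytes
+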
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl expose 
+  should create services for rc  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:55:21.363: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[It] should create services for rc  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating Redis RC
+Jun 23 21:55:21.436: INFO: namespace e2e-tests-kubectl-cbmbk
+Jun 23 21:55:21.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-cbmbk'
+Jun 23 21:55:22.188: INFO: stderr: ""
+Jun 23 21:55:22.188: INFO: stdout: "replicationcontroller/redis-master created\n"
+STEP: Waiting for Redis master to start.
+Jun 23 21:55:23.192: INFO: Selector matched 1 pods for map[app:redis]
+Jun 23 21:55:23.192: INFO: Found 0 / 1
+Jun 23 21:55:24.192: INFO: Selector matched 1 pods for map[app:redis]
+Jun 23 21:55:24.192: INFO: Found 0 / 1
+Jun 23 21:55:25.192: INFO: Selector matched 1 pods for map[app:redis]
+Jun 23 21:55:25.192: INFO: Found 1 / 1
+Jun 23 21:55:25.192: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+Jun 23 21:55:25.196: INFO: Selector matched 1 pods for map[app:redis]
+Jun 23 21:55:25.196: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+Jun 23 21:55:25.196: INFO: wait on redis-master startup in e2e-tests-kubectl-cbmbk 
+Jun 23 21:55:25.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 logs redis-master-zptdh redis-master --namespace=e2e-tests-kubectl-cbmbk'
+Jun 23 21:55:25.345: INFO: stderr: ""
+Jun 23 21:55:25.345: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 23 Jun 21:55:23.902 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Jun 21:55:23.902 # Server started, Redis version 3.2.12\n1:M 23 Jun 21:55:23.902 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Jun 21:55:23.902 * The server is now ready to accept connections on port 6379\n"
+STEP: exposing RC
+Jun 23 21:55:25.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=e2e-tests-kubectl-cbmbk'
+Jun 23 21:55:25.510: INFO: stderr: ""
+Jun 23 21:55:25.510: INFO: stdout: "service/rm2 exposed\n"
+Jun 23 21:55:25.513: INFO: Service rm2 in namespace e2e-tests-kubectl-cbmbk found.
+STEP: exposing service
+Jun 23 21:55:27.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=e2e-tests-kubectl-cbmbk'
+Jun 23 21:55:27.678: INFO: stderr: ""
+Jun 23 21:55:27.678: INFO: stdout: "service/rm3 exposed\n"
+Jun 23 21:55:27.680: INFO: Service rm3 in namespace e2e-tests-kubectl-cbmbk found.
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:55:29.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-cbmbk" for this suite.
+Jun 23 21:55:51.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:55:51.760: INFO: namespace: e2e-tests-kubectl-cbmbk, resource: bindings, ignored listing per whitelist
+Jun 23 21:55:51.782: INFO: namespace e2e-tests-kubectl-cbmbk deletion completed in 22.092434013s
+
+• [SLOW TEST:30.419 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl expose
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should create services for rc  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
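+
+The two expose commands in the log generate Service objects selecting the
+replication controller's pods. The first one ("rm2") is roughly equivalent to
+this manifest (selector inferred from the log's "map[app:redis]" match):
+
+  apiVersion: v1
+  kind: Service
+  metadata:
+    name: rm2
+  spec:
+    selector:
+      app: redis                  # labels of the redis-master pods
+    ports:
+    - port: 1234                  # --port from the expose command
+      targetPort: 6379            # --target-port, redis's listen port
+
+Exposing the service again as "rm3" works the same way, putting port 2345 in
+front of the same target port.
+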
+SSSSSSSS
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute poststart exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:55:51.783: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: create the pod with lifecycle hook
+STEP: check poststart hook
+STEP: delete the pod with lifecycle hook
+Jun 23 21:58:40.905: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun 23 21:58:40.908: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun 23 21:58:42.908: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun 23 21:58:42.912: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun 23 21:58:44.909: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun 23 21:58:44.912: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun 23 21:58:46.909: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun 23 21:58:46.912: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun 23 21:58:48.908: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun 23 21:58:48.912: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun 23 21:58:50.908: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun 23 21:58:50.912: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun 23 21:58:52.909: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun 23 21:58:52.912: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun 23 21:58:54.909: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun 23 21:58:54.912: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun 23 21:58:56.909: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun 23 21:58:56.912: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun 23 21:58:58.909: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun 23 21:58:58.912: INFO: Pod pod-with-poststart-exec-hook still exists
+Jun 23 21:59:00.909: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
+Jun 23 21:59:00.912: INFO: Pod pod-with-poststart-exec-hook no longer exists
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 21:59:00.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-pbwtp" for this suite.
+Jun 23 21:59:22.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 21:59:23.015: INFO: namespace: e2e-tests-container-lifecycle-hook-pbwtp, resource: bindings, ignored listing per whitelist
+Jun 23 21:59:23.015: INFO: namespace e2e-tests-container-lifecycle-hook-pbwtp deletion completed in 22.098439105s
+
+• [SLOW TEST:211.232 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
+    should execute poststart exec hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
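+
+The lifecycle test above attaches a postStart exec hook and confirms it ran,
+using a separate handler pod (the "container to handle the HTTPGet hook
+request" step). A minimal sketch of a pod with such a hook (the hook command
+is illustrative; the real test notifies the handler pod instead):
+
+  apiVersion: v1
+  kind: Pod
+  metadata:
+    name: pod-with-poststart-exec-hook
+  spec:
+    containers:
+    - name: main
+      image: busybox
+      command: ["sleep", "60"]
+      lifecycle:
+        postStart:
+          exec:
+            command: ["sh", "-c", "echo poststart ran > /tmp/poststart"]
+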
+SSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  should perform canary updates and phased rolling updates of template modifications [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 21:59:23.015: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
+STEP: Creating service test in namespace e2e-tests-statefulset-87s2x
+[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a new StatefulSet
+Jun 23 21:59:23.098: INFO: Found 0 stateful pods, waiting for 3
+Jun 23 21:59:33.102: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Jun 23 21:59:33.102: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Jun 23 21:59:33.102: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
+Jun 23 21:59:33.129: INFO: Updating stateful set ss2
+STEP: Creating a new revision
+STEP: Not applying an update when the partition is greater than the number of replicas
+STEP: Performing a canary update
+Jun 23 21:59:43.160: INFO: Updating stateful set ss2
+Jun 23 21:59:43.167: INFO: Waiting for Pod e2e-tests-statefulset-87s2x/ss2-2 to have revision ss2-c79899b9 update revision ss2-787997d666
+STEP: Restoring Pods to the correct revision when they are deleted
+Jun 23 21:59:53.197: INFO: Found 1 stateful pods, waiting for 3
+Jun 23 22:00:03.201: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Jun 23 22:00:03.201: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Jun 23 22:00:03.201: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Performing a phased rolling update
+Jun 23 22:00:03.226: INFO: Updating stateful set ss2
+Jun 23 22:00:03.232: INFO: Waiting for Pod e2e-tests-statefulset-87s2x/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666
+Jun 23 22:00:13.240: INFO: Waiting for Pod e2e-tests-statefulset-87s2x/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666
+Jun 23 22:00:23.257: INFO: Updating stateful set ss2
+Jun 23 22:00:23.264: INFO: Waiting for StatefulSet e2e-tests-statefulset-87s2x/ss2 to complete update
+Jun 23 22:00:23.264: INFO: Waiting for Pod e2e-tests-statefulset-87s2x/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666
+Jun 23 22:00:33.271: INFO: Waiting for StatefulSet e2e-tests-statefulset-87s2x/ss2 to complete update
+Jun 23 22:00:33.272: INFO: Waiting for Pod e2e-tests-statefulset-87s2x/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
+Jun 23 22:00:43.272: INFO: Deleting all statefulset in ns e2e-tests-statefulset-87s2x
+Jun 23 22:00:43.275: INFO: Scaling statefulset ss2 to 0
+Jun 23 22:00:53.289: INFO: Waiting for statefulset status.replicas updated to 0
+Jun 23 22:00:53.292: INFO: Deleting statefulset ss2
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:00:53.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-statefulset-87s2x" for this suite.
+Jun 23 22:00:59.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:00:59.347: INFO: namespace: e2e-tests-statefulset-87s2x, resource: bindings, ignored listing per whitelist
+Jun 23 22:00:59.409: INFO: namespace e2e-tests-statefulset-87s2x deletion completed in 6.100578125s
+
+• [SLOW TEST:96.394 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should perform canary updates and phased rolling updates of template modifications [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
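+
+The canary test above drives the rollout through the RollingUpdate partition:
+with the partition greater than the replica count nothing updates; lowering
+it to 2 updates only ss2-2 (the canary); stepping it toward 0 phases the
+update across the remaining ordinals. Sketch of the relevant strategy
+(labels are assumptions; name, service, replicas, and image come from the
+log):
+
+  apiVersion: apps/v1
+  kind: StatefulSet
+  metadata:
+    name: ss2
+  spec:
+    serviceName: test
+    replicas: 3
+    selector:
+      matchLabels:
+        app: ss2-demo
+    updateStrategy:
+      type: RollingUpdate
+      rollingUpdate:
+        partition: 3              # only ordinals >= partition get the new template
+    template:
+      metadata:
+        labels:
+          app: ss2-demo
+      spec:
+        containers:
+        - name: nginx
+          image: docker.io/library/nginx:1.15-alpine
+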
+SSS
+------------------------------
+[sig-api-machinery] Secrets 
+  should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:00:59.409: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name secret-test-624d301a-9602-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 22:00:59.497: INFO: Waiting up to 5m0s for pod "pod-secrets-624dd1ce-9602-11e9-9086-ba438756bc32" in namespace "e2e-tests-secrets-7gcgv" to be "success or failure"
+Jun 23 22:00:59.500: INFO: Pod "pod-secrets-624dd1ce-9602-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.97673ms
+Jun 23 22:01:01.504: INFO: Pod "pod-secrets-624dd1ce-9602-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006731395s
+Jun 23 22:01:03.508: INFO: Pod "pod-secrets-624dd1ce-9602-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010494678s
+STEP: Saw pod success
+Jun 23 22:01:03.508: INFO: Pod "pod-secrets-624dd1ce-9602-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:01:03.511: INFO: Trying to get logs from node minion pod pod-secrets-624dd1ce-9602-11e9-9086-ba438756bc32 container secret-env-test: 
+STEP: delete the pod
+Jun 23 22:01:03.530: INFO: Waiting for pod pod-secrets-624dd1ce-9602-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:01:03.533: INFO: Pod pod-secrets-624dd1ce-9602-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:01:03.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-secrets-7gcgv" for this suite.
+Jun 23 22:01:09.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:01:09.579: INFO: namespace: e2e-tests-secrets-7gcgv, resource: bindings, ignored listing per whitelist
+Jun 23 22:01:09.634: INFO: namespace e2e-tests-secrets-7gcgv deletion completed in 6.096605405s
+
+• [SLOW TEST:10.224 seconds]
+[sig-api-machinery] Secrets
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
+  should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:01:09.634: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 22:01:09.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6864f325-9602-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-5j8kk" to be "success or failure"
+Jun 23 22:01:09.715: INFO: Pod "downwardapi-volume-6864f325-9602-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.703345ms
+Jun 23 22:01:11.718: INFO: Pod "downwardapi-volume-6864f325-9602-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006261136s
+Jun 23 22:01:13.725: INFO: Pod "downwardapi-volume-6864f325-9602-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012873898s
+STEP: Saw pod success
+Jun 23 22:01:13.725: INFO: Pod "downwardapi-volume-6864f325-9602-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:01:13.728: INFO: Trying to get logs from node minion pod downwardapi-volume-6864f325-9602-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 22:01:13.746: INFO: Waiting for pod downwardapi-volume-6864f325-9602-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:01:13.751: INFO: Pod downwardapi-volume-6864f325-9602-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:01:13.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-5j8kk" for this suite.
+Jun 23 22:01:19.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:01:19.830: INFO: namespace: e2e-tests-downward-api-5j8kk, resource: bindings, ignored listing per whitelist
+Jun 23 22:01:19.846: INFO: namespace e2e-tests-downward-api-5j8kk deletion completed in 6.091616043s
+
+• [SLOW TEST:10.213 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for node-pod communication: udp [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:01:19.847: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for node-pod communication: udp [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-bzr4t
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jun 23 22:01:19.917: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Jun 23 22:01:39.962: INFO: ExecWithOptions {Command:[/bin/sh -c echo 'hostName' | nc -w 1 -u 10.251.128.6 8081 | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-bzr4t PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:01:39.962: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:01:41.176: INFO: Found all expected endpoints: [netserver-0]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:01:41.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pod-network-test-bzr4t" for this suite.
+Jun 23 22:02:03.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:02:03.259: INFO: namespace: e2e-tests-pod-network-test-bzr4t, resource: bindings, ignored listing per whitelist
+Jun 23 22:02:03.272: INFO: namespace e2e-tests-pod-network-test-bzr4t deletion completed in 22.091738074s
+
+• [SLOW TEST:43.425 seconds]
+[sig-network] Networking
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for node-pod communication: udp [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:02:03.272: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 22:02:03.349: INFO: Waiting up to 5m0s for pod "downwardapi-volume-885d5f55-9602-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-wsxfx" to be "success or failure"
+Jun 23 22:02:03.352: INFO: Pod "downwardapi-volume-885d5f55-9602-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.612557ms
+Jun 23 22:02:05.355: INFO: Pod "downwardapi-volume-885d5f55-9602-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005897176s
+Jun 23 22:02:07.358: INFO: Pod "downwardapi-volume-885d5f55-9602-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009294838s
+STEP: Saw pod success
+Jun 23 22:02:07.359: INFO: Pod "downwardapi-volume-885d5f55-9602-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:02:07.361: INFO: Trying to get logs from node minion pod downwardapi-volume-885d5f55-9602-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 22:02:07.378: INFO: Waiting for pod downwardapi-volume-885d5f55-9602-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:02:07.381: INFO: Pod downwardapi-volume-885d5f55-9602-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:02:07.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-wsxfx" for this suite.
+Jun 23 22:02:13.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:02:13.473: INFO: namespace: e2e-tests-downward-api-wsxfx, resource: bindings, ignored listing per whitelist
+Jun 23 22:02:13.477: INFO: namespace e2e-tests-downward-api-wsxfx deletion completed in 6.092143817s
+
+• [SLOW TEST:10.205 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] Deployment 
+  deployment should support rollover [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:02:13.477: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
+[It] deployment should support rollover [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 22:02:13.554: INFO: Pod name rollover-pod: Found 0 pods out of 1
+Jun 23 22:02:18.558: INFO: Pod name rollover-pod: Found 1 pods out of 1
+STEP: ensuring each pod is running
+Jun 23 22:02:18.558: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
+Jun 23 22:02:20.561: INFO: Creating deployment "test-rollover-deployment"
+Jun 23 22:02:20.568: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
+Jun 23 22:02:22.574: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
+Jun 23 22:02:22.580: INFO: Ensure that both replica sets have 1 created replica
+Jun 23 22:02:22.586: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
+Jun 23 22:02:22.593: INFO: Updating deployment test-rollover-deployment
+Jun 23 22:02:22.593: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
+Jun 23 22:02:24.599: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
+Jun 23 22:02:24.605: INFO: Make sure deployment "test-rollover-deployment" is complete
+Jun 23 22:02:24.611: INFO: all replica sets need to contain the pod-template-hash label
+Jun 23 22:02:24.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924142, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun 23 22:02:26.618: INFO: all replica sets need to contain the pod-template-hash label
+Jun 23 22:02:26.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924145, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun 23 22:02:28.619: INFO: all replica sets need to contain the pod-template-hash label
+Jun 23 22:02:28.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924145, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun 23 22:02:30.618: INFO: all replica sets need to contain the pod-template-hash label
+Jun 23 22:02:30.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924145, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun 23 22:02:32.618: INFO: all replica sets need to contain the pod-template-hash label
+Jun 23 22:02:32.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924145, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun 23 22:02:34.623: INFO: all replica sets need to contain the pod-template-hash label
+Jun 23 22:02:34.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924145, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696924140, loc:(*time.Location)(0x7b33b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-6b7f9d6597\" is progressing."}}, CollisionCount:(*int32)(nil)}
+Jun 23 22:02:36.618: INFO: 
+Jun 23 22:02:36.618: INFO: Ensure that both old replica sets have no replicas
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
+Jun 23 22:02:36.627: INFO: Deployment "test-rollover-deployment":
+&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:e2e-tests-deployment-7rbzv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7rbzv/deployments/test-rollover-deployment,UID:92a1798d-9602-11e9-8956-98039b22fc2c,ResourceVersion:10553,Generation:2,CreationTimestamp:2019-06-23 22:02:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-06-23 22:02:20 +0000 UTC 2019-06-23 22:02:20 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-06-23 22:02:35 +0000 UTC 2019-06-23 22:02:20 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-6b7f9d6597" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
+
+Jun 23 22:02:36.631: INFO: New ReplicaSet "test-rollover-deployment-6b7f9d6597" of Deployment "test-rollover-deployment":
+&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6b7f9d6597,GenerateName:,Namespace:e2e-tests-deployment-7rbzv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7rbzv/replicasets/test-rollover-deployment-6b7f9d6597,UID:93d7749c-9602-11e9-8956-98039b22fc2c,ResourceVersion:10544,Generation:2,CreationTimestamp:2019-06-23 22:02:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 92a1798d-9602-11e9-8956-98039b22fc2c 0xc002212167 0xc002212168}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
+Jun 23 22:02:36.631: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
+Jun 23 22:02:36.632: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:e2e-tests-deployment-7rbzv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7rbzv/replicasets/test-rollover-controller,UID:8e734a65-9602-11e9-8956-98039b22fc2c,ResourceVersion:10552,Generation:2,CreationTimestamp:2019-06-23 22:02:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 92a1798d-9602-11e9-8956-98039b22fc2c 0xc0028217b7 0xc0028217b8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
+Jun 23 22:02:36.632: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6586df867b,GenerateName:,Namespace:e2e-tests-deployment-7rbzv,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-7rbzv/replicasets/test-rollover-deployment-6586df867b,UID:92a3e608-9602-11e9-8956-98039b22fc2c,ResourceVersion:10514,Generation:2,CreationTimestamp:2019-06-23 22:02:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6586df867b,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 92a1798d-9602-11e9-8956-98039b22fc2c 0xc002212097 0xc002212098}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6586df867b,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6586df867b,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
+Jun 23 22:02:36.636: INFO: Pod "test-rollover-deployment-6b7f9d6597-7vkc4" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6b7f9d6597-7vkc4,GenerateName:test-rollover-deployment-6b7f9d6597-,Namespace:e2e-tests-deployment-7rbzv,SelfLink:/api/v1/namespaces/e2e-tests-deployment-7rbzv/pods/test-rollover-deployment-6b7f9d6597-7vkc4,UID:93da991d-9602-11e9-8956-98039b22fc2c,ResourceVersion:10528,Generation:0,CreationTimestamp:2019-06-23 22:02:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6b7f9d6597,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-6b7f9d6597 93d7749c-9602-11e9-8956-98039b22fc2c 0xc002212c97 0xc002212c98}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-sxwpq {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-sxwpq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-sxwpq true /var/run/secrets/kubernetes.io/serviceaccount  }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002212d10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002212d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:02:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:02:25 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:02:25 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:02:22 +0000 UTC  }],Message:,Reason:,HostIP:10.197.149.12,PodIP:10.251.128.7,StartTime:2019-06-23 22:02:22 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-06-23 22:02:24 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://2ed41c61ec94bc6d689b57da602a3ec393ddf84e237af556b725bfd72ecef59e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:02:36.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-deployment-7rbzv" for this suite.
+Jun 23 22:02:42.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:02:42.707: INFO: namespace: e2e-tests-deployment-7rbzv, resource: bindings, ignored listing per whitelist
+Jun 23 22:02:42.732: INFO: namespace e2e-tests-deployment-7rbzv deletion completed in 6.09246576s
+
+• [SLOW TEST:29.254 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  deployment should support rollover [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
+  should create a job from an image, then delete the job  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:02:42.732: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[It] should create a job from an image, then delete the job  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: executing a command with run --rm and attach with stdin
+Jun 23 22:02:42.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 --namespace=e2e-tests-kubectl-jh5mx run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
+Jun 23 22:02:45.436: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
+Jun 23 22:02:45.436: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
+STEP: verifying the job e2e-test-rm-busybox-job was deleted
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:02:47.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-jh5mx" for this suite.
+Jun 23 22:02:55.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:02:55.507: INFO: namespace: e2e-tests-kubectl-jh5mx, resource: bindings, ignored listing per whitelist
+Jun 23 22:02:55.544: INFO: namespace e2e-tests-kubectl-jh5mx deletion completed in 8.097941704s
+
+• [SLOW TEST:12.812 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl run --rm job
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should create a job from an image, then delete the job  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] Downward API 
+  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:02:55.545: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward api env vars
+Jun 23 22:02:55.622: INFO: Waiting up to 5m0s for pod "downward-api-a7859fad-9602-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-crp5v" to be "success or failure"
+Jun 23 22:02:55.625: INFO: Pod "downward-api-a7859fad-9602-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.726605ms
+Jun 23 22:02:57.629: INFO: Pod "downward-api-a7859fad-9602-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006336165s
+Jun 23 22:02:59.632: INFO: Pod "downward-api-a7859fad-9602-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010047736s
+STEP: Saw pod success
+Jun 23 22:02:59.633: INFO: Pod "downward-api-a7859fad-9602-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:02:59.635: INFO: Trying to get logs from node minion pod downward-api-a7859fad-9602-11e9-9086-ba438756bc32 container dapi-container: 
+STEP: delete the pod
+Jun 23 22:02:59.654: INFO: Waiting for pod downward-api-a7859fad-9602-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:02:59.659: INFO: Pod downward-api-a7859fad-9602-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:02:59.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-crp5v" for this suite.
+Jun 23 22:03:05.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:03:05.701: INFO: namespace: e2e-tests-downward-api-crp5v, resource: bindings, ignored listing per whitelist
+Jun 23 22:03:05.758: INFO: namespace e2e-tests-downward-api-crp5v deletion completed in 6.094742489s
+
+• [SLOW TEST:10.213 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
+  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute prestop http hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:03:05.758: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute prestop http hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: create the pod with lifecycle hook
+STEP: delete the pod with lifecycle hook
+Jun 23 22:03:13.862: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jun 23 22:03:13.866: INFO: Pod pod-with-prestop-http-hook still exists
+Jun 23 22:03:15.866: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jun 23 22:03:15.870: INFO: Pod pod-with-prestop-http-hook still exists
+Jun 23 22:03:17.866: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
+Jun 23 22:03:17.870: INFO: Pod pod-with-prestop-http-hook no longer exists
+STEP: check prestop hook
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:03:17.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-756rc" for this suite.
+Jun 23 22:03:39.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:03:39.929: INFO: namespace: e2e-tests-container-lifecycle-hook-756rc, resource: bindings, ignored listing per whitelist
+Jun 23 22:03:39.975: INFO: namespace e2e-tests-container-lifecycle-hook-756rc deletion completed in 22.092060135s
+
+• [SLOW TEST:34.218 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
+    should execute prestop http hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl describe 
+  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:03:39.976: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 22:03:40.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 version --client'
+Jun 23 22:03:40.127: INFO: stderr: ""
+Jun 23 22:03:40.127: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"13\", GitVersion:\"v1.13.0\", GitCommit:\"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d\", GitTreeState:\"clean\", BuildDate:\"2018-12-03T21:04:45Z\", GoVersion:\"go1.11.2\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
+Jun 23 22:03:40.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-jwm99'
+Jun 23 22:03:40.385: INFO: stderr: ""
+Jun 23 22:03:40.385: INFO: stdout: "replicationcontroller/redis-master created\n"
+Jun 23 22:03:40.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-jwm99'
+Jun 23 22:03:40.626: INFO: stderr: ""
+Jun 23 22:03:40.626: INFO: stdout: "service/redis-master created\n"
+STEP: Waiting for Redis master to start.
+Jun 23 22:03:41.630: INFO: Selector matched 1 pods for map[app:redis]
+Jun 23 22:03:41.630: INFO: Found 0 / 1
+Jun 23 22:03:42.630: INFO: Selector matched 1 pods for map[app:redis]
+Jun 23 22:03:42.630: INFO: Found 0 / 1
+Jun 23 22:03:43.630: INFO: Selector matched 1 pods for map[app:redis]
+Jun 23 22:03:43.630: INFO: Found 1 / 1
+Jun 23 22:03:43.630: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+Jun 23 22:03:43.634: INFO: Selector matched 1 pods for map[app:redis]
+Jun 23 22:03:43.634: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+Jun 23 22:03:43.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 describe pod redis-master-lqq6r --namespace=e2e-tests-kubectl-jwm99'
+Jun 23 22:03:43.804: INFO: stderr: ""
+Jun 23 22:03:43.804: INFO: stdout: "Name:               redis-master-lqq6r\nNamespace:          e2e-tests-kubectl-jwm99\nPriority:           0\nPriorityClassName:  \nNode:               minion/10.197.149.12\nStart Time:         Sun, 23 Jun 2019 22:03:40 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        \nStatus:             Running\nIP:                 10.251.128.6\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://45b696478be7c5064ed9d796fc8f757e5d78786aa59e89a989680687cd0a6e29\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 23 Jun 2019 22:03:42 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hzqcj (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-hzqcj:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-hzqcj\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  3s    default-scheduler  Successfully assigned e2e-tests-kubectl-jwm99/redis-master-lqq6r to minion\n  Normal  Pulled     2s    kubelet, minion    Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, minion    Created container\n  Normal  Started    1s    kubelet, minion    Started container\n"
+Jun 23 22:03:43.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 describe rc redis-master --namespace=e2e-tests-kubectl-jwm99'
+Jun 23 22:03:43.958: INFO: stderr: ""
+Jun 23 22:03:43.958: INFO: stdout: "Name:         redis-master\nNamespace:    e2e-tests-kubectl-jwm99\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: redis-master-lqq6r\n"
+Jun 23 22:03:43.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 describe service redis-master --namespace=e2e-tests-kubectl-jwm99'
+Jun 23 22:03:44.122: INFO: stderr: ""
+Jun 23 22:03:44.122: INFO: stdout: "Name:              redis-master\nNamespace:         e2e-tests-kubectl-jwm99\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.241.70.2\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.251.128.6:6379\nSession Affinity:  None\nEvents:            \n"
+Jun 23 22:03:44.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 describe node master'
+Jun 23 22:03:44.309: INFO: stderr: ""
+Jun 23 22:03:44.309: INFO: stdout: "Name:               master\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/hostname=master\n                    node-role.kubernetes.io/master=\n                    zone=master\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 23 Jun 2019 20:58:59 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sun, 23 Jun 2019 21:00:34 +0000   Sun, 23 Jun 2019 21:00:34 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sun, 23 Jun 2019 22:03:42 +0000   Sun, 23 Jun 2019 20:58:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 23 Jun 2019 22:03:42 +0000   Sun, 23 Jun 2019 20:58:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 23 Jun 2019 22:03:42 +0000   Sun, 23 Jun 2019 20:58:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 23 Jun 2019 22:03:42 +0000   Sun, 23 Jun 2019 21:00:16 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.197.149.11\n  Hostname:    master\nCapacity:\n cpu:                40\n ephemeral-storage:  459118744Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             196699476Ki\n pods:               110\nAllocatable:\n cpu:                39800m\n ephemeral-storage:  423123833770\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             196097076Ki\n pods:               110\nSystem Info:\n Machine ID:                 afe9d9980c0142949c23aef8fc1ea0a0\n System UUID:                53BF926C-7EA7-03E2-B211-D21DE0FB011A\n Boot ID:                    c13e9489-b532-4ae4-9311-8bec19567184\n Kernel Version:             4.15.0-50-generic\n OS Image:                   Ubuntu 16.04.6 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.6.2\n Kubelet Version:            v1.13.5\n Kube-Proxy Version:         v1.13.5\nPodCIDR:                     10.251.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                                       ------------  ----------  ---------------  -------------  ---\n  heptio-sonobuoy            sonobuoy-systemd-logs-daemon-set-ad4137666e344d9a-qj2vq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51m\n  kube-system                coredns-f9d858bbd-drd2x                                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     62m\n  kube-system                dns-autoscaler-7d85c6f945-gm62s                            20m (0%)      0 (0%)      10Mi (0%)        0 (0%)         62m\n  kube-system                kube-apiserver-master                                      250m (0%)     0 (0%)      0 (0%)           0 (0%)         64m\n  kube-system                kube-controller-manager-master                             200m (0%)     0 (0%)      0 (0%)           0 (0%)         64m\n  kube-system                kube-proxy-5dfg8                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         63m\n  kube-system                kube-scheduler-master                                      100m (0%)     0 (0%)      0 (0%)           0 (0%)         64m\n  kube-system                nodelocaldns-vvjgv                                         100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     62m\n  kube-system                weave-net-rfnlv                                            20m (0%)      0 (0%)      0 (0%)           0 (0%)         63m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                790m (1%)   0 (0%)\n  memory             150Mi (0%)  340Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
+Jun 23 22:03:44.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 describe namespace e2e-tests-kubectl-jwm99'
+Jun 23 22:03:44.441: INFO: stderr: ""
+Jun 23 22:03:44.441: INFO: stdout: "Name:         e2e-tests-kubectl-jwm99\nLabels:       e2e-framework=kubectl\n              e2e-run=96609930-95fb-11e9-9086-ba438756bc32\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:03:44.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-jwm99" for this suite.
+Jun 23 22:04:06.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:04:06.523: INFO: namespace: e2e-tests-kubectl-jwm99, resource: bindings, ignored listing per whitelist
+Jun 23 22:04:06.538: INFO: namespace e2e-tests-kubectl-jwm99 deletion completed in 22.093292888s
+
+• [SLOW TEST:26.563 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl describe
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] HostPath 
+  should give a volume the correct mode [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] HostPath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:04:06.539: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename hostpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] HostPath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
+[It] should give a volume the correct mode [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test hostPath mode
+Jun 23 22:04:06.615: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "e2e-tests-hostpath-scnt7" to be "success or failure"
+Jun 23 22:04:06.618: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.98161ms
+Jun 23 22:04:08.621: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006573965s
+Jun 23 22:04:10.625: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010277049s
+STEP: Saw pod success
+Jun 23 22:04:10.625: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
+Jun 23 22:04:10.628: INFO: Trying to get logs from node minion pod pod-host-path-test container test-container-1: <nil>
+STEP: delete the pod
+Jun 23 22:04:10.645: INFO: Waiting for pod pod-host-path-test to disappear
+Jun 23 22:04:10.651: INFO: Pod pod-host-path-test no longer exists
+[AfterEach] [sig-storage] HostPath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:04:10.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-hostpath-scnt7" for this suite.
+Jun 23 22:04:16.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:04:16.723: INFO: namespace: e2e-tests-hostpath-scnt7, resource: bindings, ignored listing per whitelist
+Jun 23 22:04:16.746: INFO: namespace e2e-tests-hostpath-scnt7 deletion completed in 6.091235733s
+
+• [SLOW TEST:10.207 seconds]
+[sig-storage] HostPath
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
+  should give a volume the correct mode [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-node] Downward API 
+  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:04:16.746: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward api env vars
+Jun 23 22:04:16.832: INFO: Waiting up to 5m0s for pod "downward-api-d7ed4601-9602-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-w628f" to be "success or failure"
+Jun 23 22:04:16.836: INFO: Pod "downward-api-d7ed4601-9602-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 3.111308ms
+Jun 23 22:04:18.839: INFO: Pod "downward-api-d7ed4601-9602-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006705459s
+Jun 23 22:04:20.843: INFO: Pod "downward-api-d7ed4601-9602-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010223528s
+STEP: Saw pod success
+Jun 23 22:04:20.843: INFO: Pod "downward-api-d7ed4601-9602-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:04:20.845: INFO: Trying to get logs from node minion pod downward-api-d7ed4601-9602-11e9-9086-ba438756bc32 container dapi-container: <nil>
+STEP: delete the pod
+Jun 23 22:04:20.864: INFO: Waiting for pod downward-api-d7ed4601-9602-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:04:20.867: INFO: Pod downward-api-d7ed4601-9602-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:04:20.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-w628f" for this suite.
+Jun 23 22:04:26.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:04:26.924: INFO: namespace: e2e-tests-downward-api-w628f, resource: bindings, ignored listing per whitelist
+Jun 23 22:04:26.966: INFO: namespace e2e-tests-downward-api-w628f deletion completed in 6.095194723s
+
+• [SLOW TEST:10.220 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
+  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSS
+------------------------------
+[k8s.io] KubeletManagedEtcHosts 
+  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] KubeletManagedEtcHosts
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:04:26.967: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Setting up the test
+STEP: Creating hostNetwork=false pod
+STEP: Creating hostNetwork=true pod
+STEP: Running the test
+STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
+Jun 23 22:04:33.069: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nldkg PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:04:33.069: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:04:33.269: INFO: Exec stderr: ""
+Jun 23 22:04:33.269: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nldkg PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:04:33.269: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:04:33.474: INFO: Exec stderr: ""
+Jun 23 22:04:33.474: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nldkg PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:04:33.474: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:04:33.683: INFO: Exec stderr: ""
+Jun 23 22:04:33.683: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nldkg PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:04:33.683: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:04:33.884: INFO: Exec stderr: ""
+STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
+Jun 23 22:04:33.884: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nldkg PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:04:33.884: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:04:34.088: INFO: Exec stderr: ""
+Jun 23 22:04:34.088: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nldkg PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:04:34.088: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:04:34.273: INFO: Exec stderr: ""
+STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
+Jun 23 22:04:34.273: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nldkg PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:04:34.273: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:04:34.460: INFO: Exec stderr: ""
+Jun 23 22:04:34.460: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nldkg PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:04:34.460: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:04:34.655: INFO: Exec stderr: ""
+Jun 23 22:04:34.655: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nldkg PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:04:34.655: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:04:34.866: INFO: Exec stderr: ""
+Jun 23 22:04:34.866: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-tests-e2e-kubelet-etc-hosts-nldkg PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:04:34.866: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:04:35.065: INFO: Exec stderr: ""
+[AfterEach] [k8s.io] KubeletManagedEtcHosts
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:04:35.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-e2e-kubelet-etc-hosts-nldkg" for this suite.
+Jun 23 22:05:25.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:05:25.088: INFO: namespace: e2e-tests-e2e-kubelet-etc-hosts-nldkg, resource: bindings, ignored listing per whitelist
+Jun 23 22:05:25.163: INFO: namespace e2e-tests-e2e-kubelet-etc-hosts-nldkg deletion completed in 50.094152742s
+
+• [SLOW TEST:58.197 seconds]
+[k8s.io] KubeletManagedEtcHosts
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should test kubelet managed /etc/hosts file [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Proxy server 
+  should support proxy with --port 0  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:05:25.164: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[It] should support proxy with --port 0  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: starting the proxy server
+Jun 23 22:05:25.243: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-365229432 proxy -p 0 --disable-filter'
+STEP: curling proxy /api/ output
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:05:25.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-rt7j9" for this suite.
+Jun 23 22:05:31.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:05:31.451: INFO: namespace: e2e-tests-kubectl-rt7j9, resource: bindings, ignored listing per whitelist
+Jun 23 22:05:31.451: INFO: namespace e2e-tests-kubectl-rt7j9 deletion completed in 6.090016965s
+
+• [SLOW TEST:6.287 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Proxy server
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should support proxy with --port 0  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSS
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute prestop exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:05:31.451: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: create the pod with lifecycle hook
+STEP: delete the pod with lifecycle hook
+Jun 23 22:05:39.550: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:05:39.553: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:05:41.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:05:41.557: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:05:43.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:05:43.557: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:05:45.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:05:45.557: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:05:47.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:05:47.557: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:05:49.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:05:49.557: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:05:51.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:05:51.557: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:05:53.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:05:53.557: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:05:55.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:05:55.557: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:05:57.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:05:57.557: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:05:59.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:05:59.557: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:06:01.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:06:01.557: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:06:03.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:06:03.557: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 23 22:06:05.553: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 23 22:06:05.557: INFO: Pod pod-with-prestop-exec-hook no longer exists
+STEP: check prestop hook
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:06:05.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-lifecycle-hook-5zsm7" for this suite.
+Jun 23 22:06:27.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:06:27.647: INFO: namespace: e2e-tests-container-lifecycle-hook-5zsm7, resource: bindings, ignored listing per whitelist
+Jun 23 22:06:27.663: INFO: namespace e2e-tests-container-lifecycle-hook-5zsm7 deletion completed in 22.090740618s
+
+• [SLOW TEST:56.213 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
+    should execute prestop exec hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSS
+------------------------------
+[k8s.io] Variable Expansion 
+  should allow substituting values in a container's args [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:06:27.664: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename var-expansion
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test substitution in container's args
+Jun 23 22:06:27.740: INFO: Waiting up to 5m0s for pod "var-expansion-25f44301-9603-11e9-9086-ba438756bc32" in namespace "e2e-tests-var-expansion-l9xhz" to be "success or failure"
+Jun 23 22:06:27.743: INFO: Pod "var-expansion-25f44301-9603-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.785504ms
+Jun 23 22:06:29.747: INFO: Pod "var-expansion-25f44301-9603-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006587701s
+Jun 23 22:06:31.751: INFO: Pod "var-expansion-25f44301-9603-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010183227s
+STEP: Saw pod success
+Jun 23 22:06:31.751: INFO: Pod "var-expansion-25f44301-9603-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:06:31.753: INFO: Trying to get logs from node minion pod var-expansion-25f44301-9603-11e9-9086-ba438756bc32 container dapi-container: <nil>
+STEP: delete the pod
+Jun 23 22:06:31.770: INFO: Waiting for pod var-expansion-25f44301-9603-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:06:31.776: INFO: Pod var-expansion-25f44301-9603-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [k8s.io] Variable Expansion
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:06:31.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-var-expansion-l9xhz" for this suite.
+Jun 23 22:06:37.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:06:37.841: INFO: namespace: e2e-tests-var-expansion-l9xhz, resource: bindings, ignored listing per whitelist
+Jun 23 22:06:37.870: INFO: namespace e2e-tests-var-expansion-l9xhz deletion completed in 6.090682462s
+
+• [SLOW TEST:10.207 seconds]
+[k8s.io] Variable Expansion
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should allow substituting values in a container's args [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:06:37.871: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test emptydir 0777 on tmpfs
+Jun 23 22:06:37.947: INFO: Waiting up to 5m0s for pod "pod-2c09c438-9603-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-2ng4r" to be "success or failure"
+Jun 23 22:06:37.950: INFO: Pod "pod-2c09c438-9603-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.843119ms
+Jun 23 22:06:39.954: INFO: Pod "pod-2c09c438-9603-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00625064s
+Jun 23 22:06:41.957: INFO: Pod "pod-2c09c438-9603-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009912775s
+STEP: Saw pod success
+Jun 23 22:06:41.957: INFO: Pod "pod-2c09c438-9603-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:06:41.960: INFO: Trying to get logs from node minion pod pod-2c09c438-9603-11e9-9086-ba438756bc32 container test-container: <nil>
+STEP: delete the pod
+Jun 23 22:06:41.978: INFO: Waiting for pod pod-2c09c438-9603-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:06:41.981: INFO: Pod pod-2c09c438-9603-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:06:41.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-2ng4r" for this suite.
+Jun 23 22:06:47.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:06:48.014: INFO: namespace: e2e-tests-emptydir-2ng4r, resource: bindings, ignored listing per whitelist
+Jun 23 22:06:48.076: INFO: namespace e2e-tests-emptydir-2ng4r deletion completed in 6.09133723s
+
+• [SLOW TEST:10.205 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
+  should support (non-root,0777,tmpfs) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Probing container 
+  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:06:48.076: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48
+[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 22:07:14.164: INFO: Container started at 2019-06-23 22:06:49 +0000 UTC, pod became ready at 2019-06-23 22:07:13 +0000 UTC
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:07:14.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-probe-fl857" for this suite.
+Jun 23 22:07:36.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:07:36.228: INFO: namespace: e2e-tests-container-probe-fl857, resource: bindings, ignored listing per whitelist
+Jun 23 22:07:36.259: INFO: namespace e2e-tests-container-probe-fl857 deletion completed in 22.091146837s
+
+• [SLOW TEST:48.182 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should support remote command execution over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:07:36.259: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
+[It] should support remote command execution over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 22:07:36.329: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: creating the pod
+STEP: submitting the pod to kubernetes
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:07:40.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pods-gpb7j" for this suite.
+Jun 23 22:08:26.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:08:26.678: INFO: namespace: e2e-tests-pods-gpb7j, resource: bindings, ignored listing per whitelist
+Jun 23 22:08:26.683: INFO: namespace e2e-tests-pods-gpb7j deletion completed in 46.098967189s
+
+• [SLOW TEST:50.424 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should support remote command execution over websockets [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:08:26.684: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: create the rc
+STEP: delete the rc
+STEP: wait for the rc to be deleted
+STEP: Gathering metrics
+Jun 23 22:08:32.816: INFO: For apiserver_request_count:
+For apiserver_request_latencies_summary:
+For etcd_helper_cache_entry_count:
+For etcd_helper_cache_hit_count:
+For etcd_helper_cache_miss_count:
+For etcd_request_cache_add_latencies_summary:
+For etcd_request_cache_get_latencies_summary:
+For etcd_request_latencies_summary:
+For garbage_collector_attempt_to_delete_queue_latency:
+	[quantile=0.5] = 111
+	[quantile=0.9] = 36916
+	[quantile=0.99] = 49337
+For garbage_collector_attempt_to_delete_work_duration:
+	[quantile=0.5] = 17114
+	[quantile=0.9] = 50260
+	[quantile=0.99] = 62670
+For garbage_collector_attempt_to_orphan_queue_latency:
+	[quantile=0.5] = NaN
+	[quantile=0.9] = NaN
+	[quantile=0.99] = NaN
+For garbage_collector_attempt_to_orphan_work_duration:
+	[quantile=0.5] = NaN
+	[quantile=0.9] = NaN
+	[quantile=0.99] = NaN
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+	[quantile=0.5] = 6
+	[quantile=0.9] = 9
+	[quantile=0.99] = 26
+For garbage_collector_graph_changes_work_duration:
+	[quantile=0.5] = 18
+	[quantile=0.9] = 29
+	[quantile=0.99] = 49
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+	[quantile=0.5] = 16
+	[quantile=0.9] = 24
+	[quantile=0.99] = 29
+For namespace_queue_latency_sum:
+	[] = 6101
+For namespace_queue_latency_count:
+	[] = 323
+For namespace_retries:
+	[] = 326
+For namespace_work_duration:
+	[quantile=0.5] = 176284
+	[quantile=0.9] = 215556
+	[quantile=0.99] = 297570
+For namespace_work_duration_sum:
+	[] = 41927633
+For namespace_work_duration_count:
+	[] = 323
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:08:32.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-gc-t84z2" for this suite.
+Jun 23 22:08:38.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:08:38.896: INFO: namespace: e2e-tests-gc-t84z2, resource: bindings, ignored listing per whitelist
+Jun 23 22:08:38.913: INFO: namespace e2e-tests-gc-t84z2 deletion completed in 6.092713138s
+
+• [SLOW TEST:12.229 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSS
+------------------------------
+[sig-apps] ReplicationController 
+  should adopt matching pods on creation [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:08:38.913: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename replication-controller
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should adopt matching pods on creation [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Given a Pod with a 'name' label pod-adoption is created
+STEP: When a replication controller with a matching selector is created
+STEP: Then the orphan pod is adopted
+[AfterEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:08:44.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-replication-controller-k7tbg" for this suite.
+Jun 23 22:09:06.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:09:06.101: INFO: namespace: e2e-tests-replication-controller-k7tbg, resource: bindings, ignored listing per whitelist
+Jun 23 22:09:06.101: INFO: namespace e2e-tests-replication-controller-k7tbg deletion completed in 22.088636398s
+
+• [SLOW TEST:27.189 seconds]
+[sig-apps] ReplicationController
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should adopt matching pods on creation [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
+  should support rolling-update to same image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:09:06.102: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[BeforeEach] [k8s.io] Kubectl rolling-update
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358
+[It] should support rolling-update to same image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Jun 23 22:09:06.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=e2e-tests-kubectl-8zznl'
+Jun 23 22:09:06.785: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun 23 22:09:06.785: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
+STEP: verifying the rc e2e-test-nginx-rc was created
+STEP: rolling-update to same image controller
+Jun 23 22:09:06.795: INFO: scanned /root for discovery docs: <nil>
+Jun 23 22:09:06.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=e2e-tests-kubectl-8zznl'
+Jun 23 22:09:22.582: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
+Jun 23 22:09:22.583: INFO: stdout: "Created e2e-test-nginx-rc-e1d153cd17740beaef89a30e51d781e9\nScaling up e2e-test-nginx-rc-e1d153cd17740beaef89a30e51d781e9 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e1d153cd17740beaef89a30e51d781e9 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e1d153cd17740beaef89a30e51d781e9 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
+Jun 23 22:09:22.583: INFO: stdout: "Created e2e-test-nginx-rc-e1d153cd17740beaef89a30e51d781e9\nScaling up e2e-test-nginx-rc-e1d153cd17740beaef89a30e51d781e9 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e1d153cd17740beaef89a30e51d781e9 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e1d153cd17740beaef89a30e51d781e9 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
+STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
+Jun 23 22:09:22.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8zznl'
+Jun 23 22:09:22.729: INFO: stderr: ""
+Jun 23 22:09:22.729: INFO: stdout: "e2e-test-nginx-rc-e1d153cd17740beaef89a30e51d781e9-tt7ns "
+Jun 23 22:09:22.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods e2e-test-nginx-rc-e1d153cd17740beaef89a30e51d781e9-tt7ns -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zznl'
+Jun 23 22:09:22.864: INFO: stderr: ""
+Jun 23 22:09:22.864: INFO: stdout: "true"
+Jun 23 22:09:22.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods e2e-test-nginx-rc-e1d153cd17740beaef89a30e51d781e9-tt7ns -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-8zznl'
+Jun 23 22:09:23.016: INFO: stderr: ""
+Jun 23 22:09:23.016: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
+Jun 23 22:09:23.016: INFO: e2e-test-nginx-rc-e1d153cd17740beaef89a30e51d781e9-tt7ns is verified up and running
+[AfterEach] [k8s.io] Kubectl rolling-update
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364
+Jun 23 22:09:23.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete rc e2e-test-nginx-rc --namespace=e2e-tests-kubectl-8zznl'
+Jun 23 22:09:23.179: INFO: stderr: ""
+Jun 23 22:09:23.179: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:09:23.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-8zznl" for this suite.
+Jun 23 22:09:45.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:09:45.217: INFO: namespace: e2e-tests-kubectl-8zznl, resource: bindings, ignored listing per whitelist
+Jun 23 22:09:45.281: INFO: namespace e2e-tests-kubectl-8zznl deletion completed in 22.098711464s
+
+• [SLOW TEST:39.180 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl rolling-update
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should support rolling-update to same image  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:09:45.282: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-test-upd-9bbf42f3-9603-11e9-9086-ba438756bc32
+STEP: Creating the pod
+STEP: Updating configmap configmap-test-upd-9bbf42f3-9603-11e9-9086-ba438756bc32
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:10:57.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-configmap-ncrtd" for this suite.
+Jun 23 22:11:19.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:11:19.825: INFO: namespace: e2e-tests-configmap-ncrtd, resource: bindings, ignored listing per whitelist
+Jun 23 22:11:19.883: INFO: namespace e2e-tests-configmap-ncrtd deletion completed in 22.09148281s
+
+• [SLOW TEST:94.602 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSS
+------------------------------
+[k8s.io] [sig-node] PreStop 
+  should call prestop when killing a pod  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] [sig-node] PreStop
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:11:19.884: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename prestop
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should call prestop when killing a pod  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating server pod server in namespace e2e-tests-prestop-8wpf7
+STEP: Waiting for pods to come up.
+STEP: Creating tester pod tester in namespace e2e-tests-prestop-8wpf7
+STEP: Deleting pre-stop pod
+Jun 23 22:11:34.991: INFO: Saw: {
+	"Hostname": "server",
+	"Sent": null,
+	"Received": {
+		"prestop": 1
+	},
+	"Errors": null,
+	"Log": [
+		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
+		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
+		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
+	],
+	"StillContactingPeers": true
+}
+STEP: Deleting the server pod
+[AfterEach] [k8s.io] [sig-node] PreStop
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:11:34.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-prestop-8wpf7" for this suite.
+Jun 23 22:12:15.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:12:15.089: INFO: namespace: e2e-tests-prestop-8wpf7, resource: bindings, ignored listing per whitelist
+Jun 23 22:12:15.093: INFO: namespace e2e-tests-prestop-8wpf7 deletion completed in 40.093004485s
+
+• [SLOW TEST:55.209 seconds]
+[k8s.io] [sig-node] PreStop
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should call prestop when killing a pod  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:12:15.093: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 22:12:15.172: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f50a0ff5-9603-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-ml9cc" to be "success or failure"
+Jun 23 22:12:15.174: INFO: Pod "downwardapi-volume-f50a0ff5-9603-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.87851ms
+Jun 23 22:12:17.178: INFO: Pod "downwardapi-volume-f50a0ff5-9603-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006620513s
+Jun 23 22:12:19.182: INFO: Pod "downwardapi-volume-f50a0ff5-9603-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010104707s
+STEP: Saw pod success
+Jun 23 22:12:19.182: INFO: Pod "downwardapi-volume-f50a0ff5-9603-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:12:19.185: INFO: Trying to get logs from node minion pod downwardapi-volume-f50a0ff5-9603-11e9-9086-ba438756bc32 container client-container: <nil>
+STEP: delete the pod
+Jun 23 22:12:19.203: INFO: Waiting for pod downwardapi-volume-f50a0ff5-9603-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:12:19.206: INFO: Pod downwardapi-volume-f50a0ff5-9603-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:12:19.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-ml9cc" for this suite.
+Jun 23 22:12:25.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:12:25.270: INFO: namespace: e2e-tests-projected-ml9cc, resource: bindings, ignored listing per whitelist
+Jun 23 22:12:25.300: INFO: namespace e2e-tests-projected-ml9cc deletion completed in 6.090627855s
+
+• [SLOW TEST:10.207 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-api-machinery] Watchers 
+  should be able to start watching from a specific resource version [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:12:25.300: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to start watching from a specific resource version [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: modifying the configmap a second time
+STEP: deleting the configmap
+STEP: creating a watch on configmaps from the resource version returned by the first update
+STEP: Expecting to observe notifications for all changes to the configmap after the first update
+Jun 23 22:12:25.394: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-ng8fz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ng8fz/configmaps/e2e-watch-test-resource-version,UID:fb20508e-9603-11e9-8956-98039b22fc2c,ResourceVersion:12177,Generation:0,CreationTimestamp:2019-06-23 22:12:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+Jun 23 22:12:25.395: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:e2e-tests-watch-ng8fz,SelfLink:/api/v1/namespaces/e2e-tests-watch-ng8fz/configmaps/e2e-watch-test-resource-version,UID:fb20508e-9603-11e9-8956-98039b22fc2c,ResourceVersion:12178,Generation:0,CreationTimestamp:2019-06-23 22:12:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:12:25.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-watch-ng8fz" for this suite.
+Jun 23 22:12:31.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:12:31.488: INFO: namespace: e2e-tests-watch-ng8fz, resource: bindings, ignored listing per whitelist
+Jun 23 22:12:31.488: INFO: namespace e2e-tests-watch-ng8fz deletion completed in 6.090187162s
+
+• [SLOW TEST:6.188 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should be able to start watching from a specific resource version [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected combined 
+  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected combined
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:12:31.489: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name configmap-projected-all-test-volume-fecfca4e-9603-11e9-9086-ba438756bc32
+STEP: Creating secret with name secret-projected-all-test-volume-fecfca1d-9603-11e9-9086-ba438756bc32
+STEP: Creating a pod to test Check all projections for projected volume plugin
+Jun 23 22:12:31.574: INFO: Waiting up to 5m0s for pod "projected-volume-fecfc9b3-9603-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-zvfns" to be "success or failure"
+Jun 23 22:12:31.577: INFO: Pod "projected-volume-fecfc9b3-9603-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.884679ms
+Jun 23 22:12:33.580: INFO: Pod "projected-volume-fecfc9b3-9603-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006349643s
+Jun 23 22:12:35.583: INFO: Pod "projected-volume-fecfc9b3-9603-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00982435s
+STEP: Saw pod success
+Jun 23 22:12:35.584: INFO: Pod "projected-volume-fecfc9b3-9603-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:12:35.586: INFO: Trying to get logs from node minion pod projected-volume-fecfc9b3-9603-11e9-9086-ba438756bc32 container projected-all-volume-test: 
+STEP: delete the pod
+Jun 23 22:12:35.604: INFO: Waiting for pod projected-volume-fecfc9b3-9603-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:12:35.609: INFO: Pod projected-volume-fecfc9b3-9603-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected combined
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:12:35.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-zvfns" for this suite.
+Jun 23 22:12:41.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:12:41.634: INFO: namespace: e2e-tests-projected-zvfns, resource: bindings, ignored listing per whitelist
+Jun 23 22:12:41.703: INFO: namespace e2e-tests-projected-zvfns deletion completed in 6.089922963s
+
+• [SLOW TEST:10.214 seconds]
+[sig-storage] Projected combined
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
+  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSS
+------------------------------
+[k8s.io] Kubelet when scheduling a busybox command in a pod 
+  should print the output to logs [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:12:41.703: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubelet-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Kubelet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
+[It] should print the output to logs [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[AfterEach] [k8s.io] Kubelet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:12:45.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubelet-test-8cpx4" for this suite.
+Jun 23 22:13:35.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:13:35.890: INFO: namespace: e2e-tests-kubelet-test-8cpx4, resource: bindings, ignored listing per whitelist
+Jun 23 22:13:35.899: INFO: namespace e2e-tests-kubelet-test-8cpx4 deletion completed in 50.092750679s
+
+• [SLOW TEST:54.196 seconds]
+[k8s.io] Kubelet
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  when scheduling a busybox command in a pod
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
+    should print the output to logs [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSS
+------------------------------
+[k8s.io] Container Runtime blackbox test when starting a container that exits 
+  should run with the expected status [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:13:35.899: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename container-runtime
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should run with the expected status [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
+STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
+STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
+STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
+STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
+STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
+STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
+STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
+STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
+STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
+STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
+STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
+STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
+STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
+STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:14:04.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-container-runtime-z842k" for this suite.
+Jun 23 22:14:10.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:14:10.196: INFO: namespace: e2e-tests-container-runtime-z842k, resource: bindings, ignored listing per whitelist
+Jun 23 22:14:10.260: INFO: namespace e2e-tests-container-runtime-z842k deletion completed in 6.091927751s
+
+• [SLOW TEST:34.361 seconds]
+[k8s.io] Container Runtime
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  blackbox test
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
+    when starting a container that exits
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
+      should run with the expected status [NodeConformance] [Conformance]
+      /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:14:10.261: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name projected-configmap-test-volume-map-39af1e51-9604-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 22:14:10.341: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-39af9a11-9604-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-zhzpb" to be "success or failure"
+Jun 23 22:14:10.344: INFO: Pod "pod-projected-configmaps-39af9a11-9604-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.782122ms
+Jun 23 22:14:12.347: INFO: Pod "pod-projected-configmaps-39af9a11-9604-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006423897s
+Jun 23 22:14:14.351: INFO: Pod "pod-projected-configmaps-39af9a11-9604-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010048801s
+STEP: Saw pod success
+Jun 23 22:14:14.351: INFO: Pod "pod-projected-configmaps-39af9a11-9604-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:14:14.354: INFO: Trying to get logs from node minion pod pod-projected-configmaps-39af9a11-9604-11e9-9086-ba438756bc32 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun 23 22:14:14.371: INFO: Waiting for pod pod-projected-configmaps-39af9a11-9604-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:14:14.373: INFO: Pod pod-projected-configmaps-39af9a11-9604-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:14:14.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-zhzpb" for this suite.
+Jun 23 22:14:20.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:14:20.430: INFO: namespace: e2e-tests-projected-zhzpb, resource: bindings, ignored listing per whitelist
+Jun 23 22:14:20.469: INFO: namespace e2e-tests-projected-zhzpb deletion completed in 6.09207053s
+
+• [SLOW TEST:10.209 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
+  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0644,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:14:20.469: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0644,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test emptydir 0644 on node default medium
+Jun 23 22:14:20.547: INFO: Waiting up to 5m0s for pod "pod-3fc5019c-9604-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-wv8m9" to be "success or failure"
+Jun 23 22:14:20.550: INFO: Pod "pod-3fc5019c-9604-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.726603ms
+Jun 23 22:14:22.554: INFO: Pod "pod-3fc5019c-9604-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006336503s
+Jun 23 22:14:24.557: INFO: Pod "pod-3fc5019c-9604-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010144276s
+STEP: Saw pod success
+Jun 23 22:14:24.557: INFO: Pod "pod-3fc5019c-9604-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:14:24.560: INFO: Trying to get logs from node minion pod pod-3fc5019c-9604-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 22:14:24.578: INFO: Waiting for pod pod-3fc5019c-9604-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:14:24.581: INFO: Pod pod-3fc5019c-9604-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:14:24.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-wv8m9" for this suite.
+Jun 23 22:14:30.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:14:30.614: INFO: namespace: e2e-tests-emptydir-wv8m9, resource: bindings, ignored listing per whitelist
+Jun 23 22:14:30.679: INFO: namespace e2e-tests-emptydir-wv8m9 deletion completed in 6.093788817s
+
+• [SLOW TEST:10.210 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
+  should support (root,0644,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSS
+------------------------------
+[sig-apps] ReplicationController 
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:14:30.679: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename replication-controller
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating replication controller my-hostname-basic-45db13ea-9604-11e9-9086-ba438756bc32
+Jun 23 22:14:30.760: INFO: Pod name my-hostname-basic-45db13ea-9604-11e9-9086-ba438756bc32: Found 0 pods out of 1
+Jun 23 22:14:35.765: INFO: Pod name my-hostname-basic-45db13ea-9604-11e9-9086-ba438756bc32: Found 1 pods out of 1
+Jun 23 22:14:35.765: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-45db13ea-9604-11e9-9086-ba438756bc32" are running
+Jun 23 22:14:35.768: INFO: Pod "my-hostname-basic-45db13ea-9604-11e9-9086-ba438756bc32-hpp5p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-23 22:14:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-23 22:14:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-23 22:14:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-23 22:14:30 +0000 UTC Reason: Message:}])
+Jun 23 22:14:35.768: INFO: Trying to dial the pod
+Jun 23 22:14:40.781: INFO: Controller my-hostname-basic-45db13ea-9604-11e9-9086-ba438756bc32: Got expected result from replica 1 [my-hostname-basic-45db13ea-9604-11e9-9086-ba438756bc32-hpp5p]: "my-hostname-basic-45db13ea-9604-11e9-9086-ba438756bc32-hpp5p", 1 of 1 required successes so far
+[AfterEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:14:40.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-replication-controller-xf9cl" for this suite.
+Jun 23 22:14:46.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:14:46.820: INFO: namespace: e2e-tests-replication-controller-xf9cl, resource: bindings, ignored listing per whitelist
+Jun 23 22:14:46.878: INFO: namespace e2e-tests-replication-controller-xf9cl deletion completed in 6.093301685s
+
+• [SLOW TEST:16.199 seconds]
+[sig-apps] ReplicationController
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should serve a basic image on each replica with a public image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:14:46.878: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name secret-test-map-4f82df48-9604-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 22:14:46.961: INFO: Waiting up to 5m0s for pod "pod-secrets-4f836c4f-9604-11e9-9086-ba438756bc32" in namespace "e2e-tests-secrets-ppj8r" to be "success or failure"
+Jun 23 22:14:46.964: INFO: Pod "pod-secrets-4f836c4f-9604-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.546508ms
+Jun 23 22:14:48.968: INFO: Pod "pod-secrets-4f836c4f-9604-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006006205s
+Jun 23 22:14:50.971: INFO: Pod "pod-secrets-4f836c4f-9604-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009825963s
+STEP: Saw pod success
+Jun 23 22:14:50.971: INFO: Pod "pod-secrets-4f836c4f-9604-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:14:50.974: INFO: Trying to get logs from node minion pod pod-secrets-4f836c4f-9604-11e9-9086-ba438756bc32 container secret-volume-test: 
+STEP: delete the pod
+Jun 23 22:14:50.992: INFO: Waiting for pod pod-secrets-4f836c4f-9604-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:14:50.994: INFO: Pod pod-secrets-4f836c4f-9604-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:14:50.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-secrets-ppj8r" for this suite.
+Jun 23 22:14:57.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:14:57.051: INFO: namespace: e2e-tests-secrets-ppj8r, resource: bindings, ignored listing per whitelist
+Jun 23 22:14:57.091: INFO: namespace e2e-tests-secrets-ppj8r deletion completed in 6.092815538s
+
+• [SLOW TEST:10.213 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-storage] Secrets 
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:14:57.091: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name secret-test-55992199-9604-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 22:14:57.174: INFO: Waiting up to 5m0s for pod "pod-secrets-5599a2eb-9604-11e9-9086-ba438756bc32" in namespace "e2e-tests-secrets-2psdt" to be "success or failure"
+Jun 23 22:14:57.177: INFO: Pod "pod-secrets-5599a2eb-9604-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.822958ms
+Jun 23 22:14:59.180: INFO: Pod "pod-secrets-5599a2eb-9604-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006438483s
+Jun 23 22:15:01.184: INFO: Pod "pod-secrets-5599a2eb-9604-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010212084s
+STEP: Saw pod success
+Jun 23 22:15:01.184: INFO: Pod "pod-secrets-5599a2eb-9604-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:15:01.187: INFO: Trying to get logs from node minion pod pod-secrets-5599a2eb-9604-11e9-9086-ba438756bc32 container secret-volume-test: 
+STEP: delete the pod
+Jun 23 22:15:01.205: INFO: Waiting for pod pod-secrets-5599a2eb-9604-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:15:01.210: INFO: Pod pod-secrets-5599a2eb-9604-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:15:01.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-secrets-2psdt" for this suite.
+Jun 23 22:15:07.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:15:07.234: INFO: namespace: e2e-tests-secrets-2psdt, resource: bindings, ignored listing per whitelist
+Jun 23 22:15:07.309: INFO: namespace e2e-tests-secrets-2psdt deletion completed in 6.095780011s
+
+• [SLOW TEST:10.218 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  Burst scaling should run to completion even with unhealthy pods [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:15:07.309: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
+STEP: Creating service test in namespace e2e-tests-statefulset-wnj75
+[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating stateful set ss in namespace e2e-tests-statefulset-wnj75
+STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-wnj75
+Jun 23 22:15:07.393: INFO: Found 0 stateful pods, waiting for 1
+Jun 23 22:15:17.397: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
+Jun 23 22:15:17.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Jun 23 22:15:17.766: INFO: stderr: ""
+Jun 23 22:15:17.766: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Jun 23 22:15:17.766: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Jun 23 22:15:17.770: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
+Jun 23 22:15:27.774: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Jun 23 22:15:27.774: INFO: Waiting for statefulset status.replicas updated to 0
+Jun 23 22:15:27.787: INFO: POD   NODE    PHASE    GRACE  CONDITIONS
+Jun 23 22:15:27.787: INFO: ss-0  minion  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  }]
+Jun 23 22:15:27.787: INFO: 
+Jun 23 22:15:27.787: INFO: StatefulSet ss has not reached scale 3, at 1
+Jun 23 22:15:28.791: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996774646s
+Jun 23 22:15:29.796: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992428259s
+Jun 23 22:15:30.800: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988034858s
+Jun 23 22:15:31.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.983484903s
+Jun 23 22:15:32.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97921992s
+Jun 23 22:15:33.813: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.974957486s
+Jun 23 22:15:34.817: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.97059603s
+Jun 23 22:15:35.821: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.966261744s
+Jun 23 22:15:36.826: INFO: Verifying statefulset ss doesn't scale past 3 for another 962.12006ms
+STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-wnj75
+Jun 23 22:15:37.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:15:38.183: INFO: stderr: ""
+Jun 23 22:15:38.183: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Jun 23 22:15:38.183: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Jun 23 22:15:38.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:15:38.524: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
+Jun 23 22:15:38.524: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Jun 23 22:15:38.524: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Jun 23 22:15:38.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:15:38.873: INFO: stderr: "mv: can't rename '/tmp/index.html': No such file or directory\n"
+Jun 23 22:15:38.873: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Jun 23 22:15:38.873: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Jun 23 22:15:38.877: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+Jun 23 22:15:38.877: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
+Jun 23 22:15:38.877: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Scale down will not halt with unhealthy stateful pod
+Jun 23 22:15:38.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Jun 23 22:15:39.230: INFO: stderr: ""
+Jun 23 22:15:39.230: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Jun 23 22:15:39.230: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Jun 23 22:15:39.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Jun 23 22:15:39.570: INFO: stderr: ""
+Jun 23 22:15:39.570: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Jun 23 22:15:39.570: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Jun 23 22:15:39.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Jun 23 22:15:39.910: INFO: stderr: ""
+Jun 23 22:15:39.910: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Jun 23 22:15:39.910: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Jun 23 22:15:39.910: INFO: Waiting for statefulset status.replicas updated to 0
+Jun 23 22:15:39.918: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Jun 23 22:15:39.918: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
+Jun 23 22:15:39.918: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
+Jun 23 22:15:39.929: INFO: POD   NODE    PHASE    GRACE  CONDITIONS
+Jun 23 22:15:39.929: INFO: ss-0  minion  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  }]
+Jun 23 22:15:39.929: INFO: ss-1  minion  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:39.929: INFO: ss-2  minion  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:39.929: INFO: 
+Jun 23 22:15:39.929: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun 23 22:15:40.933: INFO: POD   NODE    PHASE    GRACE  CONDITIONS
+Jun 23 22:15:40.933: INFO: ss-0  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  }]
+Jun 23 22:15:40.933: INFO: ss-1  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:40.933: INFO: ss-2  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:40.933: INFO: 
+Jun 23 22:15:40.933: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun 23 22:15:41.938: INFO: POD   NODE    PHASE    GRACE  CONDITIONS
+Jun 23 22:15:41.938: INFO: ss-0  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  }]
+Jun 23 22:15:41.938: INFO: ss-1  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:41.938: INFO: ss-2  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:41.938: INFO: 
+Jun 23 22:15:41.938: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun 23 22:15:42.942: INFO: POD   NODE    PHASE    GRACE  CONDITIONS
+Jun 23 22:15:42.943: INFO: ss-0  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  }]
+Jun 23 22:15:42.943: INFO: ss-1  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:42.943: INFO: ss-2  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:42.943: INFO: 
+Jun 23 22:15:42.943: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun 23 22:15:43.947: INFO: POD   NODE    PHASE    GRACE  CONDITIONS
+Jun 23 22:15:43.947: INFO: ss-0  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  }]
+Jun 23 22:15:43.947: INFO: ss-1  minion  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:43.947: INFO: ss-2  minion  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:43.947: INFO: 
+Jun 23 22:15:43.947: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun 23 22:15:44.951: INFO: POD   NODE    PHASE    GRACE  CONDITIONS
+Jun 23 22:15:44.951: INFO: ss-0  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  }]
+Jun 23 22:15:44.951: INFO: ss-1  minion  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:44.951: INFO: ss-2  minion  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:44.951: INFO: 
+Jun 23 22:15:44.951: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun 23 22:15:45.956: INFO: POD   NODE    PHASE    GRACE  CONDITIONS
+Jun 23 22:15:45.956: INFO: ss-0  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  }]
+Jun 23 22:15:45.956: INFO: ss-1  minion  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:45.956: INFO: ss-2  minion  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:45.956: INFO: 
+Jun 23 22:15:45.956: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun 23 22:15:46.960: INFO: POD   NODE    PHASE    GRACE  CONDITIONS
+Jun 23 22:15:46.960: INFO: ss-0  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  }]
+Jun 23 22:15:46.960: INFO: ss-1  minion  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:46.960: INFO: ss-2  minion  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:46.960: INFO: 
+Jun 23 22:15:46.960: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun 23 22:15:47.965: INFO: POD   NODE    PHASE    GRACE  CONDITIONS
+Jun 23 22:15:47.965: INFO: ss-0  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  }]
+Jun 23 22:15:47.965: INFO: ss-1  minion  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:47.965: INFO: ss-2  minion  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:47.965: INFO: 
+Jun 23 22:15:47.965: INFO: StatefulSet ss has not reached scale 0, at 3
+Jun 23 22:15:48.969: INFO: POD   NODE    PHASE    GRACE  CONDITIONS
+Jun 23 22:15:48.969: INFO: ss-0  minion  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:07 +0000 UTC  }]
+Jun 23 22:15:48.969: INFO: ss-1  minion  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:48.969: INFO: ss-2  minion  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 22:15:27 +0000 UTC  }]
+Jun 23 22:15:48.969: INFO: 
+Jun 23 22:15:48.969: INFO: StatefulSet ss has not reached scale 0, at 3
+STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace e2e-tests-statefulset-wnj75
+Jun 23 22:15:49.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:15:50.163: INFO: rc: 1
+Jun 23 22:15:50.163: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
+ []  0xc0027de720 exit status 1   true [0xc001976930 0xc001976948 0xc001976960] [0xc001976930 0xc001976948 0xc001976960] [0xc001976940 0xc001976958] [0x92f8e0 0x92f8e0] 0xc001d21d40 }:
+Command stdout:
+
+stderr:
+error: unable to upgrade connection: container not found ("nginx")
+
+error:
+exit status 1
+
+Jun 23 22:16:00.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:16:00.293: INFO: rc: 1
+Jun 23 22:16:00.293: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc002852030 exit status 1   true [0xc000cd28f0 0xc000cd2930 0xc000cd2980] [0xc000cd28f0 0xc000cd2930 0xc000cd2980] [0xc000cd2918 0xc000cd2968] [0x92f8e0 0x92f8e0] 0xc001ce7bc0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:16:10.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:16:10.409: INFO: rc: 1
+Jun 23 22:16:10.409: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001ccdfb0 exit status 1   true [0xc000719768 0xc0007197a0 0xc0007197f0] [0xc000719768 0xc0007197a0 0xc0007197f0] [0xc000719798 0xc0007197b8] [0x92f8e0 0x92f8e0] 0xc0018b4e40 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:16:20.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:16:20.535: INFO: rc: 1
+Jun 23 22:16:20.535: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001669290 exit status 1   true [0xc00104e480 0xc00104e498 0xc00104e4b0] [0xc00104e480 0xc00104e498 0xc00104e4b0] [0xc00104e490 0xc00104e4a8] [0x92f8e0 0x92f8e0] 0xc001c55da0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:16:30.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:16:30.645: INFO: rc: 1
+Jun 23 22:16:30.645: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001669680 exit status 1   true [0xc00104e4b8 0xc00104e4d0 0xc00104e4e8] [0xc00104e4b8 0xc00104e4d0 0xc00104e4e8] [0xc00104e4c8 0xc00104e4e0] [0x92f8e0 0x92f8e0] 0xc001900ae0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:16:40.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:16:40.769: INFO: rc: 1
+Jun 23 22:16:40.769: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001669a10 exit status 1   true [0xc00104e4f0 0xc00104e510 0xc00104e528] [0xc00104e4f0 0xc00104e510 0xc00104e528] [0xc00104e508 0xc00104e520] [0x92f8e0 0x92f8e0] 0xc0019018c0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:16:50.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:16:50.901: INFO: rc: 1
+Jun 23 22:16:50.901: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001602660 exit status 1   true [0xc000719830 0xc0007198a8 0xc0007198e0] [0xc000719830 0xc0007198a8 0xc0007198e0] [0xc000719888 0xc0007198d8] [0x92f8e0 0x92f8e0] 0xc0018b55c0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:17:00.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:17:01.017: INFO: rc: 1
+Jun 23 22:17:01.017: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0028523f0 exit status 1   true [0xc000cd29a0 0xc000cd29c8 0xc000cd29e0] [0xc000cd29a0 0xc000cd29c8 0xc000cd29e0] [0xc000cd29c0 0xc000cd29d8] [0x92f8e0 0x92f8e0] 0xc001b3e240 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:17:11.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:17:11.137: INFO: rc: 1
+Jun 23 22:17:11.137: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc002852750 exit status 1   true [0xc000cd29f8 0xc000cd2a30 0xc000cd2a48] [0xc000cd29f8 0xc000cd2a30 0xc000cd2a48] [0xc000cd2a18 0xc000cd2a40] [0x92f8e0 0x92f8e0] 0xc001aac660 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:17:21.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:17:21.257: INFO: rc: 1
+Jun 23 22:17:21.257: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0017da3c0 exit status 1   true [0xc0004e20c8 0xc0007181f8 0xc000718268] [0xc0004e20c8 0xc0007181f8 0xc000718268] [0xc000718048 0xc000718228] [0x92f8e0 0x92f8e0] 0xc0015ea240 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:17:31.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:17:31.361: INFO: rc: 1
+Jun 23 22:17:31.361: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0014b6420 exit status 1   true [0xc00104e000 0xc00104e030 0xc00104e048] [0xc00104e000 0xc00104e030 0xc00104e048] [0xc00104e028 0xc00104e040] [0x92f8e0 0x92f8e0] 0xc001c544e0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:17:41.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:17:41.466: INFO: rc: 1
+Jun 23 22:17:41.466: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0017da780 exit status 1   true [0xc000718270 0xc000718300 0xc0007183a0] [0xc000718270 0xc000718300 0xc0007183a0] [0xc0007182f8 0xc000718360] [0x92f8e0 0x92f8e0] 0xc0015ea600 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:17:51.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:17:51.588: INFO: rc: 1
+Jun 23 22:17:51.588: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001ccc390 exit status 1   true [0xc000cd2020 0xc000cd2048 0xc000cd2088] [0xc000cd2020 0xc000cd2048 0xc000cd2088] [0xc000cd2040 0xc000cd2070] [0x92f8e0 0x92f8e0] 0xc001ce6c60 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:18:01.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:18:01.713: INFO: rc: 1
+Jun 23 22:18:01.713: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0017aa3f0 exit status 1   true [0xc001976000 0xc001976018 0xc001976030] [0xc001976000 0xc001976018 0xc001976030] [0xc001976010 0xc001976028] [0x92f8e0 0x92f8e0] 0xc001e063c0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:18:11.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:18:11.826: INFO: rc: 1
+Jun 23 22:18:11.826: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001ccc720 exit status 1   true [0xc000cd2090 0xc000cd20c0 0xc000cd20e8] [0xc000cd2090 0xc000cd20c0 0xc000cd20e8] [0xc000cd20a0 0xc000cd20e0] [0x92f8e0 0x92f8e0] 0xc001ce7440 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:18:21.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:18:21.932: INFO: rc: 1
+Jun 23 22:18:21.932: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001cccdb0 exit status 1   true [0xc000cd20f0 0xc000cd2130 0xc000cd2180] [0xc000cd20f0 0xc000cd2130 0xc000cd2180] [0xc000cd2128 0xc000cd2168] [0x92f8e0 0x92f8e0] 0xc001ce7980 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:18:31.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:18:32.057: INFO: rc: 1
+Jun 23 22:18:32.057: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0017dab40 exit status 1   true [0xc0007183e0 0xc000718500 0xc000718598] [0xc0007183e0 0xc000718500 0xc000718598] [0xc000718400 0xc000718580] [0x92f8e0 0x92f8e0] 0xc0015ea9c0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:18:42.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:18:42.177: INFO: rc: 1
+Jun 23 22:18:42.177: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001ccd7a0 exit status 1   true [0xc000cd2188 0xc000cd21a0 0xc000cd21e0] [0xc000cd2188 0xc000cd21a0 0xc000cd21e0] [0xc000cd2198 0xc000cd21c8] [0x92f8e0 0x92f8e0] 0xc001ce7e00 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:18:52.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:18:52.305: INFO: rc: 1
+Jun 23 22:18:52.306: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0014b6b70 exit status 1   true [0xc00104e050 0xc00104e088 0xc00104e0b0] [0xc00104e050 0xc00104e088 0xc00104e0b0] [0xc00104e080 0xc00104e0a8] [0x92f8e0 0x92f8e0] 0xc001c54ba0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:19:02.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:19:02.440: INFO: rc: 1
+Jun 23 22:19:02.440: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0017aa7e0 exit status 1   true [0xc001976038 0xc001976050 0xc001976078] [0xc001976038 0xc001976050 0xc001976078] [0xc001976048 0xc001976070] [0x92f8e0 0x92f8e0] 0xc001e069c0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:19:12.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:19:12.581: INFO: rc: 1
+Jun 23 22:19:12.581: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001ccc3c0 exit status 1   true [0xc0004e2128 0xc000cd2040 0xc000cd2070] [0xc0004e2128 0xc000cd2040 0xc000cd2070] [0xc000cd2038 0xc000cd2050] [0x92f8e0 0x92f8e0] 0xc001ce6660 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:19:22.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:19:22.716: INFO: rc: 1
+Jun 23 22:19:22.716: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001ccc780 exit status 1   true [0xc000cd2088 0xc000cd20a0 0xc000cd20e0] [0xc000cd2088 0xc000cd20a0 0xc000cd20e0] [0xc000cd2098 0xc000cd20d8] [0x92f8e0 0x92f8e0] 0xc001ce7020 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:19:32.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:19:32.818: INFO: rc: 1
+Jun 23 22:19:32.818: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001ccce70 exit status 1   true [0xc000cd20e8 0xc000cd2128 0xc000cd2168] [0xc000cd20e8 0xc000cd2128 0xc000cd2168] [0xc000cd2110 0xc000cd2148] [0x92f8e0 0x92f8e0] 0xc001ce7740 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:19:42.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:19:42.924: INFO: rc: 1
+Jun 23 22:19:42.924: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001ccd860 exit status 1   true [0xc000cd2180 0xc000cd2198 0xc000cd21c8] [0xc000cd2180 0xc000cd2198 0xc000cd21c8] [0xc000cd2190 0xc000cd21a8] [0x92f8e0 0x92f8e0] 0xc001ce7c20 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:19:52.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:19:53.048: INFO: rc: 1
+Jun 23 22:19:53.049: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0017aa420 exit status 1   true [0xc00104e000 0xc00104e030 0xc00104e048] [0xc00104e000 0xc00104e030 0xc00104e048] [0xc00104e028 0xc00104e040] [0x92f8e0 0x92f8e0] 0xc001c54720 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:20:03.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:20:03.169: INFO: rc: 1
+Jun 23 22:20:03.169: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0017da3f0 exit status 1   true [0xc000718048 0xc000718228 0xc0007182a8] [0xc000718048 0xc000718228 0xc0007182a8] [0xc000718200 0xc000718270] [0x92f8e0 0x92f8e0] 0xc0015ea240 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:20:13.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:20:13.306: INFO: rc: 1
+Jun 23 22:20:13.306: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0014b65d0 exit status 1   true [0xc001976000 0xc001976018 0xc001976030] [0xc001976000 0xc001976018 0xc001976030] [0xc001976010 0xc001976028] [0x92f8e0 0x92f8e0] 0xc001e063c0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:20:23.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:20:23.429: INFO: rc: 1
+Jun 23 22:20:23.430: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0017aa810 exit status 1   true [0xc00104e050 0xc00104e088 0xc00104e0b0] [0xc00104e050 0xc00104e088 0xc00104e0b0] [0xc00104e080 0xc00104e0a8] [0x92f8e0 0x92f8e0] 0xc001c54c60 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:20:33.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:20:33.544: INFO: rc: 1
+Jun 23 22:20:33.544: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc001ccdda0 exit status 1   true [0xc000cd21e0 0xc000cd21f8 0xc000cd2248] [0xc000cd21e0 0xc000cd21f8 0xc000cd2248] [0xc000cd21f0 0xc000cd2230] [0x92f8e0 0x92f8e0] 0xc00238a060 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:20:43.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:20:43.671: INFO: rc: 1
+Jun 23 22:20:43.671: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
+ []  0xc0017da810 exit status 1   true [0xc0007182f8 0xc000718360 0xc0007183e8] [0xc0007182f8 0xc000718360 0xc0007183e8] [0xc000718340 0xc0007183e0] [0x92f8e0 0x92f8e0] 0xc0015ea600 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-0" not found
+
+error:
+exit status 1
+
+Jun 23 22:20:53.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-wnj75 ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 23 22:20:53.798: INFO: rc: 1
+Jun 23 22:20:53.798: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
+Jun 23 22:20:53.798: INFO: Scaling statefulset ss to 0
+Jun 23 22:20:53.809: INFO: Waiting for statefulset status.replicas updated to 0
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
+Jun 23 22:20:53.812: INFO: Deleting all statefulset in ns e2e-tests-statefulset-wnj75
+Jun 23 22:20:53.815: INFO: Scaling statefulset ss to 0
+Jun 23 22:20:53.824: INFO: Waiting for statefulset status.replicas updated to 0
+Jun 23 22:20:53.826: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:20:53.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-statefulset-wnj75" for this suite.
+Jun 23 22:20:59.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:20:59.877: INFO: namespace: e2e-tests-statefulset-wnj75, resource: bindings, ignored listing per whitelist
+Jun 23 22:20:59.931: INFO: namespace e2e-tests-statefulset-wnj75 deletion completed in 6.09069624s
+
+• [SLOW TEST:352.621 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    Burst scaling should run to completion even with unhealthy pods [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:20:59.931: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: create the deployment
+STEP: Wait for the Deployment to create new ReplicaSet
+STEP: delete the deployment
+STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
+STEP: Gathering metrics
+Jun 23 22:21:30.563: INFO: For apiserver_request_count:
+For apiserver_request_latencies_summary:
+For etcd_helper_cache_entry_count:
+For etcd_helper_cache_hit_count:
+For etcd_helper_cache_miss_count:
+For etcd_request_cache_add_latencies_summary:
+For etcd_request_cache_get_latencies_summary:
+For etcd_request_latencies_summary:
+For garbage_collector_attempt_to_delete_queue_latency:
+	[quantile=0.5] = 11
+	[quantile=0.9] = 11
+	[quantile=0.99] = 11
+For garbage_collector_attempt_to_delete_work_duration:
+	[quantile=0.5] = 7
+	[quantile=0.9] = 7
+	[quantile=0.99] = 7
+For garbage_collector_attempt_to_orphan_queue_latency:
+	[quantile=0.5] = 24
+	[quantile=0.9] = 24
+	[quantile=0.99] = 24
+For garbage_collector_attempt_to_orphan_work_duration:
+	[quantile=0.5] = 32102
+	[quantile=0.9] = 32102
+	[quantile=0.99] = 32102
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+	[quantile=0.5] = 6
+	[quantile=0.9] = 8
+	[quantile=0.99] = 16
+For garbage_collector_graph_changes_work_duration:
+	[quantile=0.5] = 20
+	[quantile=0.9] = 28
+	[quantile=0.99] = 45
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+	[quantile=0.5] = 18
+	[quantile=0.9] = 24
+	[quantile=0.99] = 27
+For namespace_queue_latency_sum:
+	[] = 7139
+For namespace_queue_latency_count:
+	[] = 379
+For namespace_retries:
+	[] = 383
+For namespace_work_duration:
+	[quantile=0.5] = 169107
+	[quantile=0.9] = 218074
+	[quantile=0.99] = 258836
+For namespace_work_duration_sum:
+	[] = 49797919
+For namespace_work_duration_count:
+	[] = 379
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:21:30.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-gc-2t8hw" for this suite.
+Jun 23 22:21:36.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:21:36.600: INFO: namespace: e2e-tests-gc-2t8hw, resource: bindings, ignored listing per whitelist
+Jun 23 22:21:36.658: INFO: namespace e2e-tests-gc-2t8hw deletion completed in 6.091327148s
+
+• [SLOW TEST:36.727 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with secret pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:21:36.659: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
+STEP: Setting up data
+[It] should support subpaths with secret pod [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating pod pod-subpath-test-secret-7tmm
+STEP: Creating a pod to test atomic-volume-subpath
+Jun 23 22:21:36.743: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7tmm" in namespace "e2e-tests-subpath-qvmzg" to be "success or failure"
+Jun 23 22:21:36.746: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Pending", Reason="", readiness=false. Elapsed: 3.065267ms
+Jun 23 22:21:38.750: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006858064s
+Jun 23 22:21:40.753: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Running", Reason="", readiness=false. Elapsed: 4.010345742s
+Jun 23 22:21:42.757: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Running", Reason="", readiness=false. Elapsed: 6.014118787s
+Jun 23 22:21:44.761: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Running", Reason="", readiness=false. Elapsed: 8.017969445s
+Jun 23 22:21:46.764: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Running", Reason="", readiness=false. Elapsed: 10.021548353s
+Jun 23 22:21:48.768: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Running", Reason="", readiness=false. Elapsed: 12.025254944s
+Jun 23 22:21:50.772: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Running", Reason="", readiness=false. Elapsed: 14.028782153s
+Jun 23 22:21:52.776: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Running", Reason="", readiness=false. Elapsed: 16.032654435s
+Jun 23 22:21:54.779: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Running", Reason="", readiness=false. Elapsed: 18.03622315s
+Jun 23 22:21:56.783: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Running", Reason="", readiness=false. Elapsed: 20.039660616s
+Jun 23 22:21:58.786: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Running", Reason="", readiness=false. Elapsed: 22.043347706s
+Jun 23 22:22:00.790: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Running", Reason="", readiness=false. Elapsed: 24.046949549s
+Jun 23 22:22:02.793: INFO: Pod "pod-subpath-test-secret-7tmm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.050573437s
+STEP: Saw pod success
+Jun 23 22:22:02.793: INFO: Pod "pod-subpath-test-secret-7tmm" satisfied condition "success or failure"
+Jun 23 22:22:02.797: INFO: Trying to get logs from node minion pod pod-subpath-test-secret-7tmm container test-container-subpath-secret-7tmm: 
+STEP: delete the pod
+Jun 23 22:22:02.816: INFO: Waiting for pod pod-subpath-test-secret-7tmm to disappear
+Jun 23 22:22:02.819: INFO: Pod pod-subpath-test-secret-7tmm no longer exists
+STEP: Deleting pod pod-subpath-test-secret-7tmm
+Jun 23 22:22:02.819: INFO: Deleting pod "pod-subpath-test-secret-7tmm" in namespace "e2e-tests-subpath-qvmzg"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:22:02.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-subpath-qvmzg" for this suite.
+Jun 23 22:22:08.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:22:08.889: INFO: namespace: e2e-tests-subpath-qvmzg, resource: bindings, ignored listing per whitelist
+Jun 23 22:22:08.919: INFO: namespace e2e-tests-subpath-qvmzg deletion completed in 6.092871682s
+
+• [SLOW TEST:32.260 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  Atomic writer volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
+    should support subpaths with secret pod [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:22:08.919: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating configMap with name projected-configmap-test-volume-56fc762b-9605-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume configMaps
+Jun 23 22:22:08.999: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-56fd02b2-9605-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-m4cg7" to be "success or failure"
+Jun 23 22:22:09.002: INFO: Pod "pod-projected-configmaps-56fd02b2-9605-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.556356ms
+Jun 23 22:22:11.005: INFO: Pod "pod-projected-configmaps-56fd02b2-9605-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006035357s
+Jun 23 22:22:13.009: INFO: Pod "pod-projected-configmaps-56fd02b2-9605-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009765135s
+STEP: Saw pod success
+Jun 23 22:22:13.009: INFO: Pod "pod-projected-configmaps-56fd02b2-9605-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:22:13.012: INFO: Trying to get logs from node minion pod pod-projected-configmaps-56fd02b2-9605-11e9-9086-ba438756bc32 container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun 23 22:22:13.030: INFO: Waiting for pod pod-projected-configmaps-56fd02b2-9605-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:22:13.036: INFO: Pod pod-projected-configmaps-56fd02b2-9605-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:22:13.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-m4cg7" for this suite.
+Jun 23 22:22:19.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:22:19.076: INFO: namespace: e2e-tests-projected-m4cg7, resource: bindings, ignored listing per whitelist
+Jun 23 22:22:19.133: INFO: namespace e2e-tests-projected-m4cg7 deletion completed in 6.093226374s
+
+• [SLOW TEST:10.214 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
+  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0777,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:22:19.133: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0777,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test emptydir 0777 on node default medium
+Jun 23 22:22:19.210: INFO: Waiting up to 5m0s for pod "pod-5d1315c8-9605-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-xx7pv" to be "success or failure"
+Jun 23 22:22:19.213: INFO: Pod "pod-5d1315c8-9605-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.874742ms
+Jun 23 22:22:21.216: INFO: Pod "pod-5d1315c8-9605-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006509496s
+Jun 23 22:22:23.220: INFO: Pod "pod-5d1315c8-9605-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009965656s
+STEP: Saw pod success
+Jun 23 22:22:23.220: INFO: Pod "pod-5d1315c8-9605-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:22:23.223: INFO: Trying to get logs from node minion pod pod-5d1315c8-9605-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 22:22:23.239: INFO: Waiting for pod pod-5d1315c8-9605-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:22:23.242: INFO: Pod pod-5d1315c8-9605-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:22:23.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-xx7pv" for this suite.
+Jun 23 22:22:29.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:22:29.332: INFO: namespace: e2e-tests-emptydir-xx7pv, resource: bindings, ignored listing per whitelist
+Jun 23 22:22:29.341: INFO: namespace e2e-tests-emptydir-xx7pv deletion completed in 6.095182162s
+
+• [SLOW TEST:10.208 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
+  should support (root,0777,default) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should delete pods created by rc when not orphaning [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:22:29.341: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete pods created by rc when not orphaning [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: create the rc
+STEP: delete the rc
+STEP: wait for all pods to be garbage collected
+STEP: Gathering metrics
+Jun 23 22:22:39.467: INFO: For apiserver_request_count:
+For apiserver_request_latencies_summary:
+For etcd_helper_cache_entry_count:
+For etcd_helper_cache_hit_count:
+For etcd_helper_cache_miss_count:
+For etcd_request_cache_add_latencies_summary:
+For etcd_request_cache_get_latencies_summary:
+For etcd_request_latencies_summary:
+For garbage_collector_attempt_to_delete_queue_latency:
+	[quantile=0.5] = 11
+	[quantile=0.9] = 32
+	[quantile=0.99] = 32
+For garbage_collector_attempt_to_delete_work_duration:
+	[quantile=0.5] = 18769
+	[quantile=0.9] = 19575
+	[quantile=0.99] = 19575
+For garbage_collector_attempt_to_orphan_queue_latency:
+	[quantile=0.5] = 24
+	[quantile=0.9] = 24
+	[quantile=0.99] = 24
+For garbage_collector_attempt_to_orphan_work_duration:
+	[quantile=0.5] = 32102
+	[quantile=0.9] = 32102
+	[quantile=0.99] = 32102
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+	[quantile=0.5] = 6
+	[quantile=0.9] = 9
+	[quantile=0.99] = 19
+For garbage_collector_graph_changes_work_duration:
+	[quantile=0.5] = 20
+	[quantile=0.9] = 28
+	[quantile=0.99] = 45
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+	[quantile=0.5] = 18
+	[quantile=0.9] = 24
+	[quantile=0.99] = 27
+For namespace_queue_latency_sum:
+	[] = 7284
+For namespace_queue_latency_count:
+	[] = 387
+For namespace_retries:
+	[] = 391
+For namespace_work_duration:
+	[quantile=0.5] = 169107
+	[quantile=0.9] = 224058
+	[quantile=0.99] = 258836
+For namespace_work_duration_sum:
+	[] = 50694399
+For namespace_work_duration_count:
+	[] = 387
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:22:39.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-gc-rh5k2" for this suite.
+Jun 23 22:22:45.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:22:45.534: INFO: namespace: e2e-tests-gc-rh5k2, resource: bindings, ignored listing per whitelist
+Jun 23 22:22:45.569: INFO: namespace e2e-tests-gc-rh5k2 deletion completed in 6.097779734s
+
+• [SLOW TEST:16.228 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should delete pods created by rc when not orphaning [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should get a host IP [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:22:45.569: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132
+[It] should get a host IP [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating pod
+Jun 23 22:22:49.656: INFO: Pod pod-hostip-6cd48e88-9605-11e9-9086-ba438756bc32 has hostIP: 10.197.149.12
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:22:49.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pods-n79kv" for this suite.
+Jun 23 22:23:11.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:23:11.705: INFO: namespace: e2e-tests-pods-n79kv, resource: bindings, ignored listing per whitelist
+Jun 23 22:23:11.751: INFO: namespace e2e-tests-pods-n79kv deletion completed in 22.091370996s
+
+• [SLOW TEST:26.182 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should get a host IP [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with configmap pod with mountPath of existing file [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:23:11.752: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
+STEP: Setting up data
+[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating pod pod-subpath-test-configmap-p7mb
+STEP: Creating a pod to test atomic-volume-subpath
+Jun 23 22:23:11.842: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p7mb" in namespace "e2e-tests-subpath-26n2p" to be "success or failure"
+Jun 23 22:23:11.845: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.841988ms
+Jun 23 22:23:13.848: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006474759s
+Jun 23 22:23:15.852: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Running", Reason="", readiness=false. Elapsed: 4.010289293s
+Jun 23 22:23:17.856: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Running", Reason="", readiness=false. Elapsed: 6.014177014s
+Jun 23 22:23:19.860: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Running", Reason="", readiness=false. Elapsed: 8.017969312s
+Jun 23 22:23:21.863: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Running", Reason="", readiness=false. Elapsed: 10.021246761s
+Jun 23 22:23:23.867: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Running", Reason="", readiness=false. Elapsed: 12.024870552s
+Jun 23 22:23:25.870: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Running", Reason="", readiness=false. Elapsed: 14.02835937s
+Jun 23 22:23:27.874: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Running", Reason="", readiness=false. Elapsed: 16.032132217s
+Jun 23 22:23:29.878: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Running", Reason="", readiness=false. Elapsed: 18.03601732s
+Jun 23 22:23:31.881: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Running", Reason="", readiness=false. Elapsed: 20.039481436s
+Jun 23 22:23:33.885: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Running", Reason="", readiness=false. Elapsed: 22.043090719s
+Jun 23 22:23:35.888: INFO: Pod "pod-subpath-test-configmap-p7mb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.046731311s
+STEP: Saw pod success
+Jun 23 22:23:35.888: INFO: Pod "pod-subpath-test-configmap-p7mb" satisfied condition "success or failure"
+Jun 23 22:23:35.891: INFO: Trying to get logs from node minion pod pod-subpath-test-configmap-p7mb container test-container-subpath-configmap-p7mb: 
+STEP: delete the pod
+Jun 23 22:23:35.910: INFO: Waiting for pod pod-subpath-test-configmap-p7mb to disappear
+Jun 23 22:23:35.916: INFO: Pod pod-subpath-test-configmap-p7mb no longer exists
+STEP: Deleting pod pod-subpath-test-configmap-p7mb
+Jun 23 22:23:35.916: INFO: Deleting pod "pod-subpath-test-configmap-p7mb" in namespace "e2e-tests-subpath-26n2p"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:23:35.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-subpath-26n2p" for this suite.
+Jun 23 22:23:41.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:23:42.008: INFO: namespace: e2e-tests-subpath-26n2p, resource: bindings, ignored listing per whitelist
+Jun 23 22:23:42.012: INFO: namespace e2e-tests-subpath-26n2p deletion completed in 6.090189539s
+
+• [SLOW TEST:30.261 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  Atomic writer volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
+    should support subpaths with configmap pod with mountPath of existing file [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for intra-pod communication: http [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:23:42.013: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-6jrz8
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jun 23 22:23:42.083: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Jun 23 22:24:06.127: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.251.128.7:8080/dial?request=hostName&protocol=http&host=10.251.128.6&port=8080&tries=1'] Namespace:e2e-tests-pod-network-test-6jrz8 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:24:06.127: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:24:06.364: INFO: Waiting for endpoints: map[]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:24:06.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pod-network-test-6jrz8" for this suite.
+Jun 23 22:24:24.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:24:24.417: INFO: namespace: e2e-tests-pod-network-test-6jrz8, resource: bindings, ignored listing per whitelist
+Jun 23 22:24:24.462: INFO: namespace e2e-tests-pod-network-test-6jrz8 deletion completed in 18.094019789s
+
+• [SLOW TEST:42.450 seconds]
+[sig-network] Networking
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for intra-pod communication: http [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSS
+------------------------------
+[sig-network] Proxy version v1 
+  should proxy logs on node using proxy subresource  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] version v1
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:24:24.463: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename proxy
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should proxy logs on node using proxy subresource  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+Jun 23 22:24:24.545: INFO: (0) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 6.98334ms)
+Jun 23 22:24:24.550: INFO: (1) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.588897ms)
+Jun 23 22:24:24.554: INFO: (2) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.642233ms)
+Jun 23 22:24:24.559: INFO: (3) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.39844ms)
+Jun 23 22:24:24.563: INFO: (4) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.286167ms)
+Jun 23 22:24:24.567: INFO: (5) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.34979ms)
+Jun 23 22:24:24.572: INFO: (6) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.316223ms)
+Jun 23 22:24:24.576: INFO: (7) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.383118ms)
+Jun 23 22:24:24.580: INFO: (8) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.197564ms)
+Jun 23 22:24:24.586: INFO: (9) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 5.232144ms)
+Jun 23 22:24:24.591: INFO: (10) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.902587ms)
+Jun 23 22:24:24.596: INFO: (11) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 5.026834ms)
+Jun 23 22:24:24.600: INFO: (12) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.588979ms)
+Jun 23 22:24:24.605: INFO: (13) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.423138ms)
+Jun 23 22:24:24.609: INFO: (14) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.243441ms)
+Jun 23 22:24:24.614: INFO: (15) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.46428ms)
+Jun 23 22:24:24.618: INFO: (16) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.356047ms)
+Jun 23 22:24:24.622: INFO: (17) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.142415ms)
+Jun 23 22:24:24.627: INFO: (18) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.271487ms)
+Jun 23 22:24:24.631: INFO: (19) /api/v1/nodes/minion/proxy/logs/: 
+alternatives.log
+apt/
+... (200; 4.605336ms)
+[AfterEach] version v1
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:24:24.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-proxy-2s6g7" for this suite.
+Jun 23 22:24:30.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:24:30.723: INFO: namespace: e2e-tests-proxy-2s6g7, resource: bindings, ignored listing per whitelist
+Jun 23 22:24:30.725: INFO: namespace e2e-tests-proxy-2s6g7 deletion completed in 6.090724567s
+
+• [SLOW TEST:6.263 seconds]
+[sig-network] Proxy
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22
+  version v1
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56
+    should proxy logs on node using proxy subresource  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
+  should create a deployment from an image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:24:30.726: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[BeforeEach] [k8s.io] Kubectl run deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399
+[It] should create a deployment from an image  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Jun 23 22:24:30.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=e2e-tests-kubectl-fxttd'
+Jun 23 22:24:31.411: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun 23 22:24:31.411: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n"
+STEP: verifying the deployment e2e-test-nginx-deployment was created
+STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
+[AfterEach] [k8s.io] Kubectl run deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1404
+Jun 23 22:24:35.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete deployment e2e-test-nginx-deployment --namespace=e2e-tests-kubectl-fxttd'
+Jun 23 22:24:35.586: INFO: stderr: ""
+Jun 23 22:24:35.586: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:24:35.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-fxttd" for this suite.
+Jun 23 22:24:57.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:24:57.620: INFO: namespace: e2e-tests-kubectl-fxttd, resource: bindings, ignored listing per whitelist
+Jun 23 22:24:57.688: INFO: namespace e2e-tests-kubectl-fxttd deletion completed in 22.097924396s
+
+• [SLOW TEST:26.962 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl run deployment
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should create a deployment from an image  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:24:57.688: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 22:24:57.766: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb94b45c-9605-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-lp4vd" to be "success or failure"
+Jun 23 22:24:57.769: INFO: Pod "downwardapi-volume-bb94b45c-9605-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.897864ms
+Jun 23 22:24:59.772: INFO: Pod "downwardapi-volume-bb94b45c-9605-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006441817s
+Jun 23 22:25:01.776: INFO: Pod "downwardapi-volume-bb94b45c-9605-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010135653s
+STEP: Saw pod success
+Jun 23 22:25:01.776: INFO: Pod "downwardapi-volume-bb94b45c-9605-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:25:01.779: INFO: Trying to get logs from node minion pod downwardapi-volume-bb94b45c-9605-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 22:25:01.796: INFO: Waiting for pod downwardapi-volume-bb94b45c-9605-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:25:01.802: INFO: Pod downwardapi-volume-bb94b45c-9605-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:25:01.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-lp4vd" for this suite.
+Jun 23 22:25:07.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:25:07.887: INFO: namespace: e2e-tests-downward-api-lp4vd, resource: bindings, ignored listing per whitelist
+Jun 23 22:25:07.896: INFO: namespace e2e-tests-downward-api-lp4vd deletion completed in 6.090981123s
+
+• [SLOW TEST:10.208 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates that NodeSelector is respected if not matching  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:25:07.897: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
+Jun 23 22:25:07.967: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Jun 23 22:25:07.974: INFO: Waiting for terminating namespaces to be deleted...
+Jun 23 22:25:07.977: INFO: 
+Logging pods the kubelet thinks is on node minion before test
+Jun 23 22:25:07.988: INFO: kube-proxy-vhhgh from kube-system started at 2019-06-23 21:00:40 +0000 UTC (1 container statuses recorded)
+Jun 23 22:25:07.988: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun 23 22:25:07.988: INFO: coredns-f9d858bbd-xfbr4 from kube-system started at 2019-06-23 21:01:13 +0000 UTC (1 container statuses recorded)
+Jun 23 22:25:07.988: INFO: 	Container coredns ready: true, restart count 0
+Jun 23 22:25:07.988: INFO: sonobuoy-e2e-job-b3c813a489584c2d from heptio-sonobuoy started at 2019-06-23 21:11:51 +0000 UTC (2 container statuses recorded)
+Jun 23 22:25:07.988: INFO: 	Container e2e ready: true, restart count 0
+Jun 23 22:25:07.988: INFO: 	Container sonobuoy-worker ready: true, restart count 0
+Jun 23 22:25:07.988: INFO: nginx-proxy-minion from kube-system started at  (0 container statuses recorded)
+Jun 23 22:25:07.988: INFO: weave-scope-agent-97sw9 from weave started at 2019-06-23 21:01:57 +0000 UTC (1 container statuses recorded)
+Jun 23 22:25:07.988: INFO: 	Container agent ready: true, restart count 0
+Jun 23 22:25:07.988: INFO: weave-net-6ckzc from kube-system started at 2019-06-23 21:00:31 +0000 UTC (2 container statuses recorded)
+Jun 23 22:25:07.988: INFO: 	Container weave ready: true, restart count 0
+Jun 23 22:25:07.988: INFO: 	Container weave-npc ready: true, restart count 0
+Jun 23 22:25:07.988: INFO: weave-scope-app-554f7c7d88-5gkst from weave started at 2019-06-23 21:01:57 +0000 UTC (1 container statuses recorded)
+Jun 23 22:25:07.988: INFO: 	Container app ready: true, restart count 0
+Jun 23 22:25:07.988: INFO: sonobuoy-systemd-logs-daemon-set-ad4137666e344d9a-fn99n from heptio-sonobuoy started at 2019-06-23 21:11:51 +0000 UTC (2 container statuses recorded)
+Jun 23 22:25:07.988: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun 23 22:25:07.988: INFO: 	Container systemd-logs ready: true, restart count 1
+Jun 23 22:25:07.988: INFO: nodelocaldns-dfk9g from kube-system started at 2019-06-23 21:01:14 +0000 UTC (1 container statuses recorded)
+Jun 23 22:25:07.988: INFO: 	Container node-cache ready: true, restart count 0
+Jun 23 22:25:07.988: INFO: kubernetes-dashboard-7f5cd8fd66-hc5vw from kube-system started at 2019-06-23 21:01:17 +0000 UTC (1 container statuses recorded)
+Jun 23 22:25:07.988: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
+Jun 23 22:25:07.988: INFO: sonobuoy from heptio-sonobuoy started at 2019-06-23 21:11:45 +0000 UTC (1 container statuses recorded)
+Jun 23 22:25:07.988: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+[It] validates that NodeSelector is respected if not matching  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Trying to schedule Pod with nonempty NodeSelector.
+STEP: Considering event: 
+Type = [Warning], Name = [restricted-pod.15aaf4300aa61e55], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:25:09.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-sched-pred-4frjg" for this suite.
+Jun 23 22:25:15.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:25:15.096: INFO: namespace: e2e-tests-sched-pred-4frjg, resource: bindings, ignored listing per whitelist
+Jun 23 22:25:15.106: INFO: namespace e2e-tests-sched-pred-4frjg deletion completed in 6.091404773s
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
+
+• [SLOW TEST:7.209 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
+  validates that NodeSelector is respected if not matching  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:25:15.106: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating the pod
+Jun 23 22:25:19.711: INFO: Successfully updated pod "annotationupdatec5f62756-9605-11e9-9086-ba438756bc32"
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:25:21.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-c5cr7" for this suite.
+Jun 23 22:25:43.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:25:43.769: INFO: namespace: e2e-tests-projected-c5cr7, resource: bindings, ignored listing per whitelist
+Jun 23 22:25:43.826: INFO: namespace e2e-tests-projected-c5cr7 deletion completed in 22.090227644s
+
+• [SLOW TEST:28.720 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:25:43.827: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating a watch on configmaps
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: closing the watch once it receives two notifications
+Jun 23 22:25:43.907: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q9pqj,SelfLink:/api/v1/namespaces/e2e-tests-watch-q9pqj/configmaps/e2e-watch-test-watch-closed,UID:d715b4ba-9605-11e9-8956-98039b22fc2c,ResourceVersion:14074,Generation:0,CreationTimestamp:2019-06-23 22:25:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},}
+Jun 23 22:25:43.907: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q9pqj,SelfLink:/api/v1/namespaces/e2e-tests-watch-q9pqj/configmaps/e2e-watch-test-watch-closed,UID:d715b4ba-9605-11e9-8956-98039b22fc2c,ResourceVersion:14075,Generation:0,CreationTimestamp:2019-06-23 22:25:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+STEP: modifying the configmap a second time, while the watch is closed
+STEP: creating a new watch on configmaps from the last resource version observed by the first watch
+STEP: deleting the configmap
+STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
+Jun 23 22:25:43.920: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q9pqj,SelfLink:/api/v1/namespaces/e2e-tests-watch-q9pqj/configmaps/e2e-watch-test-watch-closed,UID:d715b4ba-9605-11e9-8956-98039b22fc2c,ResourceVersion:14076,Generation:0,CreationTimestamp:2019-06-23 22:25:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+Jun 23 22:25:43.920: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:e2e-tests-watch-q9pqj,SelfLink:/api/v1/namespaces/e2e-tests-watch-q9pqj/configmaps/e2e-watch-test-watch-closed,UID:d715b4ba-9605-11e9-8956-98039b22fc2c,ResourceVersion:14077,Generation:0,CreationTimestamp:2019-06-23 22:25:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:25:43.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-watch-q9pqj" for this suite.
+Jun 23 22:25:49.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:25:49.974: INFO: namespace: e2e-tests-watch-q9pqj, resource: bindings, ignored listing per whitelist
+Jun 23 22:25:50.017: INFO: namespace e2e-tests-watch-q9pqj deletion completed in 6.092897466s
+
+• [SLOW TEST:6.190 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:25:50.017: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 22:25:50.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dac45213-9605-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-pxxr7" to be "success or failure"
+Jun 23 22:25:50.089: INFO: Pod "downwardapi-volume-dac45213-9605-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627561ms
+Jun 23 22:25:52.093: INFO: Pod "downwardapi-volume-dac45213-9605-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006219135s
+Jun 23 22:25:54.097: INFO: Pod "downwardapi-volume-dac45213-9605-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009969783s
+STEP: Saw pod success
+Jun 23 22:25:54.097: INFO: Pod "downwardapi-volume-dac45213-9605-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:25:54.100: INFO: Trying to get logs from node minion pod downwardapi-volume-dac45213-9605-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 22:25:54.117: INFO: Waiting for pod downwardapi-volume-dac45213-9605-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:25:54.120: INFO: Pod downwardapi-volume-dac45213-9605-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:25:54.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-pxxr7" for this suite.
+Jun 23 22:26:00.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:26:00.186: INFO: namespace: e2e-tests-downward-api-pxxr7, resource: bindings, ignored listing per whitelist
+Jun 23 22:26:00.219: INFO: namespace e2e-tests-downward-api-pxxr7 deletion completed in 6.0957322s
+
+• [SLOW TEST:10.202 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide podname only [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Guestbook application 
+  should create and stop a working application  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:26:00.219: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[It] should create and stop a working application  [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: creating all guestbook components
+Jun 23 22:26:00.292: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: redis-slave
+  labels:
+    app: redis
+    role: slave
+    tier: backend
+spec:
+  ports:
+  - port: 6379
+  selector:
+    app: redis
+    role: slave
+    tier: backend
+
+Jun 23 22:26:00.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-rhn8h'
+Jun 23 22:26:00.538: INFO: stderr: ""
+Jun 23 22:26:00.538: INFO: stdout: "service/redis-slave created\n"
+Jun 23 22:26:00.538: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: redis-master
+  labels:
+    app: redis
+    role: master
+    tier: backend
+spec:
+  ports:
+  - port: 6379
+    targetPort: 6379
+  selector:
+    app: redis
+    role: master
+    tier: backend
+
+Jun 23 22:26:00.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-rhn8h'
+Jun 23 22:26:00.782: INFO: stderr: ""
+Jun 23 22:26:00.782: INFO: stdout: "service/redis-master created\n"
+Jun 23 22:26:00.782: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: frontend
+  labels:
+    app: guestbook
+    tier: frontend
+spec:
+  # if your cluster supports it, uncomment the following to automatically create
+  # an external load-balanced IP for the frontend service.
+  # type: LoadBalancer
+  ports:
+  - port: 80
+  selector:
+    app: guestbook
+    tier: frontend
+
+Jun 23 22:26:00.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-rhn8h'
+Jun 23 22:26:01.057: INFO: stderr: ""
+Jun 23 22:26:01.057: INFO: stdout: "service/frontend created\n"
+Jun 23 22:26:01.057: INFO: apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: frontend
+spec:
+  replicas: 3
+  template:
+    metadata:
+      labels:
+        app: guestbook
+        tier: frontend
+    spec:
+      containers:
+      - name: php-redis
+        image: gcr.io/google-samples/gb-frontend:v6
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        env:
+        - name: GET_HOSTS_FROM
+          value: dns
+          # If your cluster config does not include a dns service, then to
+          # instead access environment variables to find service host
+          # info, comment out the 'value: dns' line above, and uncomment the
+          # line below:
+          # value: env
+        ports:
+        - containerPort: 80
+
+Jun 23 22:26:01.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-rhn8h'
+Jun 23 22:26:01.317: INFO: stderr: ""
+Jun 23 22:26:01.317: INFO: stdout: "deployment.extensions/frontend created\n"
+Jun 23 22:26:01.317: INFO: apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: redis-master
+spec:
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app: redis
+        role: master
+        tier: backend
+    spec:
+      containers:
+      - name: master
+        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        ports:
+        - containerPort: 6379
+
+Jun 23 22:26:01.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-rhn8h'
+Jun 23 22:26:01.576: INFO: stderr: ""
+Jun 23 22:26:01.576: INFO: stdout: "deployment.extensions/redis-master created\n"
+Jun 23 22:26:01.577: INFO: apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: redis-slave
+spec:
+  replicas: 2
+  template:
+    metadata:
+      labels:
+        app: redis
+        role: slave
+        tier: backend
+    spec:
+      containers:
+      - name: slave
+        image: gcr.io/google-samples/gb-redisslave:v3
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        env:
+        - name: GET_HOSTS_FROM
+          value: dns
+          # If your cluster config does not include a dns service, then to
+          # instead access an environment variable to find the master
+          # service's host, comment out the 'value: dns' line above, and
+          # uncomment the line below:
+          # value: env
+        ports:
+        - containerPort: 6379
+
+Jun 23 22:26:01.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-rhn8h'
+Jun 23 22:26:01.894: INFO: stderr: ""
+Jun 23 22:26:01.895: INFO: stdout: "deployment.extensions/redis-slave created\n"
+STEP: validating guestbook app
+Jun 23 22:26:01.895: INFO: Waiting for all frontend pods to be Running.
+Jun 23 22:26:21.946: INFO: Waiting for frontend to serve content.
+Jun 23 22:26:22.978: INFO: Failed to get response from guestbook. err: , response: 
+Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155
+Stack trace:
+#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111)
+#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4)
+#2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters))
+#3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource()
+#4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect()
+#5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
+
+Jun 23 22:26:28.000: INFO: Trying to add a new entry to the guestbook.
+Jun 23 22:26:28.016: INFO: Verifying that added entry can be retrieved.
+STEP: using delete to clean up resources
+Jun 23 22:26:28.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rhn8h'
+Jun 23 22:26:28.155: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 23 22:26:28.155: INFO: stdout: "service \"redis-slave\" force deleted\n"
+STEP: using delete to clean up resources
+Jun 23 22:26:28.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rhn8h'
+Jun 23 22:26:28.307: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 23 22:26:28.307: INFO: stdout: "service \"redis-master\" force deleted\n"
+STEP: using delete to clean up resources
+Jun 23 22:26:28.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rhn8h'
+Jun 23 22:26:28.441: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 23 22:26:28.441: INFO: stdout: "service \"frontend\" force deleted\n"
+STEP: using delete to clean up resources
+Jun 23 22:26:28.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rhn8h'
+Jun 23 22:26:28.571: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 23 22:26:28.571: INFO: stdout: "deployment.extensions \"frontend\" force deleted\n"
+STEP: using delete to clean up resources
+Jun 23 22:26:28.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rhn8h'
+Jun 23 22:26:28.711: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 23 22:26:28.711: INFO: stdout: "deployment.extensions \"redis-master\" force deleted\n"
+STEP: using delete to clean up resources
+Jun 23 22:26:28.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-rhn8h'
+Jun 23 22:26:28.823: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 23 22:26:28.823: INFO: stdout: "deployment.extensions \"redis-slave\" force deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:26:28.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-kubectl-rhn8h" for this suite.
+Jun 23 22:27:18.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:27:18.861: INFO: namespace: e2e-tests-kubectl-rhn8h, resource: bindings, ignored listing per whitelist
+Jun 23 22:27:18.924: INFO: namespace e2e-tests-kubectl-rhn8h deletion completed in 50.097231787s
+
+• [SLOW TEST:78.705 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Guestbook application
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should create and stop a working application  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:27:18.924: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name secret-test-0fc36b81-9606-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 22:27:19.032: INFO: Waiting up to 5m0s for pod "pod-secrets-0fc84860-9606-11e9-9086-ba438756bc32" in namespace "e2e-tests-secrets-7zb2b" to be "success or failure"
+Jun 23 22:27:19.035: INFO: Pod "pod-secrets-0fc84860-9606-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.772715ms
+Jun 23 22:27:21.038: INFO: Pod "pod-secrets-0fc84860-9606-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006509457s
+Jun 23 22:27:23.042: INFO: Pod "pod-secrets-0fc84860-9606-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010382098s
+STEP: Saw pod success
+Jun 23 22:27:23.042: INFO: Pod "pod-secrets-0fc84860-9606-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:27:23.045: INFO: Trying to get logs from node minion pod pod-secrets-0fc84860-9606-11e9-9086-ba438756bc32 container secret-volume-test: 
+STEP: delete the pod
+Jun 23 22:27:23.064: INFO: Waiting for pod pod-secrets-0fc84860-9606-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:27:23.070: INFO: Pod pod-secrets-0fc84860-9606-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:27:23.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-secrets-7zb2b" for this suite.
+Jun 23 22:27:29.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:27:29.162: INFO: namespace: e2e-tests-secrets-7zb2b, resource: bindings, ignored listing per whitelist
+Jun 23 22:27:29.168: INFO: namespace e2e-tests-secrets-7zb2b deletion completed in 6.09436153s
+STEP: Destroying namespace "e2e-tests-secret-namespace-cbnhm" for this suite.
+Jun 23 22:27:35.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:27:35.222: INFO: namespace: e2e-tests-secret-namespace-cbnhm, resource: bindings, ignored listing per whitelist
+Jun 23 22:27:35.259: INFO: namespace e2e-tests-secret-namespace-cbnhm deletion completed in 6.090758478s
+
+• [SLOW TEST:16.334 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
+  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should set DefaultMode on files [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:27:35.259: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should set DefaultMode on files [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test downward API volume plugin
+Jun 23 22:27:35.336: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19801648-9606-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-q4cvl" to be "success or failure"
+Jun 23 22:27:35.339: INFO: Pod "downwardapi-volume-19801648-9606-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.996989ms
+Jun 23 22:27:37.343: INFO: Pod "downwardapi-volume-19801648-9606-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006607445s
+Jun 23 22:27:39.347: INFO: Pod "downwardapi-volume-19801648-9606-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010322914s
+STEP: Saw pod success
+Jun 23 22:27:39.347: INFO: Pod "downwardapi-volume-19801648-9606-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:27:39.350: INFO: Trying to get logs from node minion pod downwardapi-volume-19801648-9606-11e9-9086-ba438756bc32 container client-container: 
+STEP: delete the pod
+Jun 23 22:27:39.368: INFO: Waiting for pod downwardapi-volume-19801648-9606-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:27:39.370: INFO: Pod downwardapi-volume-19801648-9606-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:27:39.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-downward-api-q4cvl" for this suite.
+Jun 23 22:27:45.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:27:45.432: INFO: namespace: e2e-tests-downward-api-q4cvl, resource: bindings, ignored listing per whitelist
+Jun 23 22:27:45.466: INFO: namespace e2e-tests-downward-api-q4cvl deletion completed in 6.091972181s
+
+• [SLOW TEST:10.207 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should set DefaultMode on files [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:27:45.466: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating secret with name projected-secret-test-1f95b300-9606-11e9-9086-ba438756bc32
+STEP: Creating a pod to test consume secrets
+Jun 23 22:27:45.548: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1f963c4d-9606-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-t5hfl" to be "success or failure"
+Jun 23 22:27:45.551: INFO: Pod "pod-projected-secrets-1f963c4d-9606-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.939236ms
+Jun 23 22:27:47.555: INFO: Pod "pod-projected-secrets-1f963c4d-9606-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006568068s
+Jun 23 22:27:49.558: INFO: Pod "pod-projected-secrets-1f963c4d-9606-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010250832s
+STEP: Saw pod success
+Jun 23 22:27:49.559: INFO: Pod "pod-projected-secrets-1f963c4d-9606-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:27:49.561: INFO: Trying to get logs from node minion pod pod-projected-secrets-1f963c4d-9606-11e9-9086-ba438756bc32 container secret-volume-test: 
+STEP: delete the pod
+Jun 23 22:27:49.579: INFO: Waiting for pod pod-projected-secrets-1f963c4d-9606-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:27:49.585: INFO: Pod pod-projected-secrets-1f963c4d-9606-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:27:49.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-projected-t5hfl" for this suite.
+Jun 23 22:27:55.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:27:55.650: INFO: namespace: e2e-tests-projected-t5hfl, resource: bindings, ignored listing per whitelist
+Jun 23 22:27:55.681: INFO: namespace e2e-tests-projected-t5hfl deletion completed in 6.092175434s
+
+• [SLOW TEST:10.215 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Docker Containers 
+  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:27:55.682: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test use defaults
+Jun 23 22:27:55.764: INFO: Waiting up to 5m0s for pod "client-containers-25ad121a-9606-11e9-9086-ba438756bc32" in namespace "e2e-tests-containers-gfvld" to be "success or failure"
+Jun 23 22:27:55.766: INFO: Pod "client-containers-25ad121a-9606-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.780652ms
+Jun 23 22:27:57.770: INFO: Pod "client-containers-25ad121a-9606-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006275496s
+Jun 23 22:27:59.774: INFO: Pod "client-containers-25ad121a-9606-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009971613s
+STEP: Saw pod success
+Jun 23 22:27:59.774: INFO: Pod "client-containers-25ad121a-9606-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:27:59.776: INFO: Trying to get logs from node minion pod client-containers-25ad121a-9606-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 22:27:59.794: INFO: Waiting for pod client-containers-25ad121a-9606-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:27:59.797: INFO: Pod client-containers-25ad121a-9606-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:27:59.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-containers-gfvld" for this suite.
+Jun 23 22:28:05.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:28:05.860: INFO: namespace: e2e-tests-containers-gfvld, resource: bindings, ignored listing per whitelist
+Jun 23 22:28:05.893: INFO: namespace e2e-tests-containers-gfvld deletion completed in 6.092012207s
+
+• [SLOW TEST:10.211 seconds]
+[k8s.io] Docker Containers
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSSSSSS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for node-pod communication: http [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:28:05.893: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for node-pod communication: http [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Performing setup for networking test in namespace e2e-tests-pod-network-test-7fh5z
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jun 23 22:28:05.965: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Jun 23 22:28:24.011: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.251.128.6:8080/hostName | grep -v '^\s*$'] Namespace:e2e-tests-pod-network-test-7fh5z PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 23 22:28:24.011: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+Jun 23 22:28:24.234: INFO: Found all expected endpoints: [netserver-0]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:28:24.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-pod-network-test-7fh5z" for this suite.
+Jun 23 22:28:46.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:28:46.270: INFO: namespace: e2e-tests-pod-network-test-7fh5z, resource: bindings, ignored listing per whitelist
+Jun 23 22:28:46.332: INFO: namespace e2e-tests-pod-network-test-7fh5z deletion completed in 22.094776376s
+
+• [SLOW TEST:40.439 seconds]
+[sig-network] Networking
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for node-pod communication: http [NodeConformance] [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+S
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:28:46.333: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: Creating a pod to test emptydir 0644 on tmpfs
+Jun 23 22:28:46.411: INFO: Waiting up to 5m0s for pod "pod-43dd1e90-9606-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-fcfbf" to be "success or failure"
+Jun 23 22:28:46.413: INFO: Pod "pod-43dd1e90-9606-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.773645ms
+Jun 23 22:28:48.417: INFO: Pod "pod-43dd1e90-9606-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006265145s
+Jun 23 22:28:50.420: INFO: Pod "pod-43dd1e90-9606-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00978843s
+STEP: Saw pod success
+Jun 23 22:28:50.420: INFO: Pod "pod-43dd1e90-9606-11e9-9086-ba438756bc32" satisfied condition "success or failure"
+Jun 23 22:28:50.423: INFO: Trying to get logs from node minion pod pod-43dd1e90-9606-11e9-9086-ba438756bc32 container test-container: 
+STEP: delete the pod
+Jun 23 22:28:50.441: INFO: Waiting for pod pod-43dd1e90-9606-11e9-9086-ba438756bc32 to disappear
+Jun 23 22:28:50.443: INFO: Pod pod-43dd1e90-9606-11e9-9086-ba438756bc32 no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
+Jun 23 22:28:50.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-tests-emptydir-fcfbf" for this suite.
+Jun 23 22:28:56.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 22:28:56.507: INFO: namespace: e2e-tests-emptydir-fcfbf, resource: bindings, ignored listing per whitelist
+Jun 23 22:28:56.539: INFO: namespace e2e-tests-emptydir-fcfbf deletion completed in 6.092323013s
+
+• [SLOW TEST:10.207 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
+  should support (non-root,0644,tmpfs) [NodeConformance] [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+SSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
+  should create a pod from an image when restart is Never [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
+STEP: Creating a kubernetes client
+Jun 23 22:28:56.539: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243
+[BeforeEach] [k8s.io] Kubectl run pod
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1527
+[It] should create a pod from an image when restart is Never [Conformance]
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Jun 23 22:28:56.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=e2e-tests-kubectl-sfkr5'
+Jun 23 22:28:56.732: INFO: stderr: ""
+Jun 23 22:28:56.732: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
+STEP: verifying the pod e2e-test-nginx-pod was created
+[AfterEach] [k8s.io] Kubectl run pod
+ 
/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1532 +Jun 23 22:28:56.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-sfkr5' +Jun 23 22:29:03.769: INFO: stderr: "" +Jun 23 22:29:03.769: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:29:03.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-sfkr5" for this suite. +Jun 23 22:29:09.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:29:09.831: INFO: namespace: e2e-tests-kubectl-sfkr5, resource: bindings, ignored listing per whitelist +Jun 23 22:29:09.871: INFO: namespace e2e-tests-kubectl-sfkr5 deletion completed in 6.092563211s + +• [SLOW TEST:13.332 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl run pod + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should create a pod from an image when restart is Never [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:29:09.872: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Jun 23 22:29:13.971: INFO: Waiting up to 5m0s for pod "client-envvars-544a6543-9606-11e9-9086-ba438756bc32" in namespace "e2e-tests-pods-l2hv7" to be "success or failure" +Jun 23 22:29:13.974: INFO: Pod "client-envvars-544a6543-9606-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.72591ms +Jun 23 22:29:15.977: INFO: Pod "client-envvars-544a6543-9606-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006242745s +Jun 23 22:29:17.981: INFO: Pod "client-envvars-544a6543-9606-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009753031s +STEP: Saw pod success +Jun 23 22:29:17.981: INFO: Pod "client-envvars-544a6543-9606-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 22:29:17.984: INFO: Trying to get logs from node minion pod client-envvars-544a6543-9606-11e9-9086-ba438756bc32 container env3cont: +STEP: delete the pod +Jun 23 22:29:18.002: INFO: Waiting for pod client-envvars-544a6543-9606-11e9-9086-ba438756bc32 to disappear +Jun 23 22:29:18.006: INFO: Pod client-envvars-544a6543-9606-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:29:18.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-pods-l2hv7" for this suite. +Jun 23 22:30:08.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:30:08.091: INFO: namespace: e2e-tests-pods-l2hv7, resource: bindings, ignored listing per whitelist +Jun 23 22:30:08.100: INFO: namespace e2e-tests-pods-l2hv7 deletion completed in 50.090827137s + +• [SLOW TEST:58.229 seconds] +[k8s.io] Pods +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Update Demo + should scale a replication controller [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:30:08.100: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Update Demo + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:295 +[It] should scale a replication controller [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating a replication controller +Jun 23 22:30:08.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:08.405: INFO: stderr: "" +Jun 23 22:30:08.405: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. 
+Jun 23 22:30:08.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:08.563: INFO: stderr: "" +Jun 23 22:30:08.563: INFO: stdout: "update-demo-nautilus-ncq5k update-demo-nautilus-sgmkt " +Jun 23 22:30:08.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-ncq5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:08.685: INFO: stderr: "" +Jun 23 22:30:08.685: INFO: stdout: "" +Jun 23 22:30:08.685: INFO: update-demo-nautilus-ncq5k is created but not running +Jun 23 22:30:13.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:13.847: INFO: stderr: "" +Jun 23 22:30:13.847: INFO: stdout: "update-demo-nautilus-ncq5k update-demo-nautilus-sgmkt " +Jun 23 22:30:13.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-ncq5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:13.984: INFO: stderr: "" +Jun 23 22:30:13.984: INFO: stdout: "true" +Jun 23 22:30:13.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-ncq5k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:14.138: INFO: stderr: "" +Jun 23 22:30:14.138: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 23 22:30:14.138: INFO: validating pod update-demo-nautilus-ncq5k +Jun 23 22:30:14.144: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 23 22:30:14.144: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 23 22:30:14.144: INFO: update-demo-nautilus-ncq5k is verified up and running +Jun 23 22:30:14.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-sgmkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:14.284: INFO: stderr: "" +Jun 23 22:30:14.284: INFO: stdout: "true" +Jun 23 22:30:14.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-sgmkt -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:14.429: INFO: stderr: "" +Jun 23 22:30:14.429: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 23 22:30:14.429: INFO: validating pod update-demo-nautilus-sgmkt +Jun 23 22:30:14.436: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 23 22:30:14.436: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 23 22:30:14.436: INFO: update-demo-nautilus-sgmkt is verified up and running +STEP: scaling down the replication controller +Jun 23 22:30:14.439: INFO: scanned /root for discovery docs: +Jun 23 22:30:14.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:15.608: INFO: stderr: "" +Jun 23 22:30:15.608: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jun 23 22:30:15.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:15.768: INFO: stderr: "" +Jun 23 22:30:15.768: INFO: stdout: "update-demo-nautilus-ncq5k update-demo-nautilus-sgmkt " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Jun 23 22:30:20.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:20.911: INFO: stderr: "" +Jun 23 22:30:20.911: INFO: stdout: "update-demo-nautilus-ncq5k update-demo-nautilus-sgmkt " +STEP: Replicas for name=update-demo: expected=1 actual=2 +Jun 23 22:30:25.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:26.039: INFO: stderr: "" +Jun 23 22:30:26.039: INFO: stdout: "update-demo-nautilus-sgmkt " +Jun 23 22:30:26.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-sgmkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:26.170: INFO: stderr: "" +Jun 23 22:30:26.170: INFO: stdout: "true" +Jun 23 22:30:26.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-sgmkt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:26.295: INFO: stderr: "" +Jun 23 22:30:26.295: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 23 22:30:26.295: INFO: validating pod update-demo-nautilus-sgmkt +Jun 23 22:30:26.299: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 23 22:30:26.299: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Jun 23 22:30:26.299: INFO: update-demo-nautilus-sgmkt is verified up and running +STEP: scaling up the replication controller +Jun 23 22:30:26.303: INFO: scanned /root for discovery docs: +Jun 23 22:30:26.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:27.457: INFO: stderr: "" +Jun 23 22:30:27.457: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jun 23 22:30:27.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:27.589: INFO: stderr: "" +Jun 23 22:30:27.589: INFO: stdout: "update-demo-nautilus-sgmkt update-demo-nautilus-tr85f " +Jun 23 22:30:27.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-sgmkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:27.735: INFO: stderr: "" +Jun 23 22:30:27.735: INFO: stdout: "true" +Jun 23 22:30:27.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-sgmkt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:27.864: INFO: stderr: "" +Jun 23 22:30:27.864: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 23 22:30:27.864: INFO: validating pod update-demo-nautilus-sgmkt +Jun 23 22:30:27.869: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 23 22:30:27.869: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 23 22:30:27.869: INFO: update-demo-nautilus-sgmkt is verified up and running +Jun 23 22:30:27.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-tr85f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:28.001: INFO: stderr: "" +Jun 23 22:30:28.001: INFO: stdout: "" +Jun 23 22:30:28.001: INFO: update-demo-nautilus-tr85f is created but not running +Jun 23 22:30:33.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:33.154: INFO: stderr: "" +Jun 23 22:30:33.154: INFO: stdout: "update-demo-nautilus-sgmkt update-demo-nautilus-tr85f " +Jun 23 22:30:33.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-sgmkt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:33.292: INFO: stderr: "" +Jun 23 22:30:33.292: INFO: stdout: "true" +Jun 23 22:30:33.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-sgmkt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:33.424: INFO: stderr: "" +Jun 23 22:30:33.425: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 23 22:30:33.425: INFO: validating pod update-demo-nautilus-sgmkt +Jun 23 22:30:33.429: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 23 22:30:33.429: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 23 22:30:33.429: INFO: update-demo-nautilus-sgmkt is verified up and running +Jun 23 22:30:33.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-tr85f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:33.563: INFO: stderr: "" +Jun 23 22:30:33.563: INFO: stdout: "true" +Jun 23 22:30:33.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods update-demo-nautilus-tr85f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:33.711: INFO: stderr: "" +Jun 23 22:30:33.711: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 23 22:30:33.711: INFO: validating pod update-demo-nautilus-tr85f +Jun 23 22:30:33.718: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 23 22:30:33.718: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 23 22:30:33.718: INFO: update-demo-nautilus-tr85f is verified up and running +STEP: using delete to clean up resources +Jun 23 22:30:33.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:33.855: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" +Jun 23 22:30:33.855: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" +Jun 23 22:30:33.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get rc,svc -l name=update-demo --no-headers --namespace=e2e-tests-kubectl-wjwlq' +Jun 23 22:30:33.998: INFO: stderr: "No resources found.\n" +Jun 23 22:30:33.998: INFO: stdout: "" +Jun 23 22:30:33.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -l name=update-demo --namespace=e2e-tests-kubectl-wjwlq -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 23 22:30:34.145: INFO: stderr: "" +Jun 23 22:30:34.146: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:30:34.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-wjwlq" for this suite. +Jun 23 22:30:56.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:30:56.239: INFO: namespace: e2e-tests-kubectl-wjwlq, resource: bindings, ignored listing per whitelist +Jun 23 22:30:56.239: INFO: namespace e2e-tests-kubectl-wjwlq deletion completed in 22.089977336s + +• [SLOW TEST:48.139 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Update Demo + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should scale a replication controller [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:30:56.240: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating pod pod-subpath-test-downwardapi-lxlp +STEP: Creating a pod to test atomic-volume-subpath +Jun 23 22:30:56.324: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-lxlp" in namespace "e2e-tests-subpath-twzbg" to be "success or failure" +Jun 23 22:30:56.327: INFO: Pod 
"pod-subpath-test-downwardapi-lxlp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.740983ms +Jun 23 22:30:58.331: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006433766s +Jun 23 22:31:00.335: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Running", Reason="", readiness=false. Elapsed: 4.01011366s +Jun 23 22:31:02.338: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Running", Reason="", readiness=false. Elapsed: 6.014018162s +Jun 23 22:31:04.342: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Running", Reason="", readiness=false. Elapsed: 8.017649702s +Jun 23 22:31:06.346: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Running", Reason="", readiness=false. Elapsed: 10.021479067s +Jun 23 22:31:08.350: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Running", Reason="", readiness=false. Elapsed: 12.025197992s +Jun 23 22:31:10.353: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Running", Reason="", readiness=false. Elapsed: 14.028781337s +Jun 23 22:31:12.357: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Running", Reason="", readiness=false. Elapsed: 16.032383848s +Jun 23 22:31:14.361: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Running", Reason="", readiness=false. Elapsed: 18.036633585s +Jun 23 22:31:16.365: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Running", Reason="", readiness=false. Elapsed: 20.04033934s +Jun 23 22:31:18.368: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Running", Reason="", readiness=false. Elapsed: 22.044013394s +Jun 23 22:31:20.372: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Running", Reason="", readiness=false. Elapsed: 24.04772427s +Jun 23 22:31:22.376: INFO: Pod "pod-subpath-test-downwardapi-lxlp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.051436684s +STEP: Saw pod success +Jun 23 22:31:22.376: INFO: Pod "pod-subpath-test-downwardapi-lxlp" satisfied condition "success or failure" +Jun 23 22:31:22.379: INFO: Trying to get logs from node minion pod pod-subpath-test-downwardapi-lxlp container test-container-subpath-downwardapi-lxlp: +STEP: delete the pod +Jun 23 22:31:22.399: INFO: Waiting for pod pod-subpath-test-downwardapi-lxlp to disappear +Jun 23 22:31:22.401: INFO: Pod pod-subpath-test-downwardapi-lxlp no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-lxlp +Jun 23 22:31:22.401: INFO: Deleting pod "pod-subpath-test-downwardapi-lxlp" in namespace "e2e-tests-subpath-twzbg" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:31:22.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-subpath-twzbg" for this suite. 
+Jun 23 22:31:28.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:31:28.487: INFO: namespace: e2e-tests-subpath-twzbg, resource: bindings, ignored listing per whitelist +Jun 23 22:31:28.500: INFO: namespace e2e-tests-subpath-twzbg deletion completed in 6.0923934s + +• [SLOW TEST:32.260 seconds] +[sig-storage] Subpath +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with downward pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-api-machinery] Garbage collector + should orphan pods created by rc if delete options say so [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:31:28.500: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan pods created by rc if delete options say so [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods +STEP: Gathering metrics +Jun 23 22:32:08.630: INFO: For apiserver_request_count: +For apiserver_request_latencies_summary: +For etcd_helper_cache_entry_count: +For etcd_helper_cache_hit_count: +For etcd_helper_cache_miss_count: +For etcd_request_cache_add_latencies_summary: +For etcd_request_cache_get_latencies_summary: +For etcd_request_latencies_summary: +For garbage_collector_attempt_to_delete_queue_latency: + [quantile=0.5] = 12 + [quantile=0.9] = 31 + [quantile=0.99] = 44 +For garbage_collector_attempt_to_delete_work_duration: + [quantile=0.5] = 8093 + [quantile=0.9] = 10735 + [quantile=0.99] = 15711 +For garbage_collector_attempt_to_orphan_queue_latency: + [quantile=0.5] = 8 + [quantile=0.9] = 8 + [quantile=0.99] = 8 +For garbage_collector_attempt_to_orphan_work_duration: + [quantile=0.5] = 57632 + [quantile=0.9] = 57632 + [quantile=0.99] = 57632 +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: + [quantile=0.5] = 6 + [quantile=0.9] = 9 + [quantile=0.99] = 30 +For garbage_collector_graph_changes_work_duration: + [quantile=0.5] = 19 + [quantile=0.9] = 30 + [quantile=0.99] = 50 +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: + [quantile=0.5] = 18 + [quantile=0.9] = 24 + [quantile=0.99] = 47 +For namespace_queue_latency_sum: + [] = 8782 +For namespace_queue_latency_count: + [] = 465 +For namespace_retries: + [] = 
471 +For namespace_work_duration: + [quantile=0.5] = 173366 + [quantile=0.9] = 213666 + [quantile=0.99] = 260357 +For namespace_work_duration_sum: + [] = 61383178 +For namespace_work_duration_count: + [] = 465 +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:32:08.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-gc-t4h87" for this suite. +Jun 23 22:32:14.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:32:14.709: INFO: namespace: e2e-tests-gc-t4h87, resource: bindings, ignored listing per whitelist +Jun 23 22:32:14.725: INFO: namespace e2e-tests-gc-t4h87 deletion completed in 6.091052902s + +• [SLOW TEST:46.224 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should orphan pods created by rc if delete options say so [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:32:14.725: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a pod in the namespace +STEP: Waiting for the pod to have running status +STEP: Creating an uninitialized pod in the namespace +Jun 23 22:32:20.840: INFO: error from create uninitialized namespace: +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. +STEP: Recreating the namespace +STEP: Verifying there are no pods in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:32:44.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-namespaces-sw8xp" for this suite. 
+Jun 23 22:32:50.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:32:50.953: INFO: namespace: e2e-tests-namespaces-sw8xp, resource: bindings, ignored listing per whitelist +Jun 23 22:32:50.982: INFO: namespace e2e-tests-namespaces-sw8xp deletion completed in 6.092591105s +STEP: Destroying namespace "e2e-tests-nsdeletetest-xwk8f" for this suite. +Jun 23 22:32:50.985: INFO: Namespace e2e-tests-nsdeletetest-xwk8f was already deleted +STEP: Destroying namespace "e2e-tests-nsdeletetest-4qrz4" for this suite. +Jun 23 22:32:56.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:32:57.023: INFO: namespace: e2e-tests-nsdeletetest-4qrz4, resource: bindings, ignored listing per whitelist +Jun 23 22:32:57.085: INFO: namespace e2e-tests-nsdeletetest-4qrz4 deletion completed in 6.100466442s + +• [SLOW TEST:42.360 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should ensure that all pods are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl logs + should be able to retrieve and filter logs [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:32:57.085: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Kubectl logs + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134 +STEP: creating an rc +Jun 23 22:32:57.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-45x8k' +Jun 23 22:32:57.399: INFO: stderr: "" +Jun 23 22:32:57.399: INFO: stdout: "replicationcontroller/redis-master created\n" +[It] should be able to retrieve and filter logs [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Waiting for Redis master to start. +Jun 23 22:32:58.403: INFO: Selector matched 1 pods for map[app:redis] +Jun 23 22:32:58.403: INFO: Found 0 / 1 +Jun 23 22:32:59.403: INFO: Selector matched 1 pods for map[app:redis] +Jun 23 22:32:59.403: INFO: Found 0 / 1 +Jun 23 22:33:00.403: INFO: Selector matched 1 pods for map[app:redis] +Jun 23 22:33:00.403: INFO: Found 1 / 1 +Jun 23 22:33:00.403: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 +Jun 23 22:33:00.407: INFO: Selector matched 1 pods for map[app:redis] +Jun 23 22:33:00.407: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +STEP: checking for a matching strings +Jun 23 22:33:00.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 logs redis-master-xrn5v redis-master --namespace=e2e-tests-kubectl-45x8k' +Jun 23 22:33:00.563: INFO: stderr: "" +Jun 23 22:33:00.563: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Jun 22:32:59.120 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Jun 22:32:59.120 # Server started, Redis version 3.2.12\n1:M 23 Jun 22:32:59.120 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Jun 22:32:59.120 * The server is now ready to accept connections on port 6379\n" +STEP: limiting log lines +Jun 23 22:33:00.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 log redis-master-xrn5v redis-master --namespace=e2e-tests-kubectl-45x8k --tail=1' +Jun 23 22:33:00.704: INFO: stderr: "" +Jun 23 22:33:00.704: INFO: stdout: "1:M 23 Jun 22:32:59.120 * The server is now ready to accept connections on port 6379\n" +STEP: limiting log bytes +Jun 23 22:33:00.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 log redis-master-xrn5v redis-master --namespace=e2e-tests-kubectl-45x8k --limit-bytes=1' +Jun 23 22:33:00.839: INFO: stderr: "" +Jun 23 22:33:00.839: INFO: stdout: " " +STEP: exposing timestamps +Jun 23 22:33:00.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 log redis-master-xrn5v redis-master --namespace=e2e-tests-kubectl-45x8k --tail=1 --timestamps' +Jun 23 22:33:00.977: INFO: stderr: "" +Jun 23 22:33:00.977: INFO: stdout: "2019-06-23T22:32:59.120744474Z 1:M 23 Jun 22:32:59.120 * The server is now ready to accept connections on port 6379\n" +STEP: restricting to a time range +Jun 23 22:33:03.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 log redis-master-xrn5v redis-master --namespace=e2e-tests-kubectl-45x8k --since=1s' +Jun 23 22:33:03.635: INFO: stderr: "" +Jun 23 22:33:03.635: INFO: stdout: "" +Jun 23 22:33:03.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 log redis-master-xrn5v redis-master --namespace=e2e-tests-kubectl-45x8k --since=24h' +Jun 23 22:33:03.781: INFO: stderr: "" +Jun 23 22:33:03.781: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Jun 22:32:59.120 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Jun 22:32:59.120 # Server started, Redis version 3.2.12\n1:M 23 Jun 22:32:59.120 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Jun 22:32:59.120 * The server is now ready to accept connections on port 6379\n" +[AfterEach] [k8s.io] Kubectl logs + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1140 +STEP: using delete to clean up resources +Jun 23 22:33:03.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-45x8k' +Jun 23 22:33:03.924: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 23 22:33:03.924: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" +Jun 23 22:33:03.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get rc,svc -l name=nginx --no-headers --namespace=e2e-tests-kubectl-45x8k' +Jun 23 22:33:04.062: INFO: stderr: "No resources found.\n" +Jun 23 22:33:04.062: INFO: stdout: "" +Jun 23 22:33:04.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -l name=nginx --namespace=e2e-tests-kubectl-45x8k -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 23 22:33:04.216: INFO: stderr: "" +Jun 23 22:33:04.216: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:33:04.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-45x8k" for this suite. 
+Jun 23 22:33:26.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:33:26.241: INFO: namespace: e2e-tests-kubectl-45x8k, resource: bindings, ignored listing per whitelist +Jun 23 22:33:26.312: INFO: namespace e2e-tests-kubectl-45x8k deletion completed in 22.091538348s + +• [SLOW TEST:29.227 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl logs + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should be able to retrieve and filter logs [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates resource limits of pods that are allowed to run [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:33:26.312: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 +Jun 23 22:33:26.382: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jun 23 22:33:26.389: INFO: Waiting for terminating namespaces to be deleted... 
+Jun 23 22:33:26.392: INFO: +Logging pods the kubelet thinks is on node minion before test +Jun 23 22:33:26.402: INFO: weave-scope-agent-97sw9 from weave started at 2019-06-23 21:01:57 +0000 UTC (1 container statuses recorded) +Jun 23 22:33:26.402: INFO: Container agent ready: true, restart count 0 +Jun 23 22:33:26.402: INFO: weave-net-6ckzc from kube-system started at 2019-06-23 21:00:31 +0000 UTC (2 container statuses recorded) +Jun 23 22:33:26.402: INFO: Container weave ready: true, restart count 0 +Jun 23 22:33:26.402: INFO: Container weave-npc ready: true, restart count 0 +Jun 23 22:33:26.402: INFO: weave-scope-app-554f7c7d88-5gkst from weave started at 2019-06-23 21:01:57 +0000 UTC (1 container statuses recorded) +Jun 23 22:33:26.402: INFO: Container app ready: true, restart count 0 +Jun 23 22:33:26.402: INFO: sonobuoy-systemd-logs-daemon-set-ad4137666e344d9a-fn99n from heptio-sonobuoy started at 2019-06-23 21:11:51 +0000 UTC (2 container statuses recorded) +Jun 23 22:33:26.402: INFO: Container sonobuoy-worker ready: true, restart count 1 +Jun 23 22:33:26.402: INFO: Container systemd-logs ready: true, restart count 1 +Jun 23 22:33:26.402: INFO: nodelocaldns-dfk9g from kube-system started at 2019-06-23 21:01:14 +0000 UTC (1 container statuses recorded) +Jun 23 22:33:26.402: INFO: Container node-cache ready: true, restart count 0 +Jun 23 22:33:26.402: INFO: kubernetes-dashboard-7f5cd8fd66-hc5vw from kube-system started at 2019-06-23 21:01:17 +0000 UTC (1 container statuses recorded) +Jun 23 22:33:26.402: INFO: Container kubernetes-dashboard ready: true, restart count 0 +Jun 23 22:33:26.402: INFO: sonobuoy from heptio-sonobuoy started at 2019-06-23 21:11:45 +0000 UTC (1 container statuses recorded) +Jun 23 22:33:26.402: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jun 23 22:33:26.402: INFO: kube-proxy-vhhgh from kube-system started at 2019-06-23 21:00:40 +0000 UTC (1 container statuses recorded) +Jun 23 22:33:26.402: INFO: Container kube-proxy ready: true, restart count 0 +Jun 23 22:33:26.402: INFO: coredns-f9d858bbd-xfbr4 from kube-system started at 2019-06-23 21:01:13 +0000 UTC (1 container statuses recorded) +Jun 23 22:33:26.402: INFO: Container coredns ready: true, restart count 0 +Jun 23 22:33:26.402: INFO: sonobuoy-e2e-job-b3c813a489584c2d from heptio-sonobuoy started at 2019-06-23 21:11:51 +0000 UTC (2 container statuses recorded) +Jun 23 22:33:26.402: INFO: Container e2e ready: true, restart count 0 +Jun 23 22:33:26.402: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 23 22:33:26.402: INFO: nginx-proxy-minion from kube-system started at (0 container statuses recorded) +[It] validates resource limits of pods that are allowed to run [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: verifying the node has the label node minion +Jun 23 22:33:26.428: INFO: Pod sonobuoy requesting resource cpu=0m on Node minion +Jun 23 22:33:26.428: INFO: Pod sonobuoy-e2e-job-b3c813a489584c2d requesting resource cpu=0m on Node minion +Jun 23 22:33:26.428: INFO: Pod sonobuoy-systemd-logs-daemon-set-ad4137666e344d9a-fn99n requesting resource cpu=0m on Node minion +Jun 23 22:33:26.428: INFO: Pod coredns-f9d858bbd-xfbr4 requesting resource cpu=100m on Node minion +Jun 23 22:33:26.428: INFO: Pod kube-proxy-vhhgh requesting resource cpu=0m on Node minion +Jun 23 22:33:26.428: INFO: Pod kubernetes-dashboard-7f5cd8fd66-hc5vw requesting resource cpu=50m on 
Node minion +Jun 23 22:33:26.428: INFO: Pod nginx-proxy-minion requesting resource cpu=25m on Node minion +Jun 23 22:33:26.428: INFO: Pod nodelocaldns-dfk9g requesting resource cpu=100m on Node minion +Jun 23 22:33:26.428: INFO: Pod weave-net-6ckzc requesting resource cpu=20m on Node minion +Jun 23 22:33:26.428: INFO: Pod weave-scope-agent-97sw9 requesting resource cpu=0m on Node minion +Jun 23 22:33:26.428: INFO: Pod weave-scope-app-554f7c7d88-5gkst requesting resource cpu=0m on Node minion +STEP: Starting Pods to consume most of the cluster CPU. +STEP: Creating another pod that requires unavailable amount of CPU. +STEP: Considering event: +Type = [Normal], Name = [filler-pod-eac5711b-9606-11e9-9086-ba438756bc32.15aaf4a4179959e2], Reason = [Scheduled], Message = [Successfully assigned e2e-tests-sched-pred-bl74z/filler-pod-eac5711b-9606-11e9-9086-ba438756bc32 to minion] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-eac5711b-9606-11e9-9086-ba438756bc32.15aaf4a46a541e55], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-eac5711b-9606-11e9-9086-ba438756bc32.15aaf4a46f75627d], Reason = [Created], Message = [Created container] +STEP: Considering event: +Type = [Normal], Name = [filler-pod-eac5711b-9606-11e9-9086-ba438756bc32.15aaf4a4822cf116], Reason = [Started], Message = [Started container] +STEP: Considering event: +Type = [Warning], Name = [additional-pod.15aaf4a506b7c492], Reason = [FailedScheduling], Message = [0/2 nodes are available: 1 Insufficient cpu, 1 node(s) had taints that the pod didn't tolerate.] +STEP: removing the label node off the node minion +STEP: verifying the node doesn't have the label node +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:33:31.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-sched-pred-bl74z" for this suite. 
+Jun 23 22:33:37.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:33:37.534: INFO: namespace: e2e-tests-sched-pred-bl74z, resource: bindings, ignored listing per whitelist +Jun 23 22:33:37.568: INFO: namespace e2e-tests-sched-pred-bl74z deletion completed in 6.095407769s +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 + +• [SLOW TEST:11.256 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 + validates resource limits of pods that are allowed to run [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with configmap pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:33:37.568: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with configmap pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating pod pod-subpath-test-configmap-8ttq +STEP: Creating a pod to test atomic-volume-subpath +Jun 23 22:33:37.660: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8ttq" in namespace "e2e-tests-subpath-4t5xh" to be "success or failure" +Jun 23 22:33:37.663: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.67086ms +Jun 23 22:33:39.667: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006245289s +Jun 23 22:33:41.670: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Running", Reason="", readiness=false. Elapsed: 4.009847911s +Jun 23 22:33:43.674: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Running", Reason="", readiness=false. Elapsed: 6.01352182s +Jun 23 22:33:45.678: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Running", Reason="", readiness=false. Elapsed: 8.017100614s +Jun 23 22:33:47.681: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Running", Reason="", readiness=false. Elapsed: 10.020692096s +Jun 23 22:33:49.685: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Running", Reason="", readiness=false. Elapsed: 12.024121843s +Jun 23 22:33:51.688: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.027610595s +Jun 23 22:33:53.692: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Running", Reason="", readiness=false. Elapsed: 16.03125479s +Jun 23 22:33:55.695: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Running", Reason="", readiness=false. Elapsed: 18.034970391s +Jun 23 22:33:57.699: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Running", Reason="", readiness=false. Elapsed: 20.038583687s +Jun 23 22:33:59.703: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Running", Reason="", readiness=false. Elapsed: 22.042009903s +Jun 23 22:34:01.706: INFO: Pod "pod-subpath-test-configmap-8ttq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.045746787s +STEP: Saw pod success +Jun 23 22:34:01.706: INFO: Pod "pod-subpath-test-configmap-8ttq" satisfied condition "success or failure" +Jun 23 22:34:01.709: INFO: Trying to get logs from node minion pod pod-subpath-test-configmap-8ttq container test-container-subpath-configmap-8ttq: +STEP: delete the pod +Jun 23 22:34:01.728: INFO: Waiting for pod pod-subpath-test-configmap-8ttq to disappear +Jun 23 22:34:01.733: INFO: Pod pod-subpath-test-configmap-8ttq no longer exists +STEP: Deleting pod pod-subpath-test-configmap-8ttq +Jun 23 22:34:01.734: INFO: Deleting pod "pod-subpath-test-configmap-8ttq" in namespace "e2e-tests-subpath-4t5xh" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:34:01.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-subpath-4t5xh" for this suite. +Jun 23 22:34:07.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:34:07.779: INFO: namespace: e2e-tests-subpath-4t5xh, resource: bindings, ignored listing per whitelist +Jun 23 22:34:07.830: INFO: namespace e2e-tests-subpath-4t5xh deletion completed in 6.090581682s + +• [SLOW TEST:30.262 seconds] +[sig-storage] Subpath +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with configmap pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl patch + should add annotations for pods in rc [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:34:07.830: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[It] should add 
annotations for pods in rc [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating Redis RC +Jun 23 22:34:07.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-2jdf4' +Jun 23 22:34:08.125: INFO: stderr: "" +Jun 23 22:34:08.125: INFO: stdout: "replicationcontroller/redis-master created\n" +STEP: Waiting for Redis master to start. +Jun 23 22:34:09.129: INFO: Selector matched 1 pods for map[app:redis] +Jun 23 22:34:09.129: INFO: Found 0 / 1 +Jun 23 22:34:10.129: INFO: Selector matched 1 pods for map[app:redis] +Jun 23 22:34:10.129: INFO: Found 0 / 1 +Jun 23 22:34:11.129: INFO: Selector matched 1 pods for map[app:redis] +Jun 23 22:34:11.129: INFO: Found 1 / 1 +Jun 23 22:34:11.129: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +STEP: patching all pods +Jun 23 22:34:11.132: INFO: Selector matched 1 pods for map[app:redis] +Jun 23 22:34:11.132: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Jun 23 22:34:11.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 patch pod redis-master-g24dt --namespace=e2e-tests-kubectl-2jdf4 -p {"metadata":{"annotations":{"x":"y"}}}' +Jun 23 22:34:11.257: INFO: stderr: "" +Jun 23 22:34:11.257: INFO: stdout: "pod/redis-master-g24dt patched\n" +STEP: checking annotations +Jun 23 22:34:11.261: INFO: Selector matched 1 pods for map[app:redis] +Jun 23 22:34:11.261: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:34:11.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-2jdf4" for this suite. 
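The kubectl patch test above issues a strategic-merge patch against the RC's pod. The same commands, with the pod and namespace names taken from this run, can be replayed as:

```sh
# The patch the test runs, followed by a check that the annotation landed:
kubectl patch pod redis-master-g24dt -n e2e-tests-kubectl-2jdf4 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod redis-master-g24dt -n e2e-tests-kubectl-2jdf4 \
  -o jsonpath='{.metadata.annotations.x}'    # prints: y
```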
+Jun 23 22:34:41.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:34:41.344: INFO: namespace: e2e-tests-kubectl-2jdf4, resource: bindings, ignored listing per whitelist +Jun 23 22:34:41.356: INFO: namespace e2e-tests-kubectl-2jdf4 deletion completed in 30.091316347s + +• [SLOW TEST:33.526 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl patch + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should add annotations for pods in rc [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:34:41.357: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test substitution in container's command +Jun 23 22:34:41.433: INFO: Waiting up to 5m0s for pod "var-expansion-17793631-9607-11e9-9086-ba438756bc32" in namespace "e2e-tests-var-expansion-4bhh5" to be "success or failure" +Jun 23 22:34:41.435: INFO: Pod "var-expansion-17793631-9607-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.777791ms +Jun 23 22:34:43.439: INFO: Pod "var-expansion-17793631-9607-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006332472s +Jun 23 22:34:45.442: INFO: Pod "var-expansion-17793631-9607-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009486641s +STEP: Saw pod success +Jun 23 22:34:45.442: INFO: Pod "var-expansion-17793631-9607-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 22:34:45.445: INFO: Trying to get logs from node minion pod var-expansion-17793631-9607-11e9-9086-ba438756bc32 container dapi-container: +STEP: delete the pod +Jun 23 22:34:45.461: INFO: Waiting for pod var-expansion-17793631-9607-11e9-9086-ba438756bc32 to disappear +Jun 23 22:34:45.464: INFO: Pod var-expansion-17793631-9607-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:34:45.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-var-expansion-4bhh5" for this suite. 
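The variable-expansion test above verifies that `$(VAR)` references in a container's command are expanded from the pod's environment before the container starts. A minimal sketch with assumed names:

```sh
# Illustrative pod: Kubernetes expands $(MESSAGE) in the command from the
# container's env, so the container echoes the value rather than the literal.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo       # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "hello from env expansion"
EOF
kubectl logs var-expansion-demo  # prints the expanded MESSAGE value
```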
+Jun 23 22:34:51.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:34:51.547: INFO: namespace: e2e-tests-var-expansion-4bhh5, resource: bindings, ignored listing per whitelist +Jun 23 22:34:51.560: INFO: namespace e2e-tests-var-expansion-4bhh5 deletion completed in 6.092025982s + +• [SLOW TEST:10.203 seconds] +[k8s.io] Variable Expansion +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:34:51.560: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Jun 23 22:34:51.816: INFO: Pod name wrapped-volume-race-1da8a6c3-9607-11e9-9086-ba438756bc32: Found 0 pods out of 5 +Jun 23 22:34:56.823: INFO: Pod name wrapped-volume-race-1da8a6c3-9607-11e9-9086-ba438756bc32: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-1da8a6c3-9607-11e9-9086-ba438756bc32 in namespace e2e-tests-emptydir-wrapper-xpzdr, will wait for the garbage collector to delete the pods +Jun 23 22:36:42.905: INFO: Deleting ReplicationController wrapped-volume-race-1da8a6c3-9607-11e9-9086-ba438756bc32 took: 6.868748ms +Jun 23 22:36:43.006: INFO: Terminating ReplicationController wrapped-volume-race-1da8a6c3-9607-11e9-9086-ba438756bc32 pods took: 100.180354ms +STEP: Creating RC which spawns configmap-volume pods +Jun 23 22:37:23.821: INFO: Pod name wrapped-volume-race-78425a13-9607-11e9-9086-ba438756bc32: Found 0 pods out of 5 +Jun 23 22:37:28.829: INFO: Pod name wrapped-volume-race-78425a13-9607-11e9-9086-ba438756bc32: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-78425a13-9607-11e9-9086-ba438756bc32 in namespace e2e-tests-emptydir-wrapper-xpzdr, will wait for the garbage collector to delete the pods +Jun 23 22:39:14.915: INFO: Deleting ReplicationController wrapped-volume-race-78425a13-9607-11e9-9086-ba438756bc32 took: 7.171202ms +Jun 23 22:39:15.016: INFO: Terminating ReplicationController wrapped-volume-race-78425a13-9607-11e9-9086-ba438756bc32 pods took: 100.205767ms +STEP: Creating RC which spawns configmap-volume pods +Jun 23 22:39:54.831: INFO: Pod 
name wrapped-volume-race-d244acf2-9607-11e9-9086-ba438756bc32: Found 0 pods out of 5 +Jun 23 22:39:59.839: INFO: Pod name wrapped-volume-race-d244acf2-9607-11e9-9086-ba438756bc32: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-d244acf2-9607-11e9-9086-ba438756bc32 in namespace e2e-tests-emptydir-wrapper-xpzdr, will wait for the garbage collector to delete the pods +Jun 23 22:42:27.926: INFO: Deleting ReplicationController wrapped-volume-race-d244acf2-9607-11e9-9086-ba438756bc32 took: 7.12136ms +Jun 23 22:42:28.027: INFO: Terminating ReplicationController wrapped-volume-race-d244acf2-9607-11e9-9086-ba438756bc32 pods took: 100.208302ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:43:05.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-wrapper-xpzdr" for this suite. +Jun 23 22:43:11.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:43:11.419: INFO: namespace: e2e-tests-emptydir-wrapper-xpzdr, resource: bindings, ignored listing per whitelist +Jun 23 22:43:11.464: INFO: namespace e2e-tests-emptydir-wrapper-xpzdr deletion completed in 6.089946665s + +• [SLOW TEST:499.904 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + should not cause race condition when used for configmaps [Serial] [Slow] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:43:11.464: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[It] should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: validating api versions +Jun 23 22:43:11.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 api-versions' +Jun 23 22:43:11.652: INFO: stderr: "" +Jun 23 22:43:11.653: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:43:11.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-9dxr2" for this suite. +Jun 23 22:43:17.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:43:17.714: INFO: namespace: e2e-tests-kubectl-9dxr2, resource: bindings, ignored listing per whitelist +Jun 23 22:43:17.753: INFO: namespace e2e-tests-kubectl-9dxr2 deletion completed in 6.096529347s + +• [SLOW TEST:6.289 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl api-versions + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:43:17.753: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:132 +[It] should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Jun 23 22:43:22.347: INFO: Successfully updated pod "pod-update-4b44d9c9-9608-11e9-9086-ba438756bc32" +STEP: verifying the updated pod is in kubernetes +Jun 23 22:43:22.353: INFO: Pod update OK +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:43:22.353: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-pods-2ccs6" for this suite. +Jun 23 22:43:44.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:43:44.376: INFO: namespace: e2e-tests-pods-2ccs6, resource: bindings, ignored listing per whitelist +Jun 23 22:43:44.449: INFO: namespace e2e-tests-pods-2ccs6 deletion completed in 22.092258362s + +• [SLOW TEST:26.696 seconds] +[k8s.io] Pods +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:43:44.450: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name projected-configmap-test-volume-map-5b2f22dd-9608-11e9-9086-ba438756bc32 +STEP: Creating a pod to test consume configMaps +Jun 23 22:43:44.533: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5b2fb06b-9608-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-jn8v9" to be "success or failure" +Jun 23 22:43:44.536: INFO: Pod "pod-projected-configmaps-5b2fb06b-9608-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.788312ms +Jun 23 22:43:46.539: INFO: Pod "pod-projected-configmaps-5b2fb06b-9608-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006442071s +Jun 23 22:43:48.543: INFO: Pod "pod-projected-configmaps-5b2fb06b-9608-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010251424s +STEP: Saw pod success +Jun 23 22:43:48.543: INFO: Pod "pod-projected-configmaps-5b2fb06b-9608-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 22:43:48.546: INFO: Trying to get logs from node minion pod pod-projected-configmaps-5b2fb06b-9608-11e9-9086-ba438756bc32 container projected-configmap-volume-test: +STEP: delete the pod +Jun 23 22:43:48.565: INFO: Waiting for pod pod-projected-configmaps-5b2fb06b-9608-11e9-9086-ba438756bc32 to disappear +Jun 23 22:43:48.568: INFO: Pod pod-projected-configmaps-5b2fb06b-9608-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:43:48.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-jn8v9" for this suite. +Jun 23 22:43:54.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:43:54.631: INFO: namespace: e2e-tests-projected-jn8v9, resource: bindings, ignored listing per whitelist +Jun 23 22:43:54.664: INFO: namespace e2e-tests-projected-jn8v9 deletion completed in 6.092421921s + +• [SLOW TEST:10.214 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 + should be consumable from pods in volume with mappings and Item mode set [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:43:54.664: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:43:58.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubelet-test-vzflx" for this suite. 
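The kubelet test above schedules a busybox command that always fails and asserts the container reports a terminated reason. A stand-alone sketch of that state (pod name assumed):

```sh
# Illustrative only: a command exiting non-zero with restartPolicy Never
# leaves the container in a terminated state with a populated reason.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false-demo           # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]
EOF
kubectl get pod bin-false-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'  # "Error"
```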
+Jun 23 22:44:04.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:44:04.794: INFO: namespace: e2e-tests-kubelet-test-vzflx, resource: bindings, ignored listing per whitelist +Jun 23 22:44:04.858: INFO: namespace e2e-tests-kubelet-test-vzflx deletion completed in 6.092680567s + +• [SLOW TEST:10.194 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 + should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:44:04.858: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Jun 23 22:44:04.937: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2swhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-2swhc/configmaps/e2e-watch-test-configmap-a,UID:675a8a57-9608-11e9-8956-98039b22fc2c,ResourceVersion:17064,Generation:0,CreationTimestamp:2019-06-23 22:44:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 23 22:44:04.937: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2swhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-2swhc/configmaps/e2e-watch-test-configmap-a,UID:675a8a57-9608-11e9-8956-98039b22fc2c,ResourceVersion:17064,Generation:0,CreationTimestamp:2019-06-23 22:44:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +STEP: modifying configmap A and ensuring the correct watchers observe the notification +Jun 23 22:44:14.944: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2swhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-2swhc/configmaps/e2e-watch-test-configmap-a,UID:675a8a57-9608-11e9-8956-98039b22fc2c,ResourceVersion:17078,Generation:0,CreationTimestamp:2019-06-23 22:44:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +Jun 23 22:44:14.944: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2swhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-2swhc/configmaps/e2e-watch-test-configmap-a,UID:675a8a57-9608-11e9-8956-98039b22fc2c,ResourceVersion:17078,Generation:0,CreationTimestamp:2019-06-23 22:44:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Jun 23 22:44:24.951: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2swhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-2swhc/configmaps/e2e-watch-test-configmap-a,UID:675a8a57-9608-11e9-8956-98039b22fc2c,ResourceVersion:17092,Generation:0,CreationTimestamp:2019-06-23 22:44:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 23 22:44:24.951: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2swhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-2swhc/configmaps/e2e-watch-test-configmap-a,UID:675a8a57-9608-11e9-8956-98039b22fc2c,ResourceVersion:17092,Generation:0,CreationTimestamp:2019-06-23 22:44:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Jun 23 22:44:34.958: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2swhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-2swhc/configmaps/e2e-watch-test-configmap-a,UID:675a8a57-9608-11e9-8956-98039b22fc2c,ResourceVersion:17106,Generation:0,CreationTimestamp:2019-06-23 22:44:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 23 22:44:34.958: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:e2e-tests-watch-2swhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-2swhc/configmaps/e2e-watch-test-configmap-a,UID:675a8a57-9608-11e9-8956-98039b22fc2c,ResourceVersion:17106,Generation:0,CreationTimestamp:2019-06-23 22:44:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Jun 23 22:44:44.964: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-2swhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-2swhc/configmaps/e2e-watch-test-configmap-b,UID:7f35cff3-9608-11e9-8956-98039b22fc2c,ResourceVersion:17120,Generation:0,CreationTimestamp:2019-06-23 22:44:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 23 22:44:44.964: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-2swhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-2swhc/configmaps/e2e-watch-test-configmap-b,UID:7f35cff3-9608-11e9-8956-98039b22fc2c,ResourceVersion:17120,Generation:0,CreationTimestamp:2019-06-23 22:44:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Jun 23 22:44:54.970: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-2swhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-2swhc/configmaps/e2e-watch-test-configmap-b,UID:7f35cff3-9608-11e9-8956-98039b22fc2c,ResourceVersion:17134,Generation:0,CreationTimestamp:2019-06-23 22:44:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 23 
22:44:54.970: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:e2e-tests-watch-2swhc,SelfLink:/api/v1/namespaces/e2e-tests-watch-2swhc/configmaps/e2e-watch-test-configmap-b,UID:7f35cff3-9608-11e9-8956-98039b22fc2c,ResourceVersion:17134,Generation:0,CreationTimestamp:2019-06-23 22:44:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Data:map[string]string{},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:45:04.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-watch-2swhc" for this suite. +Jun 23 22:45:10.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:45:11.010: INFO: namespace: e2e-tests-watch-2swhc, resource: bindings, ignored listing per whitelist +Jun 23 22:45:11.068: INFO: namespace e2e-tests-watch-2swhc deletion completed in 6.093437587s + +• [SLOW TEST:66.210 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Projected configMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:45:11.069: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating projection with configMap that has name projected-configmap-test-upd-8ed0dff8-9608-11e9-9086-ba438756bc32 +STEP: Creating the pod +STEP: Updating configmap projected-configmap-test-upd-8ed0dff8-9608-11e9-9086-ba438756bc32 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:46:17.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-6vfqn" for this suite. 
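The projected-configMap test above updates a ConfigMap and waits for the change to appear in the projected volume. A sketch of that propagation check, with assumed names (the kubelet syncs the volume on its next period, so the change is eventually visible):

```sh
# Illustrative only: edits to the backing ConfigMap show up in the projected
# volume without restarting the pod.
kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo        # assumed name
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF
kubectl patch configmap demo-cm -p '{"data":{"data-1":"value-2"}}'
kubectl logs -f projected-cm-demo   # output switches to value-2 after the kubelet sync
```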
+Jun 23 22:46:39.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:46:39.603: INFO: namespace: e2e-tests-projected-6vfqn, resource: bindings, ignored listing per whitelist +Jun 23 22:46:39.639: INFO: namespace e2e-tests-projected-6vfqn deletion completed in 22.091636485s + +• [SLOW TEST:88.571 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[k8s.io] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:46:39.640: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:48 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating pod liveness-exec in namespace e2e-tests-container-probe-ppk5s +Jun 23 22:46:43.723: INFO: Started pod liveness-exec in namespace e2e-tests-container-probe-ppk5s +STEP: checking the pod's current state and verifying that restartCount is present +Jun 23 22:46:43.725: INFO: Initial restart count of pod liveness-exec is 0 +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:50:44.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-container-probe-ppk5s" for this suite. 
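The probe test above creates a `liveness-exec` pod and confirms its restart count stays at zero while the `cat /tmp/health` probe keeps succeeding. A minimal equivalent with assumed names:

```sh
# Illustrative pod: the exec liveness probe always succeeds, so the kubelet
# never restarts the container.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo       # assumed name
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-exec-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'  # stays 0
```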
+Jun 23 22:50:50.183: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:50:50.235: INFO: namespace: e2e-tests-container-probe-ppk5s, resource: bindings, ignored listing per whitelist +Jun 23 22:50:50.264: INFO: namespace e2e-tests-container-probe-ppk5s deletion completed in 6.090314345s + +• [SLOW TEST:250.624 seconds] +[k8s.io] Probing container +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:50:50.264: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating the pod +Jun 23 22:50:50.336: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:50:54.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-init-container-h5q7k" for this suite. 
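The init-container test above builds its pod spec in Go (`PodSpec: initContainers in spec.initContainers`); the shape of such a pod, as a sketch with assumed names, is:

```sh
# Illustrative only: init containers run to completion, in order, before the
# app container starts; with restartPolicy Never the pod then runs once.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                # assumed name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app ran"]
EOF
```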
+Jun 23 22:51:00.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:51:00.727: INFO: namespace: e2e-tests-init-container-h5q7k, resource: bindings, ignored listing per whitelist +Jun 23 22:51:00.729: INFO: namespace e2e-tests-init-container-h5q7k deletion completed in 6.091160366s + +• [SLOW TEST:10.465 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:51:00.729: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir 0644 on tmpfs +Jun 23 22:51:00.805: INFO: Waiting up to 5m0s for pod "pod-5f3991a7-9609-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-fvf6c" to be "success or failure" +Jun 23 22:51:00.808: INFO: Pod "pod-5f3991a7-9609-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.849205ms +Jun 23 22:51:02.812: INFO: Pod "pod-5f3991a7-9609-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006330156s +Jun 23 22:51:04.815: INFO: Pod "pod-5f3991a7-9609-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010079442s +STEP: Saw pod success +Jun 23 22:51:04.815: INFO: Pod "pod-5f3991a7-9609-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 22:51:04.818: INFO: Trying to get logs from node minion pod pod-5f3991a7-9609-11e9-9086-ba438756bc32 container test-container: +STEP: delete the pod +Jun 23 22:51:04.838: INFO: Waiting for pod pod-5f3991a7-9609-11e9-9086-ba438756bc32 to disappear +Jun 23 22:51:04.844: INFO: Pod pod-5f3991a7-9609-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:51:04.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-fvf6c" for this suite. 
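The emptyDir test above exercises the (root,0644,tmpfs) combination: a memory-backed emptyDir, written as root, with a 0644 file mode. A stand-alone sketch with assumed names:

```sh
# Illustrative only: medium "Memory" backs the emptyDir with tmpfs, as in the
# (root,0644,tmpfs) case above.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f && mount | grep /mnt"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory             # tmpfs-backed
EOF
kubectl logs emptydir-tmpfs-demo  # shows -rw-r--r-- and a tmpfs mount entry
```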
+Jun 23 22:51:10.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:51:10.906: INFO: namespace: e2e-tests-emptydir-fvf6c, resource: bindings, ignored listing per whitelist +Jun 23 22:51:10.938: INFO: namespace e2e-tests-emptydir-fvf6c deletion completed in 6.091035784s + +• [SLOW TEST:10.209 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (root,0644,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:51:10.939: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Jun 23 22:51:11.016: INFO: Waiting up to 5m0s for pod "downwardapi-volume-654f8d63-9609-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-l45vr" to be "success or failure" +Jun 23 22:51:11.019: INFO: Pod "downwardapi-volume-654f8d63-9609-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.881662ms +Jun 23 22:51:13.023: INFO: Pod "downwardapi-volume-654f8d63-9609-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006944591s +Jun 23 22:51:15.026: INFO: Pod "downwardapi-volume-654f8d63-9609-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010668518s +STEP: Saw pod success +Jun 23 22:51:15.026: INFO: Pod "downwardapi-volume-654f8d63-9609-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 22:51:15.029: INFO: Trying to get logs from node minion pod downwardapi-volume-654f8d63-9609-11e9-9086-ba438756bc32 container client-container: +STEP: delete the pod +Jun 23 22:51:15.048: INFO: Waiting for pod downwardapi-volume-654f8d63-9609-11e9-9086-ba438756bc32 to disappear +Jun 23 22:51:15.050: INFO: Pod downwardapi-volume-654f8d63-9609-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:51:15.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-l45vr" for this suite. +Jun 23 22:51:21.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:51:21.075: INFO: namespace: e2e-tests-downward-api-l45vr, resource: bindings, ignored listing per whitelist +Jun 23 22:51:21.151: INFO: namespace e2e-tests-downward-api-l45vr deletion completed in 6.09667232s + +• [SLOW TEST:10.212 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:51:21.151: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Kubectl replace + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1563 +[It] should update a single-container pod's image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 23 22:51:21.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=e2e-tests-kubectl-gq9ht' +Jun 23 22:51:21.835: INFO: stderr: "" +Jun 23 22:51:21.835: INFO: stdout: "pod/e2e-test-nginx-pod created\n" +STEP: verifying the pod e2e-test-nginx-pod is running +STEP: 
verifying the pod e2e-test-nginx-pod was created +Jun 23 22:51:26.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pod e2e-test-nginx-pod --namespace=e2e-tests-kubectl-gq9ht -o json' +Jun 23 22:51:27.003: INFO: stderr: "" +Jun 23 22:51:27.003: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-06-23T22:51:21Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"e2e-tests-kubectl-gq9ht\",\n \"resourceVersion\": \"17794\",\n \"selfLink\": \"/api/v1/namespaces/e2e-tests-kubectl-gq9ht/pods/e2e-test-nginx-pod\",\n \"uid\": \"6bbfde9a-9609-11e9-8956-98039b22fc2c\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-bjck7\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"minion\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-bjck7\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-bjck7\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-23T22:51:21Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-23T22:51:24Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-23T22:51:24Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-23T22:51:21Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://eae3c7d5668e0b86305a40e73a5be73493908eecf35d987f821891e734b408e1\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-06-23T22:51:23Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.197.149.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.251.128.6\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-06-23T22:51:21Z\"\n }\n}\n" +STEP: replace the image in the pod +Jun 23 22:51:27.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 replace -f - --namespace=e2e-tests-kubectl-gq9ht' +Jun 23 22:51:27.246: INFO: stderr: "" +Jun 23 22:51:27.246: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" +STEP: verifying the pod 
e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 +[AfterEach] [k8s.io] Kubectl replace + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1568 +Jun 23 22:51:27.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete pods e2e-test-nginx-pod --namespace=e2e-tests-kubectl-gq9ht' +Jun 23 22:51:33.769: INFO: stderr: "" +Jun 23 22:51:33.769: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:51:33.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-gq9ht" for this suite. +Jun 23 22:51:39.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:51:39.798: INFO: namespace: e2e-tests-kubectl-gq9ht, resource: bindings, ignored listing per whitelist +Jun 23 22:51:39.864: INFO: namespace e2e-tests-kubectl-gq9ht deletion completed in 6.089986453s + +• [SLOW TEST:18.713 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Kubectl replace + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should update a single-container pod's image [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:51:39.864: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating the pod +Jun 23 22:51:39.937: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:51:45.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-init-container-rmvgr" for this suite. 
+Jun 23 22:52:07.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:52:07.907: INFO: namespace: e2e-tests-init-container-rmvgr, resource: bindings, ignored listing per whitelist +Jun 23 22:52:07.925: INFO: namespace e2e-tests-init-container-rmvgr deletion completed in 22.090779898s + +• [SLOW TEST:28.060 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:52:07.925: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace e2e-tests-statefulset-2dlww +[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Initializing watcher for selector baz=blah,foo=bar +STEP: Creating stateful set ss in namespace e2e-tests-statefulset-2dlww +STEP: Waiting until all stateful set ss replicas will be running in namespace e2e-tests-statefulset-2dlww +Jun 23 22:52:08.008: INFO: Found 0 stateful pods, waiting for 1 +Jun 23 22:52:18.012: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod +Jun 23 22:52:18.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 23 22:52:18.384: INFO: stderr: "" +Jun 23 22:52:18.384: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 23 22:52:18.384: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 23 22:52:18.388: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jun 23 22:52:18.388: INFO: Waiting for statefulset 
status.replicas updated to 0 +Jun 23 22:52:18.401: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999375s +Jun 23 22:52:19.405: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996738181s +Jun 23 22:52:20.409: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.992845553s +Jun 23 22:52:21.413: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.988969884s +Jun 23 22:52:22.417: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.985068254s +Jun 23 22:52:23.421: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.981084307s +Jun 23 22:52:24.425: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.977164386s +Jun 23 22:52:25.428: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.973243252s +Jun 23 22:52:26.432: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.969474408s +Jun 23 22:52:27.437: INFO: Verifying statefulset ss doesn't scale past 1 for another 965.437659ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace e2e-tests-statefulset-2dlww +Jun 23 22:52:28.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:52:28.793: INFO: stderr: "" +Jun 23 22:52:28.793: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 23 22:52:28.793: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 23 22:52:28.797: INFO: Found 1 stateful pods, waiting for 3 +Jun 23 22:52:38.802: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 23 22:52:38.802: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 23 22:52:38.802: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Verifying that stateful set ss was scaled up in order +STEP: Scale down will halt with unhealthy stateful pod +Jun 23 22:52:38.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-0 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 23 22:52:39.149: INFO: stderr: "" +Jun 23 22:52:39.149: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 23 22:52:39.149: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 23 22:52:39.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-1 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 23 22:52:39.503: INFO: stderr: "" +Jun 23 22:52:39.503: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 23 22:52:39.503: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 23 22:52:39.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 23 22:52:39.840: INFO: stderr: "" +Jun 23 22:52:39.840: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" 
+Jun 23 22:52:39.840: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 23 22:52:39.840: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 23 22:52:39.844: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 +Jun 23 22:52:49.852: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jun 23 22:52:49.852: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Jun 23 22:52:49.852: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Jun 23 22:52:49.862: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999454s +Jun 23 22:52:50.867: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99609914s +Jun 23 22:52:51.871: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991653963s +Jun 23 22:52:52.876: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987332759s +Jun 23 22:52:53.880: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982732445s +Jun 23 22:52:54.885: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.978218078s +Jun 23 22:52:55.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.973853575s +Jun 23 22:52:56.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.96953101s +Jun 23 22:52:57.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.964925092s +Jun 23 22:52:58.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 960.451885ms +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacee2e-tests-statefulset-2dlww +Jun 23 22:52:59.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-0 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:53:00.275: INFO: stderr: "" +Jun 23 22:53:00.275: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 23 22:53:00.275: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 23 22:53:00.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-1 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:53:00.632: INFO: stderr: "" +Jun 23 22:53:00.632: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 23 22:53:00.633: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 23 22:53:00.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:53:00.981: INFO: rc: 126 +Jun 23 22:53:00.981: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "process_linux.go:86: executing setns process caused \"exit status 21\"": unknown + command terminated with exit 
code 126 + [] 0xc001bdaba0 exit status 126 true [0xc001a6a730 0xc001a6a748 0xc001a6a760] [0xc001a6a730 0xc001a6a748 0xc001a6a760] [0xc001a6a740 0xc001a6a758] [0x92f8e0 0x92f8e0] 0xc0020ec900 }: +Command stdout: +OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "process_linux.go:86: executing setns process caused \"exit status 21\"": unknown + +stderr: +command terminated with exit code 126 + +error: +exit status 126 + +Jun 23 22:53:10.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:53:11.190: INFO: rc: 1 +Jun 23 22:53:11.190: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") + [] 0xc0027df6b0 exit status 1 true [0xc002a40300 0xc002a40318 0xc002a40330] [0xc002a40300 0xc002a40318 0xc002a40330] [0xc002a40310 0xc002a40328] [0x92f8e0 0x92f8e0] 0xc001de29c0 }: +Command stdout: + +stderr: +error: unable to upgrade connection: container not found ("nginx") + +error: +exit status 1 + +Jun 23 22:53:21.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:53:21.321: INFO: rc: 1 +Jun 23 22:53:21.321: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc00263f5f0 exit status 1 true [0xc00000f380 0xc00000f3c8 0xc00000f400] [0xc00000f380 0xc00000f3c8 0xc00000f400] [0xc00000f3c0 0xc00000f3f0] [0x92f8e0 0x92f8e0] 0xc002179260 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:53:31.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:53:31.452: INFO: rc: 1 +Jun 23 22:53:31.452: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc00263f9b0 exit status 1 true [0xc00000f410 0xc00000f470 0xc00000f4b8] [0xc00000f410 0xc00000f470 0xc00000f4b8] [0xc00000f458 0xc00000f4a8] [0x92f8e0 0x92f8e0] 0xc002179560 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:53:41.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:53:41.583: INFO: rc: 1 +Jun 23 22:53:41.584: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec 
--namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc002853ef0 exit status 1 true [0xc001b48998 0xc001b489b0 0xc001b489c8] [0xc001b48998 0xc001b489b0 0xc001b489c8] [0xc001b489a8 0xc001b489c0] [0x92f8e0 0x92f8e0] 0xc0021ad560 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:53:51.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:53:51.701: INFO: rc: 1 +Jun 23 22:53:51.701: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000916a50 exit status 1 true [0xc0004e2128 0xc00000e190 0xc00000e210] [0xc0004e2128 0xc00000e190 0xc00000e210] [0xc00000e148 0xc00000e1f8] [0x92f8e0 0x92f8e0] 0xc001d26480 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:54:01.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:54:01.823: INFO: rc: 1 +Jun 23 22:54:01.823: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000916f00 exit status 1 true [0xc00000e230 0xc00000e250 0xc00000e2b8] [0xc00000e230 0xc00000e250 0xc00000e2b8] [0xc00000e240 0xc00000e2a8] [0x92f8e0 0x92f8e0] 0xc001d26b40 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:54:11.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:54:11.940: INFO: rc: 1 +Jun 23 22:54:11.940: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000bda570 exit status 1 true [0xc001b26000 0xc001b26018 0xc001b26030] [0xc001b26000 0xc001b26018 0xc001b26030] [0xc001b26010 0xc001b26028] [0x92f8e0 0x92f8e0] 0xc001b04240 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:54:21.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:54:22.073: INFO: rc: 1 +Jun 23 22:54:22.073: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v 
/tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000dae720 exit status 1 true [0xc0001a6000 0xc0001a7068 0xc0001a7130] [0xc0001a6000 0xc0001a7068 0xc0001a7130] [0xc0001a7058 0xc0001a70b8] [0x92f8e0 0x92f8e0] 0xc0020c6480 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:54:32.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:54:32.219: INFO: rc: 1 +Jun 23 22:54:32.219: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc002686450 exit status 1 true [0xc0020dc000 0xc0020dc018 0xc0020dc030] [0xc0020dc000 0xc0020dc018 0xc0020dc030] [0xc0020dc010 0xc0020dc028] [0x92f8e0 0x92f8e0] 0xc0020546c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:54:42.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:54:42.351: INFO: rc: 1 +Jun 23 22:54:42.351: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000917290 exit status 1 true [0xc00000e308 0xc00000e4a8 0xc00000e558] [0xc00000e308 0xc00000e4a8 0xc00000e558] [0xc00000e3c8 0xc00000e518] [0x92f8e0 0x92f8e0] 0xc001d27200 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:54:52.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:54:52.479: INFO: rc: 1 +Jun 23 22:54:52.479: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000bda990 exit status 1 true [0xc001b26038 0xc001b26050 0xc001b26068] [0xc001b26038 0xc001b26050 0xc001b26068] [0xc001b26048 0xc001b26060] [0x92f8e0 0x92f8e0] 0xc001b045a0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:55:02.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:55:02.589: INFO: rc: 1 +Jun 23 22:55:02.589: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server 
(NotFound): pods "ss-2" not found + [] 0xc000daeba0 exit status 1 true [0xc0001a7178 0xc0001a7230 0xc0001a72b0] [0xc0001a7178 0xc0001a7230 0xc0001a72b0] [0xc0001a71f8 0xc0001a7280] [0x92f8e0 0x92f8e0] 0xc0020c6960 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:55:12.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:55:12.719: INFO: rc: 1 +Jun 23 22:55:12.719: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009176e0 exit status 1 true [0xc00000e588 0xc00000e608 0xc00000e640] [0xc00000e588 0xc00000e608 0xc00000e640] [0xc00000e5f0 0xc00000e630] [0x92f8e0 0x92f8e0] 0xc001d27740 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:55:22.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:55:22.851: INFO: rc: 1 +Jun 23 22:55:22.851: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000bdad20 exit status 1 true [0xc001b26070 0xc001b26088 0xc001b260a0] [0xc001b26070 0xc001b26088 0xc001b260a0] [0xc001b26080 0xc001b26098] [0x92f8e0 0x92f8e0] 0xc001b04960 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:55:32.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:55:32.980: INFO: rc: 1 +Jun 23 22:55:32.981: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000917ad0 exit status 1 true [0xc00000e6b8 0xc00000e770 0xc00000e7e8] [0xc00000e6b8 0xc00000e770 0xc00000e7e8] [0xc00000e740 0xc00000e798] [0x92f8e0 0x92f8e0] 0xc0024c80c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:55:42.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:55:43.111: INFO: rc: 1 +Jun 23 22:55:43.111: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000daf080 exit status 1 true 
[0xc0001a72b8 0xc0001a72f8 0xc0001a7348] [0xc0001a72b8 0xc0001a72f8 0xc0001a7348] [0xc0001a72e0 0xc0001a7330] [0x92f8e0 0x92f8e0] 0xc0020c6d80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:55:53.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:55:53.246: INFO: rc: 1 +Jun 23 22:55:53.246: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc002686480 exit status 1 true [0xc0004e2128 0xc0020dc010 0xc0020dc028] [0xc0004e2128 0xc0020dc010 0xc0020dc028] [0xc0020dc008 0xc0020dc020] [0x92f8e0 0x92f8e0] 0xc001d26480 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:56:03.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:56:03.386: INFO: rc: 1 +Jun 23 22:56:03.386: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000916ab0 exit status 1 true [0xc00000e050 0xc00000e1b0 0xc00000e230] [0xc00000e050 0xc00000e1b0 0xc00000e230] [0xc00000e190 0xc00000e210] [0x92f8e0 0x92f8e0] 0xc0020546c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:56:13.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:56:13.497: INFO: rc: 1 +Jun 23 22:56:13.497: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000bda5a0 exit status 1 true [0xc001b26000 0xc001b26018 0xc001b26030] [0xc001b26000 0xc001b26018 0xc001b26030] [0xc001b26010 0xc001b26028] [0x92f8e0 0x92f8e0] 0xc0024c8ae0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:56:23.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:56:23.629: INFO: rc: 1 +Jun 23 22:56:23.629: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000bda9c0 exit status 1 true [0xc001b26038 0xc001b26050 0xc001b26068] [0xc001b26038 0xc001b26050 
0xc001b26068] [0xc001b26048 0xc001b26060] [0x92f8e0 0x92f8e0] 0xc0024c94a0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:56:33.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:56:33.754: INFO: rc: 1 +Jun 23 22:56:33.754: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000dae750 exit status 1 true [0xc0001a6000 0xc0001a7068 0xc0001a7130] [0xc0001a6000 0xc0001a7068 0xc0001a7130] [0xc0001a7058 0xc0001a70b8] [0x92f8e0 0x92f8e0] 0xc001b04240 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:56:43.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:56:43.875: INFO: rc: 1 +Jun 23 22:56:43.875: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0026868a0 exit status 1 true [0xc0020dc030 0xc0020dc048 0xc0020dc060] [0xc0020dc030 0xc0020dc048 0xc0020dc060] [0xc0020dc040 0xc0020dc058] [0x92f8e0 0x92f8e0] 0xc001d26b40 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:56:53.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:56:53.980: INFO: rc: 1 +Jun 23 22:56:53.980: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc002686c30 exit status 1 true [0xc0020dc068 0xc0020dc080 0xc0020dc098] [0xc0020dc068 0xc0020dc080 0xc0020dc098] [0xc0020dc078 0xc0020dc090] [0x92f8e0 0x92f8e0] 0xc001d27200 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:57:03.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:57:04.106: INFO: rc: 1 +Jun 23 22:57:04.106: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000916ff0 exit status 1 true [0xc00000e238 0xc00000e258 0xc00000e308] [0xc00000e238 0xc00000e258 0xc00000e308] [0xc00000e250 0xc00000e2b8] [0x92f8e0 0x92f8e0] 0xc002054b40 
}: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:57:14.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:57:14.235: INFO: rc: 1 +Jun 23 22:57:14.235: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc002686ff0 exit status 1 true [0xc0020dc0a0 0xc0020dc0b8 0xc0020dc0d0] [0xc0020dc0a0 0xc0020dc0b8 0xc0020dc0d0] [0xc0020dc0b0 0xc0020dc0c8] [0x92f8e0 0x92f8e0] 0xc001d27740 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:57:24.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:57:24.361: INFO: rc: 1 +Jun 23 22:57:24.361: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc002687380 exit status 1 true [0xc0020dc0d8 0xc0020dc0f0 0xc0020dc108] [0xc0020dc0d8 0xc0020dc0f0 0xc0020dc108] [0xc0020dc0e8 0xc0020dc100] [0x92f8e0 0x92f8e0] 0xc0020c6000 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:57:34.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:57:34.493: INFO: rc: 1 +Jun 23 22:57:34.493: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc0009173b0 exit status 1 true [0xc00000e310 0xc00000e510 0xc00000e588] [0xc00000e310 0xc00000e510 0xc00000e588] [0xc00000e4a8 0xc00000e558] [0x92f8e0 0x92f8e0] 0xc002054f00 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:57:44.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:57:44.628: INFO: rc: 1 +Jun 23 22:57:44.628: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000917800 exit status 1 true [0xc00000e5e0 0xc00000e610 0xc00000e6b8] [0xc00000e5e0 0xc00000e610 0xc00000e6b8] [0xc00000e608 0xc00000e640] [0x92f8e0 0x92f8e0] 0xc002055200 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" 
not found + +error: +exit status 1 + +Jun 23 22:57:54.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:57:54.748: INFO: rc: 1 +Jun 23 22:57:54.748: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found + [] 0xc000dae720 exit status 1 true [0xc0004e2128 0xc0001a7058 0xc0001a70b8] [0xc0004e2128 0xc0001a7058 0xc0001a70b8] [0xc0001a7040 0xc0001a70b0] [0x92f8e0 0x92f8e0] 0xc001d26480 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-2" not found + +error: +exit status 1 + +Jun 23 22:58:04.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 exec --namespace=e2e-tests-statefulset-2dlww ss-2 -- /bin/sh -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 23 22:58:04.864: INFO: rc: 1 +Jun 23 22:58:04.864: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: +Jun 23 22:58:04.864: INFO: Scaling statefulset ss to 0 +STEP: Verifying that stateful set ss was scaled down in reverse order +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Jun 23 22:58:04.876: INFO: Deleting all statefulset in ns e2e-tests-statefulset-2dlww +Jun 23 22:58:04.879: INFO: Scaling statefulset ss to 0 +Jun 23 22:58:04.888: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 23 22:58:04.891: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:58:04.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-statefulset-2dlww" for this suite. 
+Jun 23 22:58:10.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:58:10.959: INFO: namespace: e2e-tests-statefulset-2dlww, resource: bindings, ignored listing per whitelist +Jun 23 22:58:11.005: INFO: namespace e2e-tests-statefulset-2dlww deletion completed in 6.099585591s + +• [SLOW TEST:363.079 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should release no longer matching pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:58:11.005: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should release no longer matching pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Given a ReplicationController is created +STEP: When the matched label of one of its pods change +Jun 23 22:58:11.082: INFO: Pod name pod-release: Found 0 pods out of 1 +Jun 23 22:58:16.086: INFO: Pod name pod-release: Found 1 pods out of 1 +STEP: Then the pod is released +[AfterEach] [sig-apps] ReplicationController + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:58:17.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-replication-controller-pzzfb" for this suite. 
+Jun 23 22:58:23.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:58:23.198: INFO: namespace: e2e-tests-replication-controller-pzzfb, resource: bindings, ignored listing per whitelist +Jun 23 22:58:23.198: INFO: namespace e2e-tests-replication-controller-pzzfb deletion completed in 6.095052066s + +• [SLOW TEST:12.193 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should release no longer matching pods [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:58:23.198: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should set mode on item file [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward API volume plugin +Jun 23 22:58:23.275: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66f53060-960a-11e9-9086-ba438756bc32" in namespace "e2e-tests-projected-mqskl" to be "success or failure" +Jun 23 22:58:23.278: INFO: Pod "downwardapi-volume-66f53060-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 3.022108ms +Jun 23 22:58:25.282: INFO: Pod "downwardapi-volume-66f53060-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00674314s +Jun 23 22:58:27.285: INFO: Pod "downwardapi-volume-66f53060-960a-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010114665s +STEP: Saw pod success +Jun 23 22:58:27.286: INFO: Pod "downwardapi-volume-66f53060-960a-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 22:58:27.288: INFO: Trying to get logs from node minion pod downwardapi-volume-66f53060-960a-11e9-9086-ba438756bc32 container client-container: +STEP: delete the pod +Jun 23 22:58:27.307: INFO: Waiting for pod downwardapi-volume-66f53060-960a-11e9-9086-ba438756bc32 to disappear +Jun 23 22:58:27.310: INFO: Pod downwardapi-volume-66f53060-960a-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:58:27.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-mqskl" for this suite. +Jun 23 22:58:33.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:58:33.349: INFO: namespace: e2e-tests-projected-mqskl, resource: bindings, ignored listing per whitelist +Jun 23 22:58:33.407: INFO: namespace e2e-tests-projected-mqskl deletion completed in 6.093658711s + +• [SLOW TEST:10.209 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should set mode on item file [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:58:33.408: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir 0777 on node default medium +Jun 23 22:58:33.485: INFO: Waiting up to 5m0s for pod "pod-6d0b025b-960a-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-gpdsd" to be "success or failure" +Jun 23 22:58:33.487: INFO: Pod "pod-6d0b025b-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.759189ms +Jun 23 22:58:35.491: INFO: Pod "pod-6d0b025b-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006650016s +Jun 23 22:58:37.495: INFO: Pod "pod-6d0b025b-960a-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010094444s +STEP: Saw pod success +Jun 23 22:58:37.495: INFO: Pod "pod-6d0b025b-960a-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 22:58:37.498: INFO: Trying to get logs from node minion pod pod-6d0b025b-960a-11e9-9086-ba438756bc32 container test-container: +STEP: delete the pod +Jun 23 22:58:37.515: INFO: Waiting for pod pod-6d0b025b-960a-11e9-9086-ba438756bc32 to disappear +Jun 23 22:58:37.518: INFO: Pod pod-6d0b025b-960a-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:58:37.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-gpdsd" for this suite. +Jun 23 22:58:43.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:58:43.590: INFO: namespace: e2e-tests-emptydir-gpdsd, resource: bindings, ignored listing per whitelist +Jun 23 22:58:43.616: INFO: namespace e2e-tests-emptydir-gpdsd deletion completed in 6.094213406s + +• [SLOW TEST:10.208 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (non-root,0777,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Proxy server + should support --unix-socket=/path [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:58:43.617: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[It] should support --unix-socket=/path [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Starting the proxy +Jun 23 22:58:43.688: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-365229432 proxy --unix-socket=/tmp/kubectl-proxy-unix927825015/test' +STEP: retrieving proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:58:43.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-grgwn" for this suite. 
+Jun 23 22:58:49.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:58:49.846: INFO: namespace: e2e-tests-kubectl-grgwn, resource: bindings, ignored listing per whitelist +Jun 23 22:58:49.877: INFO: namespace e2e-tests-kubectl-grgwn deletion completed in 6.094396804s + +• [SLOW TEST:6.260 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22 + [k8s.io] Proxy server + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + should support --unix-socket=/path [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:58:49.877: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating the pod +Jun 23 22:58:54.481: INFO: Successfully updated pod "labelsupdate76dbe582-960a-11e9-9086-ba438756bc32" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:58:56.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-projected-f8ncn" for this suite. 
+Jun 23 22:59:18.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:59:18.569: INFO: namespace: e2e-tests-projected-f8ncn, resource: bindings, ignored listing per whitelist +Jun 23 22:59:18.609: INFO: namespace e2e-tests-projected-f8ncn deletion completed in 22.097057267s + +• [SLOW TEST:28.732 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-api-machinery] Secrets + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-api-machinery] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:59:18.609: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating secret e2e-tests-secrets-w5vfz/secret-test-87fe7cf0-960a-11e9-9086-ba438756bc32 +STEP: Creating a pod to test consume secrets +Jun 23 22:59:18.709: INFO: Waiting up to 5m0s for pod "pod-configmaps-87ffc897-960a-11e9-9086-ba438756bc32" in namespace "e2e-tests-secrets-w5vfz" to be "success or failure" +Jun 23 22:59:18.712: INFO: Pod "pod-configmaps-87ffc897-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627254ms +Jun 23 22:59:20.716: INFO: Pod "pod-configmaps-87ffc897-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00615185s +Jun 23 22:59:22.719: INFO: Pod "pod-configmaps-87ffc897-960a-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009595823s +STEP: Saw pod success +Jun 23 22:59:22.719: INFO: Pod "pod-configmaps-87ffc897-960a-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 22:59:22.722: INFO: Trying to get logs from node minion pod pod-configmaps-87ffc897-960a-11e9-9086-ba438756bc32 container env-test: +STEP: delete the pod +Jun 23 22:59:22.740: INFO: Waiting for pod pod-configmaps-87ffc897-960a-11e9-9086-ba438756bc32 to disappear +Jun 23 22:59:22.742: INFO: Pod pod-configmaps-87ffc897-960a-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [sig-api-machinery] Secrets + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:59:22.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-secrets-w5vfz" for this suite. 
+Jun 23 22:59:28.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:59:28.823: INFO: namespace: e2e-tests-secrets-w5vfz, resource: bindings, ignored listing per whitelist +Jun 23 22:59:28.837: INFO: namespace e2e-tests-secrets-w5vfz deletion completed in 6.090893012s + +• [SLOW TEST:10.228 seconds] +[sig-api-machinery] Secrets +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32 + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:59:28.837: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward api env vars +Jun 23 22:59:28.914: INFO: Waiting up to 5m0s for pod "downward-api-8e14da43-960a-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-jpnpt" to be "success or failure" +Jun 23 22:59:28.916: INFO: Pod "downward-api-8e14da43-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.650404ms +Jun 23 22:59:30.920: INFO: Pod "downward-api-8e14da43-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006374299s +Jun 23 22:59:32.924: INFO: Pod "downward-api-8e14da43-960a-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010182429s +STEP: Saw pod success +Jun 23 22:59:32.924: INFO: Pod "downward-api-8e14da43-960a-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 22:59:32.927: INFO: Trying to get logs from node minion pod downward-api-8e14da43-960a-11e9-9086-ba438756bc32 container dapi-container: +STEP: delete the pod +Jun 23 22:59:32.945: INFO: Waiting for pod downward-api-8e14da43-960a-11e9-9086-ba438756bc32 to disappear +Jun 23 22:59:32.950: INFO: Pod downward-api-8e14da43-960a-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:59:32.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-jpnpt" for this suite. 
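+
+The downward-API run above exposes the pod's own UID to its container as an environment
+variable through a `fieldRef`. A sketch of that wiring (name and image are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-api-demo        # hypothetical name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox
+    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
+    env:
+    - name: POD_UID
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.uid
+```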
+Jun 23 22:59:38.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 22:59:39.025: INFO: namespace: e2e-tests-downward-api-jpnpt, resource: bindings, ignored listing per whitelist +Jun 23 22:59:39.043: INFO: namespace e2e-tests-downward-api-jpnpt deletion completed in 6.089382722s + +• [SLOW TEST:10.206 seconds] +[sig-node] Downward API +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 22:59:39.043: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating the pod +Jun 23 22:59:43.647: INFO: Successfully updated pod "labelsupdate942a2ba1-960a-11e9-9086-ba438756bc32" +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 22:59:45.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-6lzf9" for this suite. 
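+
+This run repeats the label-update check with the plain `downwardAPI` volume type rather
+than a projected volume; only the volume stanza differs from the earlier projected sketch.
+A self-contained illustration (names are hypothetical):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: labelsupdate-vol-demo    # hypothetical name
+  labels:
+    key: value1
+spec:
+  containers:
+  - name: client
+    image: busybox
+    command: ["sh", "-c", "sleep 3600"]
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: labels
+        fieldRef:
+          fieldPath: metadata.labels
+```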
+Jun 23 23:00:07.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 23:00:07.731: INFO: namespace: e2e-tests-downward-api-6lzf9, resource: bindings, ignored listing per whitelist +Jun 23 23:00:07.759: INFO: namespace e2e-tests-downward-api-6lzf9 deletion completed in 22.088787303s + +• [SLOW TEST:28.716 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0644,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 23:00:07.760: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0644,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir 0644 on node default medium +Jun 23 23:00:07.835: INFO: Waiting up to 5m0s for pod "pod-a547d782-960a-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-w55vv" to be "success or failure" +Jun 23 23:00:07.838: INFO: Pod "pod-a547d782-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.519703ms +Jun 23 23:00:09.841: INFO: Pod "pod-a547d782-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005861999s +Jun 23 23:00:11.845: INFO: Pod "pod-a547d782-960a-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009399999s +STEP: Saw pod success +Jun 23 23:00:11.845: INFO: Pod "pod-a547d782-960a-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 23:00:11.848: INFO: Trying to get logs from node minion pod pod-a547d782-960a-11e9-9086-ba438756bc32 container test-container: +STEP: delete the pod +Jun 23 23:00:11.865: INFO: Waiting for pod pod-a547d782-960a-11e9-9086-ba438756bc32 to disappear +Jun 23 23:00:11.868: INFO: Pod pod-a547d782-960a-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 23:00:11.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-w55vv" for this suite. 
+Jun 23 23:00:17.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 23:00:17.901: INFO: namespace: e2e-tests-emptydir-w55vv, resource: bindings, ignored listing per whitelist +Jun 23 23:00:17.963: INFO: namespace e2e-tests-emptydir-w55vv deletion completed in 6.090790263s + +• [SLOW TEST:10.203 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (non-root,0644,default) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 23:00:17.963: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test emptydir 0666 on tmpfs +Jun 23 23:00:18.041: INFO: Waiting up to 5m0s for pod "pod-ab5d15a3-960a-11e9-9086-ba438756bc32" in namespace "e2e-tests-emptydir-z4pwp" to be "success or failure" +Jun 23 23:00:18.044: INFO: Pod "pod-ab5d15a3-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.700936ms +Jun 23 23:00:20.048: INFO: Pod "pod-ab5d15a3-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006195805s +Jun 23 23:00:22.051: INFO: Pod "pod-ab5d15a3-960a-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00977905s +STEP: Saw pod success +Jun 23 23:00:22.051: INFO: Pod "pod-ab5d15a3-960a-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 23:00:22.054: INFO: Trying to get logs from node minion pod pod-ab5d15a3-960a-11e9-9086-ba438756bc32 container test-container: +STEP: delete the pod +Jun 23 23:00:22.072: INFO: Waiting for pod pod-ab5d15a3-960a-11e9-9086-ba438756bc32 to disappear +Jun 23 23:00:22.077: INFO: Pod pod-ab5d15a3-960a-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 23:00:22.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-emptydir-z4pwp" for this suite. 
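+
+The two emptyDir runs above differ only in the storage medium (node default vs. tmpfs);
+the mode bits in the test names (0644, 0666) are file permissions the test container
+creates and checks inside the mount, not manifest fields. A sketch showing both media
+side by side (pod name is illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-demo            # hypothetical name
+spec:
+  restartPolicy: Never
+  securityContext:
+    runAsUser: 1000              # the "non-root" variant of the test
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["sh", "-c", "ls -ld /mnt/disk /mnt/tmpfs"]
+    volumeMounts:
+    - name: disk-backed
+      mountPath: /mnt/disk
+    - name: memory-backed
+      mountPath: /mnt/tmpfs
+  volumes:
+  - name: disk-backed
+    emptyDir: {}                 # node's default medium
+  - name: memory-backed
+    emptyDir:
+      medium: Memory             # tmpfs-backed
+```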
+Jun 23 23:00:28.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 23:00:28.165: INFO: namespace: e2e-tests-emptydir-z4pwp, resource: bindings, ignored listing per whitelist +Jun 23 23:00:28.173: INFO: namespace e2e-tests-emptydir-z4pwp deletion completed in 6.091409472s + +• [SLOW TEST:10.210 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 + should support (non-root,0666,tmpfs) [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 23:00:28.173: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +Jun 23 23:00:28.244: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Jun 23 23:00:28.250: INFO: Pod name sample-pod: Found 0 pods out of 1 +Jun 23 23:00:33.254: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Jun 23 23:00:33.255: INFO: Creating deployment "test-rolling-update-deployment" +Jun 23 23:00:33.258: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Jun 23 23:00:33.265: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Jun 23 23:00:35.272: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Jun 23 23:00:35.275: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696927633, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696927633, loc:(*time.Location)(0x7b33b80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696927633, loc:(*time.Location)(0x7b33b80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696927633, loc:(*time.Location)(0x7b33b80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rolling-update-deployment-68b55d7bc6\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 23 23:00:37.279: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 23 23:00:37.288: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:e2e-tests-deployment-jmq88,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jmq88/deployments/test-rolling-update-deployment,UID:b4709056-960a-11e9-8956-98039b22fc2c,ResourceVersion:19069,Generation:1,CreationTimestamp:2019-06-23 23:00:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-06-23 23:00:33 +0000 UTC 2019-06-23 23:00:33 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-06-23 23:00:35 +0000 UTC 2019-06-23 23:00:33 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-68b55d7bc6" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} + +Jun 23 23:00:37.292: INFO: New ReplicaSet "test-rolling-update-deployment-68b55d7bc6" of Deployment "test-rolling-update-deployment": 
+&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-68b55d7bc6,GenerateName:,Namespace:e2e-tests-deployment-jmq88,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jmq88/replicasets/test-rolling-update-deployment-68b55d7bc6,UID:b473162d-960a-11e9-8956-98039b22fc2c,ResourceVersion:19060,Generation:1,CreationTimestamp:2019-06-23 23:00:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b4709056-960a-11e9-8956-98039b22fc2c 0xc0022a8387 0xc0022a8388}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} +Jun 23 23:00:37.292: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Jun 23 23:00:37.292: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:e2e-tests-deployment-jmq88,SelfLink:/apis/apps/v1/namespaces/e2e-tests-deployment-jmq88/replicasets/test-rolling-update-controller,UID:b173fd4a-960a-11e9-8956-98039b22fc2c,ResourceVersion:19068,Generation:2,CreationTimestamp:2019-06-23 23:00:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment b4709056-960a-11e9-8956-98039b22fc2c 0xc0022a82c7 
0xc0022a82c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 23 23:00:37.296: INFO: Pod "test-rolling-update-deployment-68b55d7bc6-dt9n9" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-68b55d7bc6-dt9n9,GenerateName:test-rolling-update-deployment-68b55d7bc6-,Namespace:e2e-tests-deployment-jmq88,SelfLink:/api/v1/namespaces/e2e-tests-deployment-jmq88/pods/test-rolling-update-deployment-68b55d7bc6-dt9n9,UID:b4739573-960a-11e9-8956-98039b22fc2c,ResourceVersion:19059,Generation:0,CreationTimestamp:2019-06-23 23:00:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 68b55d7bc6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-68b55d7bc6 b473162d-960a-11e9-8956-98039b22fc2c 0xc0022a8f77 0xc0022a8f78}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-p2xhj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p2xhj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-p2xhj true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0022a8ff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0022a9010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 23:00:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 23:00:35 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 23:00:35 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-23 23:00:33 +0000 UTC }],Message:,Reason:,HostIP:10.197.149.12,PodIP:10.251.128.7,StartTime:2019-06-23 23:00:33 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-06-23 23:00:34 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://01c2790bef3e01257a5d35cda4fe9cb47e044c644de6a37de8d3cb2b06727786}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 23:00:37.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-deployment-jmq88" for this suite. 
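+
+The deployment dump above is hard to read because of Go's fmt escaping (the
+`25%!,(MISSING)` fragments appear to be the literal value `25%` tripping the `%` verb).
+Reconstructed from the logged fields, the deployment under test looks roughly like this:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: test-rolling-update-deployment
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      name: sample-pod
+  strategy:
+    type: RollingUpdate
+    rollingUpdate:
+      maxUnavailable: 25%
+      maxSurge: 25%
+  template:
+    metadata:
+      labels:
+        name: sample-pod
+    spec:
+      containers:
+      - name: redis
+        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
+```
+
+The rolling update replaces the adopted nginx ReplicaSet's pod with a redis pod while
+keeping at least 75% of the desired replicas available at every step.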
+Jun 23 23:00:43.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 23:00:43.361: INFO: namespace: e2e-tests-deployment-jmq88, resource: bindings, ignored listing per whitelist +Jun 23 23:00:43.395: INFO: namespace e2e-tests-deployment-jmq88 deletion completed in 6.094603095s + +• [SLOW TEST:15.222 seconds] +[sig-apps] Deployment +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 23:00:43.395: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 +Jun 23 23:00:43.466: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jun 23 23:00:43.487: INFO: Waiting for terminating namespaces to be deleted... 
+Jun 23 23:00:43.490: INFO: +Logging pods the kubelet thinks is on node minion before test +Jun 23 23:00:43.500: INFO: weave-scope-agent-97sw9 from weave started at 2019-06-23 21:01:57 +0000 UTC (1 container statuses recorded) +Jun 23 23:00:43.500: INFO: Container agent ready: true, restart count 0 +Jun 23 23:00:43.500: INFO: weave-net-6ckzc from kube-system started at 2019-06-23 21:00:31 +0000 UTC (2 container statuses recorded) +Jun 23 23:00:43.500: INFO: Container weave ready: true, restart count 0 +Jun 23 23:00:43.500: INFO: Container weave-npc ready: true, restart count 0 +Jun 23 23:00:43.500: INFO: weave-scope-app-554f7c7d88-5gkst from weave started at 2019-06-23 21:01:57 +0000 UTC (1 container statuses recorded) +Jun 23 23:00:43.500: INFO: Container app ready: true, restart count 0 +Jun 23 23:00:43.500: INFO: sonobuoy-systemd-logs-daemon-set-ad4137666e344d9a-fn99n from heptio-sonobuoy started at 2019-06-23 21:11:51 +0000 UTC (2 container statuses recorded) +Jun 23 23:00:43.500: INFO: Container sonobuoy-worker ready: true, restart count 1 +Jun 23 23:00:43.500: INFO: Container systemd-logs ready: true, restart count 1 +Jun 23 23:00:43.500: INFO: nodelocaldns-dfk9g from kube-system started at 2019-06-23 21:01:14 +0000 UTC (1 container statuses recorded) +Jun 23 23:00:43.500: INFO: Container node-cache ready: true, restart count 0 +Jun 23 23:00:43.500: INFO: kubernetes-dashboard-7f5cd8fd66-hc5vw from kube-system started at 2019-06-23 21:01:17 +0000 UTC (1 container statuses recorded) +Jun 23 23:00:43.500: INFO: Container kubernetes-dashboard ready: true, restart count 0 +Jun 23 23:00:43.500: INFO: sonobuoy from heptio-sonobuoy started at 2019-06-23 21:11:45 +0000 UTC (1 container statuses recorded) +Jun 23 23:00:43.500: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jun 23 23:00:43.500: INFO: kube-proxy-vhhgh from kube-system started at 2019-06-23 21:00:40 +0000 UTC (1 container statuses recorded) +Jun 23 23:00:43.500: INFO: Container kube-proxy ready: true, restart count 0 +Jun 23 23:00:43.500: INFO: coredns-f9d858bbd-xfbr4 from kube-system started at 2019-06-23 21:01:13 +0000 UTC (1 container statuses recorded) +Jun 23 23:00:43.500: INFO: Container coredns ready: true, restart count 0 +Jun 23 23:00:43.500: INFO: sonobuoy-e2e-job-b3c813a489584c2d from heptio-sonobuoy started at 2019-06-23 21:11:51 +0000 UTC (2 container statuses recorded) +Jun 23 23:00:43.500: INFO: Container e2e ready: true, restart count 0 +Jun 23 23:00:43.500: INFO: Container sonobuoy-worker ready: true, restart count 0 +Jun 23 23:00:43.500: INFO: nginx-proxy-minion from kube-system started at (0 container statuses recorded) +[It] validates that NodeSelector is respected if matching [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-bcf1003a-960a-11e9-9086-ba438756bc32 42 +STEP: Trying to relaunch the pod, now with labels. 
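+
+At this step the test relaunches the pod with a nodeSelector matching the random label it
+just applied to the node. The relaunched pod is shaped roughly like this (the label
+key/value are taken from the log; the pod name and image are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: with-labels-demo         # hypothetical name
+spec:
+  nodeSelector:
+    kubernetes.io/e2e-bcf1003a-960a-11e9-9086-ba438756bc32: "42"
+  containers:
+  - name: with-labels
+    image: k8s.gcr.io/pause:3.1  # illustrative image
+```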
+STEP: removing the label kubernetes.io/e2e-bcf1003a-960a-11e9-9086-ba438756bc32 off the node minion +STEP: verifying the node doesn't have the label kubernetes.io/e2e-bcf1003a-960a-11e9-9086-ba438756bc32 +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 23:00:51.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-sched-pred-jthrg" for this suite. +Jun 23 23:01:09.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 23:01:09.631: INFO: namespace: e2e-tests-sched-pred-jthrg, resource: bindings, ignored listing per whitelist +Jun 23 23:01:09.654: INFO: namespace e2e-tests-sched-pred-jthrg deletion completed in 18.091681096s +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 + +• [SLOW TEST:26.259 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 + validates that NodeSelector is respected if matching [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-node] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 23:01:09.654: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap e2e-tests-configmap-m8vww/configmap-test-ca2c2f22-960a-11e9-9086-ba438756bc32 +STEP: Creating a pod to test consume configMaps +Jun 23 23:01:09.733: INFO: Waiting up to 5m0s for pod "pod-configmaps-ca2cc589-960a-11e9-9086-ba438756bc32" in namespace "e2e-tests-configmap-m8vww" to be "success or failure" +Jun 23 23:01:09.736: INFO: Pod "pod-configmaps-ca2cc589-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.660849ms +Jun 23 23:01:11.740: INFO: Pod "pod-configmaps-ca2cc589-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006185712s +Jun 23 23:01:13.743: INFO: Pod "pod-configmaps-ca2cc589-960a-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009859696s +STEP: Saw pod success +Jun 23 23:01:13.743: INFO: Pod "pod-configmaps-ca2cc589-960a-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 23:01:13.746: INFO: Trying to get logs from node minion pod pod-configmaps-ca2cc589-960a-11e9-9086-ba438756bc32 container env-test: +STEP: delete the pod +Jun 23 23:01:13.765: INFO: Waiting for pod pod-configmaps-ca2cc589-960a-11e9-9086-ba438756bc32 to disappear +Jun 23 23:01:13.767: INFO: Pod pod-configmaps-ca2cc589-960a-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 23:01:13.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-configmap-m8vww" for this suite. +Jun 23 23:01:19.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 23:01:19.805: INFO: namespace: e2e-tests-configmap-m8vww, resource: bindings, ignored listing per whitelist +Jun 23 23:01:19.864: INFO: namespace e2e-tests-configmap-m8vww deletion completed in 6.093126335s + +• [SLOW TEST:10.210 seconds] +[sig-node] ConfigMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 23:01:19.864: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename svc-latency +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: creating replication controller svc-latency-rc in namespace e2e-tests-svc-latency-9mdsq +I0623 23:01:19.939921 20 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: e2e-tests-svc-latency-9mdsq, replica count: 1 +I0623 23:01:20.990315 20 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0623 23:01:21.990538 20 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0623 23:01:22.990770 20 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jun 23 23:01:23.098: INFO: Created: latency-svc-9rf95 +Jun 23 23:01:23.109: INFO: Got endpoints: latency-svc-9rf95 [18.854495ms] +Jun 23 23:01:23.117: INFO: Created: latency-svc-548hv +Jun 23 
23:01:23.123: INFO: Got endpoints: latency-svc-548hv [13.838656ms] +Jun 23 23:01:23.125: INFO: Created: latency-svc-p569g +Jun 23 23:01:23.132: INFO: Got endpoints: latency-svc-p569g [22.35143ms] +Jun 23 23:01:23.134: INFO: Created: latency-svc-c695v +Jun 23 23:01:23.141: INFO: Got endpoints: latency-svc-c695v [31.24223ms] +Jun 23 23:01:23.143: INFO: Created: latency-svc-gjdtq +Jun 23 23:01:23.149: INFO: Got endpoints: latency-svc-gjdtq [39.793428ms] +Jun 23 23:01:23.151: INFO: Created: latency-svc-x9zv4 +Jun 23 23:01:23.158: INFO: Got endpoints: latency-svc-x9zv4 [48.853283ms] +Jun 23 23:01:23.160: INFO: Created: latency-svc-wtwvd +Jun 23 23:01:23.167: INFO: Got endpoints: latency-svc-wtwvd [57.135198ms] +Jun 23 23:01:23.169: INFO: Created: latency-svc-qvdv2 +Jun 23 23:01:23.175: INFO: Got endpoints: latency-svc-qvdv2 [65.591348ms] +Jun 23 23:01:23.177: INFO: Created: latency-svc-jkhlh +Jun 23 23:01:23.184: INFO: Got endpoints: latency-svc-jkhlh [74.31305ms] +Jun 23 23:01:23.187: INFO: Created: latency-svc-ddxlh +Jun 23 23:01:23.195: INFO: Got endpoints: latency-svc-ddxlh [84.782497ms] +Jun 23 23:01:23.203: INFO: Created: latency-svc-f5pqx +Jun 23 23:01:23.211: INFO: Got endpoints: latency-svc-f5pqx [101.044033ms] +Jun 23 23:01:23.212: INFO: Created: latency-svc-dlhkl +Jun 23 23:01:23.219: INFO: Got endpoints: latency-svc-dlhkl [109.514146ms] +Jun 23 23:01:23.221: INFO: Created: latency-svc-kb9hs +Jun 23 23:01:23.228: INFO: Got endpoints: latency-svc-kb9hs [117.976467ms] +Jun 23 23:01:23.230: INFO: Created: latency-svc-27xd9 +Jun 23 23:01:23.236: INFO: Got endpoints: latency-svc-27xd9 [126.571239ms] +Jun 23 23:01:23.238: INFO: Created: latency-svc-jdwlg +Jun 23 23:01:23.245: INFO: Got endpoints: latency-svc-jdwlg [135.268779ms] +Jun 23 23:01:23.247: INFO: Created: latency-svc-t67jm +Jun 23 23:01:23.255: INFO: Got endpoints: latency-svc-t67jm [145.263324ms] +Jun 23 23:01:23.257: INFO: Created: latency-svc-7s5zx +Jun 23 23:01:23.263: INFO: Got endpoints: latency-svc-7s5zx [139.883634ms] +Jun 23 23:01:23.272: INFO: Created: latency-svc-6629t +Jun 23 23:01:23.280: INFO: Got endpoints: latency-svc-6629t [147.459443ms] +Jun 23 23:01:23.282: INFO: Created: latency-svc-9fcgp +Jun 23 23:01:23.289: INFO: Got endpoints: latency-svc-9fcgp [147.706606ms] +Jun 23 23:01:23.290: INFO: Created: latency-svc-qmp5m +Jun 23 23:01:23.297: INFO: Got endpoints: latency-svc-qmp5m [147.822426ms] +Jun 23 23:01:23.299: INFO: Created: latency-svc-rxsfc +Jun 23 23:01:23.306: INFO: Got endpoints: latency-svc-rxsfc [147.32269ms] +Jun 23 23:01:23.309: INFO: Created: latency-svc-wrxc7 +Jun 23 23:01:23.316: INFO: Got endpoints: latency-svc-wrxc7 [149.254892ms] +Jun 23 23:01:23.318: INFO: Created: latency-svc-qvtkw +Jun 23 23:01:23.325: INFO: Got endpoints: latency-svc-qvtkw [149.337783ms] +Jun 23 23:01:23.327: INFO: Created: latency-svc-dptn8 +Jun 23 23:01:23.333: INFO: Got endpoints: latency-svc-dptn8 [149.341909ms] +Jun 23 23:01:23.335: INFO: Created: latency-svc-5sqr7 +Jun 23 23:01:23.342: INFO: Got endpoints: latency-svc-5sqr7 [147.338871ms] +Jun 23 23:01:23.344: INFO: Created: latency-svc-78wsn +Jun 23 23:01:23.351: INFO: Got endpoints: latency-svc-78wsn [139.785296ms] +Jun 23 23:01:23.352: INFO: Created: latency-svc-jfkll +Jun 23 23:01:23.360: INFO: Got endpoints: latency-svc-jfkll [140.549638ms] +Jun 23 23:01:23.376: INFO: Created: latency-svc-hdfd8 +Jun 23 23:01:23.379: INFO: Got endpoints: latency-svc-hdfd8 [151.287305ms] +Jun 23 23:01:23.387: INFO: Created: latency-svc-qpslx +Jun 23 23:01:23.390: INFO: Got 
endpoints: latency-svc-qpslx [153.232196ms] +Jun 23 23:01:23.397: INFO: Created: latency-svc-zr489 +Jun 23 23:01:23.406: INFO: Got endpoints: latency-svc-zr489 [160.981222ms] +Jun 23 23:01:23.406: INFO: Created: latency-svc-gqpgg +Jun 23 23:01:23.414: INFO: Got endpoints: latency-svc-gqpgg [158.736704ms] +Jun 23 23:01:23.416: INFO: Created: latency-svc-xrbcm +Jun 23 23:01:23.422: INFO: Got endpoints: latency-svc-xrbcm [159.100613ms] +Jun 23 23:01:23.425: INFO: Created: latency-svc-6fmch +Jun 23 23:01:23.431: INFO: Got endpoints: latency-svc-6fmch [151.796401ms] +Jun 23 23:01:23.440: INFO: Created: latency-svc-bjchf +Jun 23 23:01:23.448: INFO: Got endpoints: latency-svc-bjchf [159.124321ms] +Jun 23 23:01:23.450: INFO: Created: latency-svc-5b5r6 +Jun 23 23:01:23.456: INFO: Got endpoints: latency-svc-5b5r6 [159.298387ms] +Jun 23 23:01:23.458: INFO: Created: latency-svc-c4xjv +Jun 23 23:01:23.465: INFO: Got endpoints: latency-svc-c4xjv [159.032184ms] +Jun 23 23:01:23.467: INFO: Created: latency-svc-tlpfl +Jun 23 23:01:23.473: INFO: Got endpoints: latency-svc-tlpfl [157.318228ms] +Jun 23 23:01:23.475: INFO: Created: latency-svc-khrsw +Jun 23 23:01:23.485: INFO: Created: latency-svc-jlkvf +Jun 23 23:01:23.493: INFO: Created: latency-svc-5bvbw +Jun 23 23:01:23.501: INFO: Got endpoints: latency-svc-khrsw [176.186108ms] +Jun 23 23:01:23.509: INFO: Created: latency-svc-2lhz4 +Jun 23 23:01:23.518: INFO: Created: latency-svc-fm4cl +Jun 23 23:01:23.527: INFO: Created: latency-svc-2cxnm +Jun 23 23:01:23.536: INFO: Created: latency-svc-qzb92 +Jun 23 23:01:23.544: INFO: Created: latency-svc-4fmv5 +Jun 23 23:01:23.551: INFO: Got endpoints: latency-svc-jlkvf [217.697414ms] +Jun 23 23:01:23.553: INFO: Created: latency-svc-khg6w +Jun 23 23:01:23.562: INFO: Created: latency-svc-9kjfp +Jun 23 23:01:23.570: INFO: Created: latency-svc-b9dxj +Jun 23 23:01:23.579: INFO: Created: latency-svc-dtxqt +Jun 23 23:01:23.588: INFO: Created: latency-svc-9p6pg +Jun 23 23:01:23.604: INFO: Got endpoints: latency-svc-5bvbw [261.619111ms] +Jun 23 23:01:23.604: INFO: Created: latency-svc-rwdvn +Jun 23 23:01:23.613: INFO: Created: latency-svc-p4z7f +Jun 23 23:01:23.622: INFO: Created: latency-svc-bfbrc +Jun 23 23:01:23.630: INFO: Created: latency-svc-b692h +Jun 23 23:01:23.639: INFO: Created: latency-svc-xhzfz +Jun 23 23:01:23.654: INFO: Got endpoints: latency-svc-2lhz4 [303.101623ms] +Jun 23 23:01:23.660: INFO: Created: latency-svc-sgqrb +Jun 23 23:01:23.701: INFO: Got endpoints: latency-svc-fm4cl [341.461208ms] +Jun 23 23:01:23.708: INFO: Created: latency-svc-fgmzt +Jun 23 23:01:23.751: INFO: Got endpoints: latency-svc-2cxnm [372.182355ms] +Jun 23 23:01:23.762: INFO: Created: latency-svc-5lt6g +Jun 23 23:01:23.802: INFO: Got endpoints: latency-svc-qzb92 [412.207586ms] +Jun 23 23:01:23.807: INFO: Created: latency-svc-c4whw +Jun 23 23:01:23.851: INFO: Got endpoints: latency-svc-4fmv5 [445.132018ms] +Jun 23 23:01:23.858: INFO: Created: latency-svc-xpcjs +Jun 23 23:01:23.901: INFO: Got endpoints: latency-svc-khg6w [487.680449ms] +Jun 23 23:01:23.911: INFO: Created: latency-svc-tnm9j +Jun 23 23:01:23.951: INFO: Got endpoints: latency-svc-9kjfp [528.998533ms] +Jun 23 23:01:23.959: INFO: Created: latency-svc-67ngh +Jun 23 23:01:24.001: INFO: Got endpoints: latency-svc-b9dxj [569.861877ms] +Jun 23 23:01:24.008: INFO: Created: latency-svc-5s7f5 +Jun 23 23:01:24.051: INFO: Got endpoints: latency-svc-dtxqt [603.423715ms] +Jun 23 23:01:24.060: INFO: Created: latency-svc-mjrxs +Jun 23 23:01:24.101: INFO: Got endpoints: latency-svc-9p6pg 
[644.815272ms] +Jun 23 23:01:24.108: INFO: Created: latency-svc-9qfcn +Jun 23 23:01:24.152: INFO: Got endpoints: latency-svc-rwdvn [686.748285ms] +Jun 23 23:01:24.159: INFO: Created: latency-svc-jpbqx +Jun 23 23:01:24.202: INFO: Got endpoints: latency-svc-p4z7f [728.570102ms] +Jun 23 23:01:24.208: INFO: Created: latency-svc-5hkxs +Jun 23 23:01:24.251: INFO: Got endpoints: latency-svc-bfbrc [750.520397ms] +Jun 23 23:01:24.258: INFO: Created: latency-svc-ngp44 +Jun 23 23:01:24.301: INFO: Got endpoints: latency-svc-b692h [750.196812ms] +Jun 23 23:01:24.308: INFO: Created: latency-svc-wqqpm +Jun 23 23:01:24.351: INFO: Got endpoints: latency-svc-xhzfz [747.75745ms] +Jun 23 23:01:24.358: INFO: Created: latency-svc-96kxm +Jun 23 23:01:24.403: INFO: Got endpoints: latency-svc-sgqrb [749.78029ms] +Jun 23 23:01:24.410: INFO: Created: latency-svc-8xwn8 +Jun 23 23:01:24.451: INFO: Got endpoints: latency-svc-fgmzt [750.206192ms] +Jun 23 23:01:24.458: INFO: Created: latency-svc-z6bzc +Jun 23 23:01:24.501: INFO: Got endpoints: latency-svc-5lt6g [750.092627ms] +Jun 23 23:01:24.510: INFO: Created: latency-svc-kg54h +Jun 23 23:01:24.551: INFO: Got endpoints: latency-svc-c4whw [749.24614ms] +Jun 23 23:01:24.558: INFO: Created: latency-svc-l5887 +Jun 23 23:01:24.601: INFO: Got endpoints: latency-svc-xpcjs [750.064427ms] +Jun 23 23:01:24.608: INFO: Created: latency-svc-zpt2h +Jun 23 23:01:24.651: INFO: Got endpoints: latency-svc-tnm9j [749.702808ms] +Jun 23 23:01:24.658: INFO: Created: latency-svc-cbsj5 +Jun 23 23:01:24.701: INFO: Got endpoints: latency-svc-67ngh [749.948109ms] +Jun 23 23:01:24.708: INFO: Created: latency-svc-qv8p5 +Jun 23 23:01:24.751: INFO: Got endpoints: latency-svc-5s7f5 [749.992615ms] +Jun 23 23:01:24.759: INFO: Created: latency-svc-7tmcl +Jun 23 23:01:24.801: INFO: Got endpoints: latency-svc-mjrxs [750.15348ms] +Jun 23 23:01:24.808: INFO: Created: latency-svc-p9lsj +Jun 23 23:01:24.851: INFO: Got endpoints: latency-svc-9qfcn [749.81119ms] +Jun 23 23:01:24.858: INFO: Created: latency-svc-d25zg +Jun 23 23:01:24.901: INFO: Got endpoints: latency-svc-jpbqx [749.414048ms] +Jun 23 23:01:24.908: INFO: Created: latency-svc-rb6nr +Jun 23 23:01:24.951: INFO: Got endpoints: latency-svc-5hkxs [749.342958ms] +Jun 23 23:01:24.958: INFO: Created: latency-svc-wltx7 +Jun 23 23:01:25.001: INFO: Got endpoints: latency-svc-ngp44 [749.707363ms] +Jun 23 23:01:25.008: INFO: Created: latency-svc-w8p99 +Jun 23 23:01:25.051: INFO: Got endpoints: latency-svc-wqqpm [749.722035ms] +Jun 23 23:01:25.058: INFO: Created: latency-svc-wvhmk +Jun 23 23:01:25.102: INFO: Got endpoints: latency-svc-96kxm [750.693045ms] +Jun 23 23:01:25.109: INFO: Created: latency-svc-zntjg +Jun 23 23:01:25.151: INFO: Got endpoints: latency-svc-8xwn8 [747.778631ms] +Jun 23 23:01:25.158: INFO: Created: latency-svc-9fpdl +Jun 23 23:01:25.201: INFO: Got endpoints: latency-svc-z6bzc [749.693269ms] +Jun 23 23:01:25.208: INFO: Created: latency-svc-56nnv +Jun 23 23:01:25.251: INFO: Got endpoints: latency-svc-kg54h [749.893467ms] +Jun 23 23:01:25.258: INFO: Created: latency-svc-l5rv6 +Jun 23 23:01:25.301: INFO: Got endpoints: latency-svc-l5887 [750.232858ms] +Jun 23 23:01:25.314: INFO: Created: latency-svc-pnmtn +Jun 23 23:01:25.351: INFO: Got endpoints: latency-svc-zpt2h [749.762716ms] +Jun 23 23:01:25.358: INFO: Created: latency-svc-qg674 +Jun 23 23:01:25.401: INFO: Got endpoints: latency-svc-cbsj5 [749.971405ms] +Jun 23 23:01:25.408: INFO: Created: latency-svc-52lx9 +Jun 23 23:01:25.451: INFO: Got endpoints: latency-svc-qv8p5 [749.982934ms] +Jun 23 
23:01:25.459: INFO: Created: latency-svc-dkpzj +Jun 23 23:01:25.501: INFO: Got endpoints: latency-svc-7tmcl [749.705481ms] +Jun 23 23:01:25.510: INFO: Created: latency-svc-cmx8j +Jun 23 23:01:25.551: INFO: Got endpoints: latency-svc-p9lsj [750.041551ms] +Jun 23 23:01:25.558: INFO: Created: latency-svc-7fjgr +Jun 23 23:01:25.601: INFO: Got endpoints: latency-svc-d25zg [749.999828ms] +Jun 23 23:01:25.608: INFO: Created: latency-svc-z47fk +Jun 23 23:01:25.651: INFO: Got endpoints: latency-svc-rb6nr [750.017279ms] +Jun 23 23:01:25.663: INFO: Created: latency-svc-49tj5 +Jun 23 23:01:25.701: INFO: Got endpoints: latency-svc-wltx7 [749.738186ms] +Jun 23 23:01:25.708: INFO: Created: latency-svc-nxkcb +Jun 23 23:01:25.751: INFO: Got endpoints: latency-svc-w8p99 [750.039602ms] +Jun 23 23:01:25.760: INFO: Created: latency-svc-v22f9 +Jun 23 23:01:25.801: INFO: Got endpoints: latency-svc-wvhmk [750.1878ms] +Jun 23 23:01:25.809: INFO: Created: latency-svc-6vv2j +Jun 23 23:01:25.851: INFO: Got endpoints: latency-svc-zntjg [749.249737ms] +Jun 23 23:01:25.858: INFO: Created: latency-svc-p2ksh +Jun 23 23:01:25.901: INFO: Got endpoints: latency-svc-9fpdl [749.845749ms] +Jun 23 23:01:25.910: INFO: Created: latency-svc-r7c5z +Jun 23 23:01:25.951: INFO: Got endpoints: latency-svc-56nnv [749.929824ms] +Jun 23 23:01:25.958: INFO: Created: latency-svc-wv9lv +Jun 23 23:01:26.001: INFO: Got endpoints: latency-svc-l5rv6 [749.637575ms] +Jun 23 23:01:26.008: INFO: Created: latency-svc-h99hf +Jun 23 23:01:26.051: INFO: Got endpoints: latency-svc-pnmtn [749.900411ms] +Jun 23 23:01:26.060: INFO: Created: latency-svc-sc4vb +Jun 23 23:01:26.101: INFO: Got endpoints: latency-svc-qg674 [749.854813ms] +Jun 23 23:01:26.108: INFO: Created: latency-svc-jvs6z +Jun 23 23:01:26.151: INFO: Got endpoints: latency-svc-52lx9 [749.869421ms] +Jun 23 23:01:26.159: INFO: Created: latency-svc-8v6g8 +Jun 23 23:01:26.201: INFO: Got endpoints: latency-svc-dkpzj [749.659095ms] +Jun 23 23:01:26.210: INFO: Created: latency-svc-crbl7 +Jun 23 23:01:26.251: INFO: Got endpoints: latency-svc-cmx8j [750.124461ms] +Jun 23 23:01:26.258: INFO: Created: latency-svc-mgfwd +Jun 23 23:01:26.301: INFO: Got endpoints: latency-svc-7fjgr [749.745365ms] +Jun 23 23:01:26.308: INFO: Created: latency-svc-vlwlf +Jun 23 23:01:26.351: INFO: Got endpoints: latency-svc-z47fk [750.153923ms] +Jun 23 23:01:26.364: INFO: Created: latency-svc-r86x4 +Jun 23 23:01:26.401: INFO: Got endpoints: latency-svc-49tj5 [750.157682ms] +Jun 23 23:01:26.409: INFO: Created: latency-svc-tbpxt +Jun 23 23:01:26.451: INFO: Got endpoints: latency-svc-nxkcb [750.047543ms] +Jun 23 23:01:26.458: INFO: Created: latency-svc-5rbk6 +Jun 23 23:01:26.501: INFO: Got endpoints: latency-svc-v22f9 [750.117652ms] +Jun 23 23:01:26.508: INFO: Created: latency-svc-lcbcz +Jun 23 23:01:26.551: INFO: Got endpoints: latency-svc-6vv2j [749.939372ms] +Jun 23 23:01:26.558: INFO: Created: latency-svc-q46d9 +Jun 23 23:01:26.601: INFO: Got endpoints: latency-svc-p2ksh [749.674169ms] +Jun 23 23:01:26.608: INFO: Created: latency-svc-4j8dt +Jun 23 23:01:26.651: INFO: Got endpoints: latency-svc-r7c5z [750.125711ms] +Jun 23 23:01:26.658: INFO: Created: latency-svc-n2kmg +Jun 23 23:01:26.701: INFO: Got endpoints: latency-svc-wv9lv [749.712893ms] +Jun 23 23:01:26.707: INFO: Created: latency-svc-vwf2p +Jun 23 23:01:26.751: INFO: Got endpoints: latency-svc-h99hf [749.945171ms] +Jun 23 23:01:26.758: INFO: Created: latency-svc-wcx99 +Jun 23 23:01:26.801: INFO: Got endpoints: latency-svc-sc4vb [750.031146ms] +Jun 23 23:01:26.809: INFO: 
Created: latency-svc-qz8xf +Jun 23 23:01:26.851: INFO: Got endpoints: latency-svc-jvs6z [750.198487ms] +Jun 23 23:01:26.858: INFO: Created: latency-svc-s6nb9 +Jun 23 23:01:26.901: INFO: Got endpoints: latency-svc-8v6g8 [750.031446ms] +Jun 23 23:01:26.908: INFO: Created: latency-svc-gnwg8 +Jun 23 23:01:26.952: INFO: Got endpoints: latency-svc-crbl7 [750.480332ms] +Jun 23 23:01:26.958: INFO: Created: latency-svc-sjl8t +Jun 23 23:01:27.001: INFO: Got endpoints: latency-svc-mgfwd [749.862796ms] +Jun 23 23:01:27.008: INFO: Created: latency-svc-sm62x +Jun 23 23:01:27.051: INFO: Got endpoints: latency-svc-vlwlf [749.850968ms] +Jun 23 23:01:27.058: INFO: Created: latency-svc-766nt +Jun 23 23:01:27.101: INFO: Got endpoints: latency-svc-r86x4 [749.81783ms] +Jun 23 23:01:27.108: INFO: Created: latency-svc-8v4mq +Jun 23 23:01:27.151: INFO: Got endpoints: latency-svc-tbpxt [749.86895ms] +Jun 23 23:01:27.158: INFO: Created: latency-svc-7jmcg +Jun 23 23:01:27.201: INFO: Got endpoints: latency-svc-5rbk6 [749.980676ms] +Jun 23 23:01:27.208: INFO: Created: latency-svc-7thvl +Jun 23 23:01:27.252: INFO: Got endpoints: latency-svc-lcbcz [750.008416ms] +Jun 23 23:01:27.258: INFO: Created: latency-svc-x7n5k +Jun 23 23:01:27.301: INFO: Got endpoints: latency-svc-q46d9 [749.812556ms] +Jun 23 23:01:27.308: INFO: Created: latency-svc-5f5nk +Jun 23 23:01:27.351: INFO: Got endpoints: latency-svc-4j8dt [750.161843ms] +Jun 23 23:01:27.358: INFO: Created: latency-svc-lr5g2 +Jun 23 23:01:27.401: INFO: Got endpoints: latency-svc-n2kmg [749.967601ms] +Jun 23 23:01:27.408: INFO: Created: latency-svc-cnjr2 +Jun 23 23:01:27.451: INFO: Got endpoints: latency-svc-vwf2p [750.358618ms] +Jun 23 23:01:27.458: INFO: Created: latency-svc-gnt29 +Jun 23 23:01:27.502: INFO: Got endpoints: latency-svc-wcx99 [750.42998ms] +Jun 23 23:01:27.508: INFO: Created: latency-svc-mzp46 +Jun 23 23:01:27.555: INFO: Got endpoints: latency-svc-qz8xf [753.029574ms] +Jun 23 23:01:27.561: INFO: Created: latency-svc-xjnfk +Jun 23 23:01:27.601: INFO: Got endpoints: latency-svc-s6nb9 [749.84525ms] +Jun 23 23:01:27.608: INFO: Created: latency-svc-vxcj5 +Jun 23 23:01:27.651: INFO: Got endpoints: latency-svc-gnwg8 [749.966813ms] +Jun 23 23:01:27.665: INFO: Created: latency-svc-z4r6f +Jun 23 23:01:27.701: INFO: Got endpoints: latency-svc-sjl8t [749.59835ms] +Jun 23 23:01:27.708: INFO: Created: latency-svc-xzjnb +Jun 23 23:01:27.751: INFO: Got endpoints: latency-svc-sm62x [749.85013ms] +Jun 23 23:01:27.758: INFO: Created: latency-svc-4sbnl +Jun 23 23:01:27.802: INFO: Got endpoints: latency-svc-766nt [750.390541ms] +Jun 23 23:01:27.808: INFO: Created: latency-svc-8bs5b +Jun 23 23:01:27.851: INFO: Got endpoints: latency-svc-8v4mq [749.85041ms] +Jun 23 23:01:27.858: INFO: Created: latency-svc-6pf6m +Jun 23 23:01:27.901: INFO: Got endpoints: latency-svc-7jmcg [749.758325ms] +Jun 23 23:01:27.908: INFO: Created: latency-svc-mvwsv +Jun 23 23:01:27.952: INFO: Got endpoints: latency-svc-7thvl [750.193043ms] +Jun 23 23:01:27.958: INFO: Created: latency-svc-wdhkm +Jun 23 23:01:28.002: INFO: Got endpoints: latency-svc-x7n5k [750.060372ms] +Jun 23 23:01:28.011: INFO: Created: latency-svc-dr6j9 +Jun 23 23:01:28.051: INFO: Got endpoints: latency-svc-5f5nk [749.876447ms] +Jun 23 23:01:28.058: INFO: Created: latency-svc-5mfhp +Jun 23 23:01:28.102: INFO: Got endpoints: latency-svc-lr5g2 [750.249845ms] +Jun 23 23:01:28.108: INFO: Created: latency-svc-4mb7t +Jun 23 23:01:28.151: INFO: Got endpoints: latency-svc-cnjr2 [749.945379ms] +Jun 23 23:01:28.160: INFO: Created: 
latency-svc-s5dzc +Jun 23 23:01:28.210: INFO: Got endpoints: latency-svc-gnt29 [758.337592ms] +Jun 23 23:01:28.217: INFO: Created: latency-svc-h542b +Jun 23 23:01:28.251: INFO: Got endpoints: latency-svc-mzp46 [749.667649ms] +Jun 23 23:01:28.258: INFO: Created: latency-svc-ml9hr +Jun 23 23:01:28.302: INFO: Got endpoints: latency-svc-xjnfk [747.022303ms] +Jun 23 23:01:28.327: INFO: Created: latency-svc-lnsvc +Jun 23 23:01:28.351: INFO: Got endpoints: latency-svc-vxcj5 [749.944638ms] +Jun 23 23:01:28.358: INFO: Created: latency-svc-vcq7t +Jun 23 23:01:28.401: INFO: Got endpoints: latency-svc-z4r6f [750.093093ms] +Jun 23 23:01:28.408: INFO: Created: latency-svc-vglx6 +Jun 23 23:01:28.452: INFO: Got endpoints: latency-svc-xzjnb [750.10383ms] +Jun 23 23:01:28.458: INFO: Created: latency-svc-bqntx +Jun 23 23:01:28.502: INFO: Got endpoints: latency-svc-4sbnl [750.35592ms] +Jun 23 23:01:28.508: INFO: Created: latency-svc-zsvzk +Jun 23 23:01:28.551: INFO: Got endpoints: latency-svc-8bs5b [749.7958ms] +Jun 23 23:01:28.558: INFO: Created: latency-svc-szkkn +Jun 23 23:01:28.601: INFO: Got endpoints: latency-svc-6pf6m [750.003957ms] +Jun 23 23:01:28.610: INFO: Created: latency-svc-hqxb5 +Jun 23 23:01:28.654: INFO: Got endpoints: latency-svc-mvwsv [752.876419ms] +Jun 23 23:01:28.661: INFO: Created: latency-svc-j2krx +Jun 23 23:01:28.701: INFO: Got endpoints: latency-svc-wdhkm [749.722286ms] +Jun 23 23:01:28.708: INFO: Created: latency-svc-ngl6z +Jun 23 23:01:28.751: INFO: Got endpoints: latency-svc-dr6j9 [749.750284ms] +Jun 23 23:01:28.772: INFO: Created: latency-svc-kjvg6 +Jun 23 23:01:28.801: INFO: Got endpoints: latency-svc-5mfhp [750.122991ms] +Jun 23 23:01:28.808: INFO: Created: latency-svc-cvrdz +Jun 23 23:01:28.851: INFO: Got endpoints: latency-svc-4mb7t [749.385954ms] +Jun 23 23:01:28.858: INFO: Created: latency-svc-khhml +Jun 23 23:01:28.901: INFO: Got endpoints: latency-svc-s5dzc [750.00408ms] +Jun 23 23:01:28.911: INFO: Created: latency-svc-4stzw +Jun 23 23:01:28.951: INFO: Got endpoints: latency-svc-h542b [741.400862ms] +Jun 23 23:01:28.958: INFO: Created: latency-svc-8rq78 +Jun 23 23:01:29.001: INFO: Got endpoints: latency-svc-ml9hr [749.856743ms] +Jun 23 23:01:29.008: INFO: Created: latency-svc-mm6p2 +Jun 23 23:01:29.051: INFO: Got endpoints: latency-svc-lnsvc [749.531294ms] +Jun 23 23:01:29.060: INFO: Created: latency-svc-bvz9x +Jun 23 23:01:29.101: INFO: Got endpoints: latency-svc-vcq7t [749.938151ms] +Jun 23 23:01:29.108: INFO: Created: latency-svc-vwvs8 +Jun 23 23:01:29.151: INFO: Got endpoints: latency-svc-vglx6 [749.691383ms] +Jun 23 23:01:29.158: INFO: Created: latency-svc-ss85d +Jun 23 23:01:29.205: INFO: Got endpoints: latency-svc-bqntx [752.991704ms] +Jun 23 23:01:29.214: INFO: Created: latency-svc-kbqsl +Jun 23 23:01:29.251: INFO: Got endpoints: latency-svc-zsvzk [749.575934ms] +Jun 23 23:01:29.258: INFO: Created: latency-svc-874qq +Jun 23 23:01:29.301: INFO: Got endpoints: latency-svc-szkkn [749.898064ms] +Jun 23 23:01:29.316: INFO: Created: latency-svc-8jrss +Jun 23 23:01:29.352: INFO: Got endpoints: latency-svc-hqxb5 [750.456265ms] +Jun 23 23:01:29.359: INFO: Created: latency-svc-rpbhr +Jun 23 23:01:29.401: INFO: Got endpoints: latency-svc-j2krx [747.232458ms] +Jun 23 23:01:29.408: INFO: Created: latency-svc-w2ftq +Jun 23 23:01:29.451: INFO: Got endpoints: latency-svc-ngl6z [750.036953ms] +Jun 23 23:01:29.459: INFO: Created: latency-svc-fv8fh +Jun 23 23:01:29.501: INFO: Got endpoints: latency-svc-kjvg6 [749.884021ms] +Jun 23 23:01:29.508: INFO: Created: latency-svc-b5pss +Jun 
23 23:01:29.551: INFO: Got endpoints: latency-svc-cvrdz [749.756597ms] +Jun 23 23:01:29.558: INFO: Created: latency-svc-nx2gt +Jun 23 23:01:29.601: INFO: Got endpoints: latency-svc-khhml [750.276979ms] +Jun 23 23:01:29.608: INFO: Created: latency-svc-dqj66 +Jun 23 23:01:29.651: INFO: Got endpoints: latency-svc-4stzw [749.746521ms] +Jun 23 23:01:29.658: INFO: Created: latency-svc-n6dmm +Jun 23 23:01:29.701: INFO: Got endpoints: latency-svc-8rq78 [749.999951ms] +Jun 23 23:01:29.708: INFO: Created: latency-svc-9t6gl +Jun 23 23:01:29.751: INFO: Got endpoints: latency-svc-mm6p2 [750.078194ms] +Jun 23 23:01:29.758: INFO: Created: latency-svc-26zc5 +Jun 23 23:01:29.801: INFO: Got endpoints: latency-svc-bvz9x [750.16712ms] +Jun 23 23:01:29.808: INFO: Created: latency-svc-hcl5v +Jun 23 23:01:29.860: INFO: Got endpoints: latency-svc-vwvs8 [758.637464ms] +Jun 23 23:01:29.866: INFO: Created: latency-svc-28fdr +Jun 23 23:01:29.901: INFO: Got endpoints: latency-svc-ss85d [750.113646ms] +Jun 23 23:01:29.908: INFO: Created: latency-svc-rkwsx +Jun 23 23:01:29.951: INFO: Got endpoints: latency-svc-kbqsl [746.533804ms] +Jun 23 23:01:29.958: INFO: Created: latency-svc-mw9t7 +Jun 23 23:01:30.001: INFO: Got endpoints: latency-svc-874qq [750.10396ms] +Jun 23 23:01:30.008: INFO: Created: latency-svc-6j9mq +Jun 23 23:01:30.052: INFO: Got endpoints: latency-svc-8jrss [750.064163ms] +Jun 23 23:01:30.058: INFO: Created: latency-svc-txkct +Jun 23 23:01:30.101: INFO: Got endpoints: latency-svc-rpbhr [749.503787ms] +Jun 23 23:01:30.108: INFO: Created: latency-svc-b7t9g +Jun 23 23:01:30.151: INFO: Got endpoints: latency-svc-w2ftq [749.793523ms] +Jun 23 23:01:30.158: INFO: Created: latency-svc-wvx4j +Jun 23 23:01:30.202: INFO: Got endpoints: latency-svc-fv8fh [750.075419ms] +Jun 23 23:01:30.208: INFO: Created: latency-svc-5grpr +Jun 23 23:01:30.252: INFO: Got endpoints: latency-svc-b5pss [750.140526ms] +Jun 23 23:01:30.258: INFO: Created: latency-svc-4qjs4 +Jun 23 23:01:30.301: INFO: Got endpoints: latency-svc-nx2gt [750.023572ms] +Jun 23 23:01:30.308: INFO: Created: latency-svc-kp57f +Jun 23 23:01:30.351: INFO: Got endpoints: latency-svc-dqj66 [749.641415ms] +Jun 23 23:01:30.358: INFO: Created: latency-svc-zl58n +Jun 23 23:01:30.409: INFO: Got endpoints: latency-svc-n6dmm [758.09337ms] +Jun 23 23:01:30.418: INFO: Created: latency-svc-4l4dw +Jun 23 23:01:30.451: INFO: Got endpoints: latency-svc-9t6gl [749.938188ms] +Jun 23 23:01:30.458: INFO: Created: latency-svc-qjm7v +Jun 23 23:01:30.501: INFO: Got endpoints: latency-svc-26zc5 [749.824718ms] +Jun 23 23:01:30.508: INFO: Created: latency-svc-4xvd9 +Jun 23 23:01:30.551: INFO: Got endpoints: latency-svc-hcl5v [749.702993ms] +Jun 23 23:01:30.560: INFO: Created: latency-svc-jxzfs +Jun 23 23:01:30.601: INFO: Got endpoints: latency-svc-28fdr [741.284673ms] +Jun 23 23:01:30.608: INFO: Created: latency-svc-zd5lg +Jun 23 23:01:30.651: INFO: Got endpoints: latency-svc-rkwsx [749.934611ms] +Jun 23 23:01:30.658: INFO: Created: latency-svc-nzzk5 +Jun 23 23:01:30.701: INFO: Got endpoints: latency-svc-mw9t7 [750.004478ms] +Jun 23 23:01:30.710: INFO: Created: latency-svc-rh9rs +Jun 23 23:01:30.751: INFO: Got endpoints: latency-svc-6j9mq [749.597753ms] +Jun 23 23:01:30.758: INFO: Created: latency-svc-x2dsd +Jun 23 23:01:30.801: INFO: Got endpoints: latency-svc-txkct [749.604048ms] +Jun 23 23:01:30.808: INFO: Created: latency-svc-pv4bd +Jun 23 23:01:30.851: INFO: Got endpoints: latency-svc-b7t9g [749.675533ms] +Jun 23 23:01:30.860: INFO: Created: latency-svc-6jvvg +Jun 23 23:01:30.905: 
INFO: Got endpoints: latency-svc-wvx4j [753.590912ms] +Jun 23 23:01:30.912: INFO: Created: latency-svc-zztrx +Jun 23 23:01:30.959: INFO: Got endpoints: latency-svc-5grpr [757.509431ms] +Jun 23 23:01:31.001: INFO: Got endpoints: latency-svc-4qjs4 [749.873193ms] +Jun 23 23:01:31.051: INFO: Got endpoints: latency-svc-kp57f [749.852597ms] +Jun 23 23:01:31.101: INFO: Got endpoints: latency-svc-zl58n [750.110689ms] +Jun 23 23:01:31.151: INFO: Got endpoints: latency-svc-4l4dw [741.66055ms] +Jun 23 23:01:31.201: INFO: Got endpoints: latency-svc-qjm7v [749.721593ms] +Jun 23 23:01:31.251: INFO: Got endpoints: latency-svc-4xvd9 [749.726341ms] +Jun 23 23:01:31.302: INFO: Got endpoints: latency-svc-jxzfs [750.598221ms] +Jun 23 23:01:31.353: INFO: Got endpoints: latency-svc-zd5lg [751.719296ms] +Jun 23 23:01:31.401: INFO: Got endpoints: latency-svc-nzzk5 [750.071138ms] +Jun 23 23:01:31.451: INFO: Got endpoints: latency-svc-rh9rs [750.0244ms] +Jun 23 23:01:31.501: INFO: Got endpoints: latency-svc-x2dsd [750.361205ms] +Jun 23 23:01:31.551: INFO: Got endpoints: latency-svc-pv4bd [750.15374ms] +Jun 23 23:01:31.601: INFO: Got endpoints: latency-svc-6jvvg [750.017764ms] +Jun 23 23:01:31.651: INFO: Got endpoints: latency-svc-zztrx [746.323118ms] +Jun 23 23:01:31.651: INFO: Latencies: [13.838656ms 22.35143ms 31.24223ms 39.793428ms 48.853283ms 57.135198ms 65.591348ms 74.31305ms 84.782497ms 101.044033ms 109.514146ms 117.976467ms 126.571239ms 135.268779ms 139.785296ms 139.883634ms 140.549638ms 145.263324ms 147.32269ms 147.338871ms 147.459443ms 147.706606ms 147.822426ms 149.254892ms 149.337783ms 149.341909ms 151.287305ms 151.796401ms 153.232196ms 157.318228ms 158.736704ms 159.032184ms 159.100613ms 159.124321ms 159.298387ms 160.981222ms 176.186108ms 217.697414ms 261.619111ms 303.101623ms 341.461208ms 372.182355ms 412.207586ms 445.132018ms 487.680449ms 528.998533ms 569.861877ms 603.423715ms 644.815272ms 686.748285ms 728.570102ms 741.284673ms 741.400862ms 741.66055ms 746.323118ms 746.533804ms 747.022303ms 747.232458ms 747.75745ms 747.778631ms 749.24614ms 749.249737ms 749.342958ms 749.385954ms 749.414048ms 749.503787ms 749.531294ms 749.575934ms 749.597753ms 749.59835ms 749.604048ms 749.637575ms 749.641415ms 749.659095ms 749.667649ms 749.674169ms 749.675533ms 749.691383ms 749.693269ms 749.702808ms 749.702993ms 749.705481ms 749.707363ms 749.712893ms 749.721593ms 749.722035ms 749.722286ms 749.726341ms 749.738186ms 749.745365ms 749.746521ms 749.750284ms 749.756597ms 749.758325ms 749.762716ms 749.78029ms 749.793523ms 749.7958ms 749.81119ms 749.812556ms 749.81783ms 749.824718ms 749.84525ms 749.845749ms 749.85013ms 749.85041ms 749.850968ms 749.852597ms 749.854813ms 749.856743ms 749.862796ms 749.86895ms 749.869421ms 749.873193ms 749.876447ms 749.884021ms 749.893467ms 749.898064ms 749.900411ms 749.929824ms 749.934611ms 749.938151ms 749.938188ms 749.939372ms 749.944638ms 749.945171ms 749.945379ms 749.948109ms 749.966813ms 749.967601ms 749.971405ms 749.980676ms 749.982934ms 749.992615ms 749.999828ms 749.999951ms 750.003957ms 750.00408ms 750.004478ms 750.008416ms 750.017279ms 750.017764ms 750.023572ms 750.0244ms 750.031146ms 750.031446ms 750.036953ms 750.039602ms 750.041551ms 750.047543ms 750.060372ms 750.064163ms 750.064427ms 750.071138ms 750.075419ms 750.078194ms 750.092627ms 750.093093ms 750.10383ms 750.10396ms 750.110689ms 750.113646ms 750.117652ms 750.122991ms 750.124461ms 750.125711ms 750.140526ms 750.15348ms 750.15374ms 750.153923ms 750.157682ms 750.161843ms 750.16712ms 750.1878ms 750.193043ms 750.196812ms 750.198487ms 
750.206192ms 750.232858ms 750.249845ms 750.276979ms 750.35592ms 750.358618ms 750.361205ms 750.390541ms 750.42998ms 750.456265ms 750.480332ms 750.520397ms 750.598221ms 750.693045ms 751.719296ms 752.876419ms 752.991704ms 753.029574ms 753.590912ms 757.509431ms 758.09337ms 758.337592ms 758.637464ms] +Jun 23 23:01:31.652: INFO: 50 %ile: 749.81783ms +Jun 23 23:01:31.652: INFO: 90 %ile: 750.276979ms +Jun 23 23:01:31.652: INFO: 99 %ile: 758.337592ms +Jun 23 23:01:31.652: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 23:01:31.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-svc-latency-9mdsq" for this suite. +Jun 23 23:01:45.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 23:01:45.727: INFO: namespace: e2e-tests-svc-latency-9mdsq, resource: bindings, ignored listing per whitelist +Jun 23 23:01:45.751: INFO: namespace e2e-tests-svc-latency-9mdsq deletion completed in 14.094922563s + +• [SLOW TEST:25.887 seconds] +[sig-network] Service endpoints latency +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should not be very high [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 23:01:45.751: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating a pod to test downward api env vars +Jun 23 23:01:45.829: INFO: Waiting up to 5m0s for pod "downward-api-dfb07986-960a-11e9-9086-ba438756bc32" in namespace "e2e-tests-downward-api-k2fn5" to be "success or failure" +Jun 23 23:01:45.832: INFO: Pod "downward-api-dfb07986-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.778096ms +Jun 23 23:01:47.836: INFO: Pod "downward-api-dfb07986-960a-11e9-9086-ba438756bc32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00634163s +Jun 23 23:01:49.839: INFO: Pod "downward-api-dfb07986-960a-11e9-9086-ba438756bc32": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.009778641s +STEP: Saw pod success +Jun 23 23:01:49.839: INFO: Pod "downward-api-dfb07986-960a-11e9-9086-ba438756bc32" satisfied condition "success or failure" +Jun 23 23:01:49.842: INFO: Trying to get logs from node minion pod downward-api-dfb07986-960a-11e9-9086-ba438756bc32 container dapi-container: +STEP: delete the pod +Jun 23 23:01:49.859: INFO: Waiting for pod downward-api-dfb07986-960a-11e9-9086-ba438756bc32 to disappear +Jun 23 23:01:49.864: INFO: Pod downward-api-dfb07986-960a-11e9-9086-ba438756bc32 no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 23:01:49.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-downward-api-k2fn5" for this suite. +Jun 23 23:01:55.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 23:01:55.959: INFO: namespace: e2e-tests-downward-api-k2fn5, resource: bindings, ignored listing per whitelist +Jun 23 23:01:55.965: INFO: namespace e2e-tests-downward-api-k2fn5 deletion completed in 6.096531089s + +• [SLOW TEST:10.214 seconds] +[sig-node] Downward API +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a read only busybox container + should not write to root filesystem [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 23:01:55.965: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[It] should not write to root filesystem [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 23:02:00.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubelet-test-7f7fd" for this suite. 
+Jun 23 23:02:46.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 23:02:46.125: INFO: namespace: e2e-tests-kubelet-test-7f7fd, resource: bindings, ignored listing per whitelist +Jun 23 23:02:46.154: INFO: namespace e2e-tests-kubelet-test-7f7fd deletion completed in 46.091807717s + +• [SLOW TEST:50.190 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694 + when scheduling a read only busybox container + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:186 + should not write to root filesystem [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 23:02:46.155: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: Creating configMap with name cm-test-opt-del-03b16629-960b-11e9-9086-ba438756bc32 +STEP: Creating configMap with name cm-test-opt-upd-03b166a2-960b-11e9-9086-ba438756bc32 +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-03b16629-960b-11e9-9086-ba438756bc32 +STEP: Updating configmap cm-test-opt-upd-03b166a2-960b-11e9-9086-ba438756bc32 +STEP: Creating configMap with name cm-test-opt-create-03b166d6-960b-11e9-9086-ba438756bc32 +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 23:02:54.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-configmap-5spmh" for this suite. 
+Jun 23 23:03:16.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 23 23:03:16.422: INFO: namespace: e2e-tests-configmap-5spmh, resource: bindings, ignored listing per whitelist +Jun 23 23:03:16.429: INFO: namespace e2e-tests-configmap-5spmh deletion completed in 22.089734226s + +• [SLOW TEST:30.274 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +------------------------------ +S +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl label + should update the label on a resource [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153 +STEP: Creating a kubernetes client +Jun 23 23:03:16.429: INFO: >>> kubeConfig: /tmp/kubeconfig-365229432 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:243 +[BeforeEach] [k8s.io] Kubectl label + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1052 +STEP: creating the pod +Jun 23 23:03:16.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 create -f - --namespace=e2e-tests-kubectl-dbbjm' +Jun 23 23:03:17.230: INFO: stderr: "" +Jun 23 23:03:17.230: INFO: stdout: "pod/pause created\n" +Jun 23 23:03:17.230: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] +Jun 23 23:03:17.230: INFO: Waiting up to 5m0s for pod "pause" in namespace "e2e-tests-kubectl-dbbjm" to be "running and ready" +Jun 23 23:03:17.234: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.994233ms +Jun 23 23:03:19.238: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007479097s +Jun 23 23:03:21.241: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.010929855s +Jun 23 23:03:21.241: INFO: Pod "pause" satisfied condition "running and ready" +Jun 23 23:03:21.241: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] +[It] should update the label on a resource [Conformance] + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699 +STEP: adding the label testing-label with value testing-label-value to a pod +Jun 23 23:03:21.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 label pods pause testing-label=testing-label-value --namespace=e2e-tests-kubectl-dbbjm' +Jun 23 23:03:21.376: INFO: stderr: "" +Jun 23 23:03:21.376: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod has the label testing-label with the value testing-label-value +Jun 23 23:03:21.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pod pause -L testing-label --namespace=e2e-tests-kubectl-dbbjm' +Jun 23 23:03:21.518: INFO: stderr: "" +Jun 23 23:03:21.518: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" +STEP: removing the label testing-label of a pod +Jun 23 23:03:21.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 label pods pause testing-label- --namespace=e2e-tests-kubectl-dbbjm' +Jun 23 23:03:21.670: INFO: stderr: "" +Jun 23 23:03:21.670: INFO: stdout: "pod/pause labeled\n" +STEP: verifying the pod doesn't have the label testing-label +Jun 23 23:03:21.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pod pause -L testing-label --namespace=e2e-tests-kubectl-dbbjm' +Jun 23 23:03:21.812: INFO: stderr: "" +Jun 23 23:03:21.812: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" +[AfterEach] [k8s.io] Kubectl label + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1059 +STEP: using delete to clean up resources +Jun 23 23:03:21.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 delete --grace-period=0 --force -f - --namespace=e2e-tests-kubectl-dbbjm' +Jun 23 23:03:21.958: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" +Jun 23 23:03:21.958: INFO: stdout: "pod \"pause\" force deleted\n" +Jun 23 23:03:21.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get rc,svc -l name=pause --no-headers --namespace=e2e-tests-kubectl-dbbjm' +Jun 23 23:03:22.100: INFO: stderr: "No resources found.\n" +Jun 23 23:03:22.100: INFO: stdout: "" +Jun 23 23:03:22.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-365229432 get pods -l name=pause --namespace=e2e-tests-kubectl-dbbjm -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' +Jun 23 23:03:22.245: INFO: stderr: "" +Jun 23 23:03:22.245: INFO: stdout: "" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154 +Jun 23 23:03:22.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "e2e-tests-kubectl-dbbjm" for this suite. 
+Jun 23 23:03:28.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 23 23:03:28.292: INFO: namespace: e2e-tests-kubectl-dbbjm, resource: bindings, ignored listing per whitelist
+Jun 23 23:03:28.340: INFO: namespace e2e-tests-kubectl-dbbjm deletion completed in 6.09157374s
+
+• [SLOW TEST:11.911 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:22
+  [k8s.io] Kubectl label
+  /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
+    should update the label on a resource  [Conformance]
+    /workspace/anago-v1.13.0-rc.2.1+ddf47ac13c1a94/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
+------------------------------
+Jun 23 23:03:28.341: INFO: Running AfterSuite actions on all nodes
+Jun 23 23:03:28.341: INFO: Running AfterSuite actions on node 1
+Jun 23 23:03:28.341: INFO: Skipping dumping logs from cluster
+
+Ran 200 of 1946 Specs in 6667.070 seconds
+SUCCESS! -- 200 Passed | 0 Failed | 0 Pending | 1746 Skipped PASS
+
+Ginkgo ran 1 suite in 1h51m8.115536854s
+Test Suite Passed
diff --git a/v1.13/snaps-kubernetes/junit_01.xml b/v1.13/snaps-kubernetes/junit_01.xml
new file mode 100644
index 0000000000..d456acc509
--- /dev/null
+++ b/v1.13/snaps-kubernetes/junit_01.xml
@@ -0,0 +1,5441 @@
\ No newline at end of file
diff --git a/v1.14/snaps-kubernetes/PRODUCT.yaml b/v1.14/snaps-kubernetes/PRODUCT.yaml
new file mode 100644
index 0000000000..a12d07cd9a
--- /dev/null
+++ b/v1.14/snaps-kubernetes/PRODUCT.yaml
@@ -0,0 +1,8 @@
+vendor: CableLabs
+name: SNAPS-Kubernetes
+version: v1.2
+website_url: https://github.com/cablelabs/snaps-kubernetes
+documentation_url: https://github.com/cablelabs/snaps-kubernetes/blob/master/doc/source/install/install.md
+type: installer
+description: 'An installation tool to install Kubernetes on Linux machines that have been initialized with SNAPS-Boot.'
+product_logo_url: https://brandfolder.com/cablelabs/attachments/oozlse-fpkc80-8r1zme?dl=true&resource_key=oopwe3-ac5uj4-gf3qau&resource_type=Brandfolder
\ No newline at end of file
diff --git a/v1.14/snaps-kubernetes/README.md b/v1.14/snaps-kubernetes/README.md
new file mode 100644
index 0000000000..9e702b141a
--- /dev/null
+++ b/v1.14/snaps-kubernetes/README.md
@@ -0,0 +1,825 @@
+# Installation
+
+This document serves as a user guide specifying the steps/actions a user must
+perform to bring up a Kubernetes cluster using SNAPS-Kubernetes. The document
+also gives an overview of the deployment architecture, and the hardware and software
+requirements that must be fulfilled to bring up a Kubernetes cluster.
+
+This document covers:
+
+- High level overview of the SNAPS-Kubernetes components
+- Provisioning of various configuration yaml files
+- Deployment of the SNAPS-Kubernetes environment
+
+The intended audience of this document includes the following:
+
+- Users involved in the deployment, maintenance and testing of SNAPS-Kubernetes
+- Users interested in deploying a Kubernetes cluster with basic features
+
+## 1 Introduction
+
+### 1.1 Terms and Conventions
+
+The terms and typographical conventions used in this document are listed and
+explained in the table below.
+
+| Convention | Usage |
+| ---------- | ----- |
+| Host Machines | Machines in data centers which are prepared by SNAPS-Kubernetes to serve control plane and data plane services for the Kubernetes cluster. SNAPS-Kubernetes will deploy Kubernetes services on these machines. |
+| Management node | Machine that will run the SNAPS-Kubernetes software. |
+
+### 1.2 Acronyms
+
+The acronyms expanded below are fundamental to the information in this document.
+
+| Acronym | Explanation |
+| ------- | ----------- |
+| PXE | Preboot Execution Environment |
+| IP | Internet Protocol |
+| COTS | Commercial Off the Shelf |
+| DHCP | Dynamic Host Configuration Protocol |
+| TFTP | Trivial FTP |
+| VLAN | Virtual Local Area Network |
+
+## 2 Environment Prerequisites
+
+The current release of SNAPS-Kubernetes requires the following hardware and
+software components.
+
+### 2.1 Hardware Requirements
+
+#### Host Machines
+
+| Hardware Required | Description | Configuration |
+| ----------------- | ----------- | ------------- |
+| Servers with 64-bit Intel/AMD architecture | Commodity Hardware | 16GB RAM, 80+ GB hard disk with 2 network cards. Servers should be network boot enabled. |
+
+#### Management Node
+
+| Hardware Required | Description | Configuration |
+| ----------------- | ----------- | ------------- |
+| Server with 64-bit Intel/AMD architecture | Commodity Hardware | 16GB RAM, 80+ GB hard disk with 1 network card. |
+
+### 2.2 Software Requirements
+
+| Category | Software version |
+| -------- | ---------------- |
+| Operating System | Ubuntu 16.04 |
+| Programming Language | Python 2.7.12 |
+| Automation | Ansible 2.4 or later |
+| Framework | Kubernetes v1.14.3 |
+| Containerization | Docker v17.03 CE |
+
+### 2.3 Network Requirements
+
+- At least one network interface card is required in all the node machines
+- All servers should use the same naming scheme for Ethernet ports. If ports on one of the servers are named eno1, eno2, etc., then ports on the other servers should be named eno1, eno2, etc.
+- All host machines and the Management node should have access to the same networks, of which one must be routed to the Internet.
+- The Management node shall have http/https and ftp proxies configured if the node is behind a corporate firewall.
+
+## 3 Deployment View and Configurations
+
+Project SNAPS-Kubernetes is a Python based framework leveraging
+Ansible playbooks, Kubespray and a workflow engine. To provision your
+baremetal hosts, it is recommended but not required to leverage SNAPS-Boot.
+
+![Deployment and Configuration Overview](https://raw.githubusercontent.com/wiki/cablelabs/snaps-kubernetes/images/install-deploy-config-overview-1.png?token=Al5dreR4VK2dsb7h6D5beMZmWnkZpNNNks5bTmfhwA%3D%3D)
+
+![Deployment and Configuration Workflow](https://raw.githubusercontent.com/wiki/cablelabs/snaps-kubernetes/images/install-deploy-config-workflow-1.png?token=Al5drVkAVPNQfJcPFNezfl1WIVYoJLbAks5bTme3wA%3D%3D)
+
+SNAPS-Kubernetes executes on a server that is responsible for deploying
+the control and compute services on servers running Ubuntu 16.04. The
+two stage deployment is outlined below.
+
+1. Provision nodes with Ubuntu 16.04 and configure the network (see snaps-boot)
+1. Build server setup (snaps-kubernetes)
+   1. Node setup - install prerequisites (i.e. docker-ce 17.03)
+   1. Kubernetes cluster deployment via Kubespray
+   1. Post installation processes such as CNI, node labeling, and metrics server installation
+
+## 4 Kubernetes Cluster Deployment
+
+The user is required to prepare a configuration file (the `k8s-deploy.yaml`
+template described in section 5.1.2), and the file's location will become the -f
+argument to the Python main iaas_launch.py. Please see the configuration
+parameter descriptions below.
+
+### 4.1 Project Configuration
+
+*Required:* Yes
+
+| Parameter | Required | Description |
+| --------- | -------- | ----------- |
+| Project_name | Y | Name of the project (E.g. My_project). Using different project names, a user can install multiple clusters with the same SNAPS-Kubernetes folder on different host machines. |
+| kubespray_branch | N | The name of the CableLabs fork of kubespray (default: 'master'). |
+| Git_branch | Y | Branch to checkout for Kubespray (E.g. master) |
+| Version | Y | Kubernetes version (E.g. v1.14.3) |
+| enable_metrics_server | N | Flag used to enable or disable the Metrics server. Value: True/False (Default: False) |
+| enable_helm | N | Flag used to install Helm. Value: True/False (Default: False) |
+| Exclusive_CPU_alloc_support | N | Should the cluster enforce exclusive CPU allocation. Value: True/False ***Currently not working*** |
+| enable_logging | N | Should the cluster enforce logging. Value: True/False |
+| log_level | N | Log level (fatal/error/warn/info/debug/trace) |
+| logging_port | N | Logging port (e.g. 30011) |
+
+### 4.2 Basic Authentication
+
+Parameters specified here are used to define the access control mechanism for the
+cluster; currently only basic HTTP authentication is supported.
+
+*Required:* Yes
+
+| Parameter | Required | Description |
+| --------- | -------- | ----------- |
+| user_name | N | User name to access the cluster |
+| user_password | N | User password to access the host machine |
+| user_id | N | User id to access the cluster |
+
+Define this set of parameters for each user required to access the cluster.
+
+### 4.3 Node Configuration
+
+Parameters defined here specify the cluster nodes, their roles, SSH access
+credentials and registry access. These fall under the node_configuration tag.
+
+*Required:* Yes
+
+| Parameter | Optionality | Description |
+| --------- | ----------- | ----------- |
+| Host | | Define this set of parameters for each host machine (a separate host section should be defined for each host machine). |
+| &nbsp;&nbsp;Hostname | N | Hostname to be used for the machine. (It should be unique across the cluster) |
+| &nbsp;&nbsp;ip | N | IP of the primary interface (Management Interface, allocated after OS provisioning). |
+| &nbsp;&nbsp;registry_port | N | Registry port of the host/master. Example: “2376 / 4386” |
+| &nbsp;&nbsp;node_type | N | Node type (master, minion). |
+| &nbsp;&nbsp;label_key | N | Define the name for label key. Example: zone |
+| &nbsp;&nbsp;label_value | N | Define the name for label value. Example: master |
+| &nbsp;&nbsp;Password | N | Password of host machine |
+| &nbsp;&nbsp;User | N | User id to access the root user of the host machine |
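+
+For illustration only, a node_configuration entry in `k8s-deploy.yaml` might look like the sketch below; the hostnames, IPs, ports, and credentials are placeholders, and the exact key casing and nesting should be taken from the template shipped with the repository:
+
+```yaml
+node_configuration:
+  - host:
+      hostname: master1          # must be unique across the cluster
+      ip: 10.0.0.11              # management interface IP
+      registry_port: 2376
+      node_type: master
+      label_key: zone
+      label_value: master
+      password: ChangeMe
+      user: root
+  - host:
+      hostname: minion1
+      ip: 10.0.0.12
+      registry_port: 4386
+      node_type: minion
+      label_key: zone
+      label_value: minion
+      password: ChangeMe
+      user: root
+```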
+
+### 4.4 Docker Repository
+
+Parameters defined here control the deployment of the private Docker repository for
+the cluster.
+
+*Required:* Yes
+
+| Parameter | Required | Description |
+| --------- | -------- | ----------- |
+| Ip | N | Server IP to host the private Docker repository |
+| Port | N | Define the registry port. Example: “4000” |
+| password | N | Password of the Docker machine. Example: ChangeMe |
+| User | N | User id to access the host machine. |
+
+### 4.5 Proxies
+
+Parameters defined here specify the proxies to be used for internet access.
+
+*Required:* Yes
+
+| Parameter | Required | Description |
+| --------- | -------- | ----------- |
+| ftp_proxy | Y | Proxy to be used for FTP. (For no proxy: give value as “”) |
+| http_proxy | Y | Proxy to be used for HTTP traffic. (For no proxy: give value as “”) |
+| https_proxy | Y | Proxy to be used for HTTPS traffic. (For no proxy: give value as “”) |
+| no_proxy | N | Comma separated list of IPs of all host machines. Localhost 127.0.0.1 should be included here. |
+
+### 4.6 Persistent Volume
+
+SNAPS-Kubernetes supports 3 approaches to provide storage to container
+workloads.
+
+- Ceph
+- HostPath
+- Rook - A cloud native implementation of Ceph
+
+#### Ceph Volume
+
+***Note: Ceph support is currently broken and may be removed in the near future***
+
+Parameters specified here control the installation of the CEPH process on cluster
+nodes. These nodes define a CEPH cluster, and storage to PODs is provided from
+this cluster. SNAPS-Kubernetes creates a PV and PVC for each set of
+claims_parameters, which can later be consumed by application pods.
+
+*Required:* No
+
+| Parameter | Optionality | Description |
+| --------- | ----------- | ----------- |
+| host | | Define this set of parameters for each host machine. |
+| &nbsp;&nbsp;hostname | Y | Hostname to be used for the machine. (It should be unique across the cluster) |
+| &nbsp;&nbsp;ip | Y | IP of the primary interface |
+| &nbsp;&nbsp;node_type | Y | Node type (ceph_controller/ceph_osd). |
+| &nbsp;&nbsp;password | Y | Password of host machine |
+| &nbsp;&nbsp;user | Y | User id to access the host machine |
+| &nbsp;&nbsp;Ceph_claims | | Define this set only for ceph_controller nodes |
+| &nbsp;&nbsp;&nbsp;&nbsp;claim_parameteres | | User can define multiple claim parameters under a host |
+| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;claim_name | Y | Define name of persistent volume claim. For Ex. "claim2" |
+| &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;storage | Y | Defines storage capacity of persistent volume claim. For Ex. "4Gi" |
+| &nbsp;&nbsp;second_storage | Y | List of OSD storage device. This field should be defined only if Node_type is ceph_osd |
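+
+Since Ceph support is flagged as broken above, treat the following as a structural illustration only; the top-level key, hostnames, IPs, and devices are all assumed placeholders:
+
+```yaml
+# Hypothetical sketch of a Ceph section with one controller and one OSD node.
+ceph_volume:
+  - host:
+      hostname: ceph-ctrl1
+      ip: 10.0.0.21
+      node_type: ceph_controller
+      password: ChangeMe
+      user: root
+      Ceph_claims:
+        - claim_parameteres:
+            claim_name: claim2
+            storage: 4Gi
+  - host:
+      hostname: ceph-osd1
+      ip: 10.0.0.22
+      node_type: ceph_osd
+      password: ChangeMe
+      user: root
+      second_storage:
+        - /dev/sdb    # OSD storage device (placeholder)
+```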
+
+#### Host Volume
+
+Parameters specified here are used to define PVC and PV for the HostPath volume
+type. SNAPS-Kubernetes creates a PV and PVC for each set of claim_parameters,
+which can later be consumed by application pods.
+
+*Required:* Yes
+
+| Parameter | Optionality | Description |
+| --------- | ----------- | ----------- |
+| Host_Volume | | User can define multiple claims under this section |
+| &nbsp;&nbsp;claim_parameteres | | A tag in the yaml file |
+| &nbsp;&nbsp;&nbsp;&nbsp;Claim_name | Y | Define name of persistent volume claim. For Ex. "claim4" |
+| &nbsp;&nbsp;&nbsp;&nbsp;storage | Y | Defines storage capacity of Host volume claim. For Ex. "4Gi" |
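+
+A minimal sketch of a Host_Volume section, assuming the key spellings exactly as listed above (claim names and sizes are placeholders):
+
+```yaml
+Host_Volume:
+  - claim_parameteres:
+      Claim_name: claim4
+      storage: 4Gi
+  - claim_parameteres:
+      Claim_name: claim5
+      storage: 8Gi
+```
+
+SNAPS-Kubernetes would then create one PV/PVC pair per claim listed here.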
+
+#### Rook Volume
+
+Parameters specified here are used to define PV for a Rook volume.
+SNAPS-Kubernetes creates a PV for each volume configured,
+which can later be consumed by application pods.
+
+*Required:* No
+
+| Parameter | Optionality | Description |
+| --------- | ----------- | ----------- |
+| Rook_Volume | no | User can define multiple volumes under this section |
+
+Rook_Volume dictionary list keys:
+
+| Parameter | Optionality | Description |
+| --------- | ----------- | ----------- |
+| name | no | PV name (cannot contain '_' or special characters {'-' ok}) |
+| size | no | The volume size in GB |
+| path | no | The host_path value |
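+
+A minimal Rook_Volume sketch, assuming the three keys above; the name, size, and path values are placeholders:
+
+```yaml
+Rook_Volume:
+  - name: rook-vol1        # no '_' or special characters ('-' is ok)
+    size: 4                # volume size in GB
+    path: /mnt/rook-vol1   # host_path value
+```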
+
+### 4.8 Networks
+
+SNAPS-Kubernetes supports the following 6 solutions for cluster wide networking:
+
+- Weave
+- Flannel
+- Calico
+- MacVlan
+- SRIOV
+- DHCP
+
+Weave, Calico and Flannel provide cluster wide networking and can be used as
+the default networking solution for the cluster. MacVlan and SRIOV on the other hand
+are specific to individual nodes and are installed only on specified nodes.
+
+SNAPS-Kubernetes uses CNI plug-ins to orchestrate these networking solutions.
+
+#### Default Networks
+
+Parameters defined here specify the default networking solution for the
+cluster.
+
+SNAPS-Kubernetes installs the CNI plugin for the network type defined by
+parameter `networking_plugin` and creates a network to be consumed by Kubernetes
+pods. The user can choose weave, flannel or calico as the default networking
+solution.
+
+*Required:* Yes
+
+| Parameter | Required | Description |
+| --------- | -------- | ----------- |
+| networking_plugin | N | Network plugin to be used for default networking. Allowed values are weave, contiv, flannel, calico, cilium (*** does not work***) |
+| service_subnet | N | Subnet to be used for Kubernetes service deployments (E.g. 10.241.0.0/18) |
+| pod_subnet | N | Subnet for pods networking (E.g. 10.241.64.0/18) |
+| network_name | N | Default network to be created by SNAPS-Kubernetes. Note: The name should not contain any capital letter or “_”. |
+| isMaster | N | The default route will point to the primary network. One of the plugins acts as the “Master” plugin and is responsible for configuring the k8s network with Pod interface “eth0”; isMaster should be True for one plugin. Value: true/false |
+
+#### Multus Networks
+
+A Multus networking solution is required to support application pods with more
+than one network interface. It provides a way to group multiple networking
+solutions and invoke them as required by the pods.
+
+SNAPS-Kubernetes supports Multus as a CNI plugin with the following networking
+providers:
+
+- Weave
+- Flannel
+- SRIOV
+- MacVlan
+- DHCP
+
+#### CNI
+
+List of network providers to be used under Multus. The user can define any
+combination of weave, flannel, SRIOV, Macvlan and DHCP.
+
+##### CNI Configuration
+
+Parameters defined here specify the network subnet, gateway, range and other
+intrinsic network parameters.
+
+> **Note:** User must provide configuration parameters for each network provider specified under the CNI tag (mentioned above).
+
+#### Flannel
+
+***Flannel is currently broken and may compromise the integrity of your cluster***
+
+Define this section when Flannel is included under Multus.
+
+*Required:* Yes
+
+| Parameter | Optionality | Description |
+| --------- | ----------- | ----------- |
+| flannel_networks | | |
+| &nbsp;&nbsp;network_name | N | Name of the network. SNAPS-Kubernetes creates a Flannel network for the cluster with this name. Note: The name should not contain any capital letter or “_”. |
+| &nbsp;&nbsp;network | N | Network range in CIDR format to be used for the entire flannel network. |
+| &nbsp;&nbsp;subnet | N | Subnet range for each node of the cluster. |
+| &nbsp;&nbsp;isMaster | N | The "masterplugin" is the only net conf option of multus cni; it identifies the primary network. The default route will point to the primary network. One of the plugins acts as the “Master” plugin and is responsible for configuring the k8s network with Pod interface “eth0”. Value: true/false |
+
+#### Weave
+
+***Weave is currently broken and may compromise the integrity of your cluster***
+
+Define this section when Weave is included under Multus.
+
+*Required:* Yes
+
+| Parameter | Optionality | Description |
+| --------- | ----------- | ----------- |
+| weave_networks | | |
+| &nbsp;&nbsp;network_name | N | Name of the network. SNAPS-Kubernetes creates a Weave network for the cluster with this name. Note: The name should not contain any capital letter or “_”. |
+| &nbsp;&nbsp;subnet | N | Define the subnet for the network. |
+| &nbsp;&nbsp;isMaster | N | The "masterplugin" is the only net conf option of multus cni; it identifies the primary network. The default route will point to the primary network. One of the plugins acts as the “Master” plugin and is responsible for configuring the k8s network with Pod interface “eth0”. Value: true/false |
+
+#### DHCP
+
+No configuration required. When the DHCP CNI is given, SNAPS-Kubernetes configures
+DHCP services on each node and facilitates dynamic IP allocation via an external
+DHCP server.
+
+#### Macvlan
+
+***This CNI option is being exercised and validated in CI***
+
+Define this section when Macvlan is included under Multus.
+
+The user should define this set of parameters for each host where a Macvlan network is to be created.
+
+*Required:* Yes
+
+| Parameter | Optionality | Description |
+| --------- | ----------- | ----------- |
+| macvlan_networks | | Define this section for each node where the Macvlan network is to be deployed |
+| &nbsp;&nbsp;hostname | N | Hostname of the node where the Macvlan network is to be created |
+| &nbsp;&nbsp;parent_interface | N | Kubernetes creates a VLAN tagged interface for the Macvlan network. The tagged interface is created from the interface name defined here. |
+| &nbsp;&nbsp;vlanid | N | VLAN id of the network |
+| &nbsp;&nbsp;ip | N | IP to be assigned to the VLAN tagged interface. SNAPS-Kubernetes creates a separate VLAN tagged interface to be used as the primary interface for the Macvlan network. |
+| &nbsp;&nbsp;network_name | N | This field defines the macvlan network name. Note: The name should not contain any capital letter or "_" |
+| &nbsp;&nbsp;master | N | Use field parent_interface followed by vlanid with a dot in between (parent_interface.vlanid). |
+| &nbsp;&nbsp;type | N | host-local or dhcp. If dhcp is used, SNAPS-Kubernetes configures this network to ask for IPs from an external DHCP server. If host-local is used, SNAPS-Kubernetes configures this network to ask for IPs from IPAM. |
+| &nbsp;&nbsp;rangeStart | N | First IP of the network range to be used for the Macvlan network (not required in case type is dhcp). |
+| &nbsp;&nbsp;rangeEnd | N | Last IP of the network range to be used for the Macvlan network (not required in case type is dhcp). |
+| &nbsp;&nbsp;gateway | N | Define the gateway |
+| &nbsp;&nbsp;routes_dst | N | Use value 0.0.0.0/0 (not required in case type is dhcp). |
+| &nbsp;&nbsp;subnet | N | Define the subnet for the network in CIDR format (not required in case type is dhcp). |
+| &nbsp;&nbsp;isMaster | N | The "masterplugin" is the only net conf option of multus cni; it identifies the primary network. The default route will point to the primary network. One of the plugins acts as the “Master” plugin and is responsible for configuring the k8s network with Pod interface “eth0”. Value: true/false |
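+
+A hypothetical macvlan_networks entry for one node, combining the parameters above; the interface names, VLAN id, and IP ranges are placeholders:
+
+```yaml
+macvlan_networks:
+  - hostname: minion1
+    parent_interface: eno2
+    vlanid: 100
+    ip: 10.1.100.2
+    network_name: macvlan100-net     # lowercase, no '_'
+    master: eno2.100                 # parent_interface.vlanid
+    type: host-local                 # with 'dhcp', the range/subnet keys are dropped
+    rangeStart: 10.1.100.100
+    rangeEnd: 10.1.100.200
+    gateway: 10.1.100.1
+    routes_dst: 0.0.0.0/0
+    subnet: 10.1.100.0/24
+    isMaster: false
+```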
+
+#### SRIOV
+
+***SRIOV is currently untested and should be used with caution***
+
+Define this section when SRIOV is included under Multus.
+
+*Required:* Yes
+
ParameterOptionalityDescription
hostDefine these set of parameters for each node where SRIOV network is to be deployed
+ hostnameHostname of the node
+ networksDefine these set of parameters for each SRIOV network be deployed on the host. User can create multiple network on the same host.
+ + network_nameNName of the SRIOV network.
+ + sriov_intfNName of the physical interface to be used for SRIOV network (the network adaptor should be SRIOV capable).
+ + typeNhost-local or dhcp. If dhcp used, SNAPS-Kubernetes configures this network to ask IPs from external DHCP server. If local-host used, SNAPS-Kubernetes configures this network to ask IPs from IPAM.
+ + rangeStartNFirst IP of the network range to be used for Macvlan network (Not required in case type is dhcp).
+ + rangeEndNLast IP of the network range to be used for Macvlan network (Not required in case type is dhcp).
+ + sriov_gatewayNDefine the Gateway
+ + sriov_subnetNDefine the IP subnet for the SRIOV network.
+ + isMasterNThe "masterplugin" is the only net conf option of multus cni, it identifies the primary network. The default route will point to the primary network One of the plugin acts as a “Master” plugin and responsible for configuring k8s network with Pod interface “eth0”. Value: true/false
+ + dpdk_enableYEnable or disable the dpdk.
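+
+Since SRIOV is untested, the following is only a rough shape for a host entry,
+assuming the YAML mirrors the parameter names above; verify against your
+`k8s-deploy.yaml` before use:
+
+```yaml
+# Rough shape only -- SRIOV support is untested; key names come from
+# the table above, nesting and values are assumptions.
+host:
+  hostname: minion02
+  networks:
+    - network_name: sriov-net1
+      sriov_intf: eno3               # must be an SRIOV-capable adapter
+      type: host-local               # or dhcp
+      rangeStart: 172.16.200.20
+      rangeEnd: 172.16.200.60
+      sriov_gateway: 172.16.200.1
+      sriov_subnet: 172.16.200.0/24
+      isMaster: false
+      dpdk_enable: false
+```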
+
+## 5 Installation Steps
+
+### 5.1 Kubernetes Cluster Deployment
+
+#### 5.1.1 Obtain snaps-kubernetes
+
+Clone snaps-kubernetes:
+```Shell
+git clone https://github.com/cablelabs/snaps-kubernetes
+```
+
+#### 5.1.2 Configuration
+
+Go to directory `{git directory}/snaps-kubernetes/snaps_k8s`
+
+Modify the file `k8s-deploy.yaml` to provision the Kubernetes nodes on the
+cluster host machines (Master/etcd and minion). Adjust this file to match
+your environment. Refer to section 4 for parameter descriptions.
+
+#### 5.1.3 Installation
+
+Ensure the build server has Python 2.7 and python-pip installed. The user
+account executing iaas_launch.py must have passwordless sudo access on the
+build server and must have its ~/.ssh/id_rsa.pub injected into the 'root'
+user of each host machine.
+
+Set up the Python runtime (note: it is recommended to use a virtual
+Python environment, especially if the build server also performs functions
+other than simply executing snaps-kubernetes):
+
+```Shell
+pip install -r {path_to_repo}/requirements-git.txt
+pip install -e {path_to_repo}
+```
+
+Ensure all host machines have Python and SSH installed (i.e.
+apt-get install -y python python-pip); this should already be the case if
+snaps-boot performed the initial setup.
+
+Run `iaas_launch.py` as shown below:
+
+```Shell
+python {path_to_repo}/iaas_launch.py -f {absolute or relative path}/k8s-deploy.yaml -k8_d
+```
+
+This installs the Kubernetes services on the host machines. The
+installation typically completes in about 60 minutes.
+
+> Note: if installation fails with the error “FAILED - RETRYING: container_download | Download containers if pull is required or told to always pull (all nodes) (4 retries left).”, please check your internet connection.
+
+kubectl will also be installed on the bootstrap node.
+
+After cluster installation, to run kubectl commands on the bootstrap
+node, run:
+
+```Shell
+export KUBECONFIG={project artifact dir}/node-kubeconfig.yaml
+```
+
+### 5.2 Cleanup Kubernetes Cluster
+
+Use these steps to clean up an existing cluster.
+
+Go to directory `~/snaps-kubernetes`
+
+Clean up the previous Kubernetes deployment:
+
+```Shell
+python iaas_launch.py -f snaps_k8s/k8s-deploy.yaml -k8_c
+```
diff --git a/v1.14/snaps-kubernetes/e2e.log b/v1.14/snaps-kubernetes/e2e.log
new file mode 100644
index 0000000000..0f79b2c8b9
--- /dev/null
+++ b/v1.14/snaps-kubernetes/e2e.log
@@ -0,0 +1,10939 @@
+I0624 15:32:03.543156 20 test_context.go:405] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-766262415
+I0624 15:32:03.543276 20 e2e.go:240] Starting e2e run "368df000-9695-11e9-8bcb-526dc0a539dd" on Ginkgo node 1
+Running Suite: Kubernetes e2e suite
+===================================
+Random Seed: 1561390322 - Will randomize all specs
+Will run 204 of 3585 specs
+
+Jun 24 15:32:03.741: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 15:32:03.743: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
+Jun 24 15:32:03.759: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
+Jun 24 15:32:03.807: INFO: 14 / 14 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
+Jun 24 15:32:03.807: INFO: expected 4 pod replicas in namespace 'kube-system', 4 are Running and Ready.
+Jun 24 15:32:03.807: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start +Jun 24 15:32:03.817: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) +Jun 24 15:32:03.817: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'nodelocaldns' (0 seconds elapsed) +Jun 24 15:32:03.817: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed) +Jun 24 15:32:03.817: INFO: e2e test version: v1.14.3 +Jun 24 15:32:03.818: INFO: kube-apiserver version: v1.14.3 +SSS +------------------------------ +[sig-network] DNS + should provide DNS for the cluster [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] DNS + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:32:03.819: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename dns +Jun 24 15:32:03.873: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for the cluster [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7895.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7895.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Jun 24 15:32:15.950: INFO: DNS probes using dns-7895/dns-test-379bab8b-9695-11e9-8bcb-526dc0a539dd succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:32:16.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-7895" for this suite. +Jun 24 15:32:24.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:32:24.198: INFO: namespace dns-7895 deletion completed in 8.122404887s + +• [SLOW TEST:20.379 seconds] +[sig-network] DNS +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide DNS for the cluster [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:32:24.198: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-map-43bcd7dd-9695-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume configMaps +Jun 24 15:32:24.237: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-43bd3eb7-9695-11e9-8bcb-526dc0a539dd" in namespace "projected-9514" to be "success or failure" +Jun 24 15:32:24.241: INFO: Pod "pod-projected-configmaps-43bd3eb7-9695-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046003ms +Jun 24 15:32:26.245: INFO: Pod "pod-projected-configmaps-43bd3eb7-9695-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00829243s +Jun 24 15:32:28.626: INFO: Pod "pod-projected-configmaps-43bd3eb7-9695-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.389816629s +STEP: Saw pod success +Jun 24 15:32:28.627: INFO: Pod "pod-projected-configmaps-43bd3eb7-9695-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 15:32:28.631: INFO: Trying to get logs from node minion pod pod-projected-configmaps-43bd3eb7-9695-11e9-8bcb-526dc0a539dd container projected-configmap-volume-test: +STEP: delete the pod +Jun 24 15:32:28.696: INFO: Waiting for pod pod-projected-configmaps-43bd3eb7-9695-11e9-8bcb-526dc0a539dd to disappear +Jun 24 15:32:28.708: INFO: Pod pod-projected-configmaps-43bd3eb7-9695-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:32:28.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9514" for this suite. +Jun 24 15:32:36.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:32:36.812: INFO: namespace projected-9514 deletion completed in 8.10134749s + +• [SLOW TEST:12.614 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] [sig-node] PreStop + should call prestop when killing a pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:32:36.813: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename prestop +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 +[It] should call prestop when killing a pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating server pod server in namespace prestop-9603 +STEP: Waiting for pods to come up. +STEP: Creating tester pod tester in namespace prestop-9603 +STEP: Deleting pre-stop pod +Jun 24 15:32:49.895: INFO: Saw: { + "Hostname": "server", + "Sent": null, + "Received": { + "prestop": 1 + }, + "Errors": null, + "Log": [ + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", + "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
+ ], + "StillContactingPeers": true +} +STEP: Deleting the server pod +[AfterEach] [k8s.io] [sig-node] PreStop + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:32:49.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "prestop-9603" for this suite. +Jun 24 15:33:27.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:33:28.021: INFO: namespace prestop-9603 deletion completed in 38.114780397s + +• [SLOW TEST:51.208 seconds] +[k8s.io] [sig-node] PreStop +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should call prestop when killing a pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Networking Granular Checks: Pods + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Networking + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:33:28.025: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename pod-network-test +STEP: Waiting for a default service account to be provisioned in namespace +[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Performing setup for networking test in namespace pod-network-test-3290 +STEP: creating a selector +STEP: Creating the service pods in kubernetes +Jun 24 15:33:28.058: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable +STEP: Creating test pods +Jun 24 15:33:50.133: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.251.128.5 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3290 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} +Jun 24 15:33:50.134: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +Jun 24 15:33:51.297: INFO: Found all expected endpoints: [netserver-0] +[AfterEach] [sig-network] Networking + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:33:51.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pod-network-test-3290" for this suite. 
+Jun 24 15:34:13.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:34:13.415: INFO: namespace pod-network-test-3290 deletion completed in 22.113151957s + +• [SLOW TEST:45.389 seconds] +[sig-network] Networking +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 + Granular Checks: Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 + should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:34:13.418: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-84d7adee-9695-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume configMaps +Jun 24 15:34:13.464: INFO: Waiting up to 5m0s for pod "pod-configmaps-84d8723f-9695-11e9-8bcb-526dc0a539dd" in namespace "configmap-2979" to be "success or failure" +Jun 24 15:34:13.471: INFO: Pod "pod-configmaps-84d8723f-9695-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.54126ms +Jun 24 15:34:15.481: INFO: Pod "pod-configmaps-84d8723f-9695-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017137834s +STEP: Saw pod success +Jun 24 15:34:15.481: INFO: Pod "pod-configmaps-84d8723f-9695-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 15:34:15.483: INFO: Trying to get logs from node minion pod pod-configmaps-84d8723f-9695-11e9-8bcb-526dc0a539dd container configmap-volume-test: +STEP: delete the pod +Jun 24 15:34:15.513: INFO: Waiting for pod pod-configmaps-84d8723f-9695-11e9-8bcb-526dc0a539dd to disappear +Jun 24 15:34:15.517: INFO: Pod pod-configmaps-84d8723f-9695-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:34:15.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2979" for this suite. 
+Jun 24 15:34:21.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:34:21.636: INFO: namespace configmap-2979 deletion completed in 6.115165037s + +• [SLOW TEST:8.218 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[k8s.io] Docker Containers + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:34:21.639: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test use defaults +Jun 24 15:34:21.675: INFO: Waiting up to 5m0s for pod "client-containers-89bd81cf-9695-11e9-8bcb-526dc0a539dd" in namespace "containers-522" to be "success or failure" +Jun 24 15:34:21.684: INFO: Pod "client-containers-89bd81cf-9695-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.625831ms +Jun 24 15:34:23.687: INFO: Pod "client-containers-89bd81cf-9695-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01219794s +Jun 24 15:34:25.691: INFO: Pod "client-containers-89bd81cf-9695-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016161513s +STEP: Saw pod success +Jun 24 15:34:25.691: INFO: Pod "client-containers-89bd81cf-9695-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 15:34:25.695: INFO: Trying to get logs from node minion pod client-containers-89bd81cf-9695-11e9-8bcb-526dc0a539dd container test-container: +STEP: delete the pod +Jun 24 15:34:25.734: INFO: Waiting for pod client-containers-89bd81cf-9695-11e9-8bcb-526dc0a539dd to disappear +Jun 24 15:34:25.741: INFO: Pod client-containers-89bd81cf-9695-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:34:25.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-522" for this suite. 
+Jun 24 15:34:31.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:34:31.862: INFO: namespace containers-522 deletion completed in 6.116368709s + +• [SLOW TEST:10.223 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should use the image defaults if command and args are blank [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:34:31.862: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test substitution in container's command +Jun 24 15:34:31.911: INFO: Waiting up to 5m0s for pod "var-expansion-8fd6fa7a-9695-11e9-8bcb-526dc0a539dd" in namespace "var-expansion-848" to be "success or failure" +Jun 24 15:34:31.914: INFO: Pod "var-expansion-8fd6fa7a-9695-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.405233ms +Jun 24 15:34:33.919: INFO: Pod "var-expansion-8fd6fa7a-9695-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00788514s +STEP: Saw pod success +Jun 24 15:34:33.919: INFO: Pod "var-expansion-8fd6fa7a-9695-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 15:34:33.923: INFO: Trying to get logs from node minion pod var-expansion-8fd6fa7a-9695-11e9-8bcb-526dc0a539dd container dapi-container: +STEP: delete the pod +Jun 24 15:34:33.947: INFO: Waiting for pod var-expansion-8fd6fa7a-9695-11e9-8bcb-526dc0a539dd to disappear +Jun 24 15:34:33.951: INFO: Pod var-expansion-8fd6fa7a-9695-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:34:33.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-848" for this suite. 
+Jun 24 15:34:39.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:34:40.052: INFO: namespace var-expansion-848 deletion completed in 6.098501283s + +• [SLOW TEST:8.191 seconds] +[k8s.io] Variable Expansion +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should allow substituting values in a container's command [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] Projected downwardAPI + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:34:40.054: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 24 15:34:40.099: INFO: Waiting up to 5m0s for pod "downwardapi-volume-94b8501e-9695-11e9-8bcb-526dc0a539dd" in namespace "projected-8902" to be "success or failure" +Jun 24 15:34:40.119: INFO: Pod "downwardapi-volume-94b8501e-9695-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 20.661716ms +Jun 24 15:34:42.125: INFO: Pod "downwardapi-volume-94b8501e-9695-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026175199s +STEP: Saw pod success +Jun 24 15:34:42.125: INFO: Pod "downwardapi-volume-94b8501e-9695-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 15:34:42.128: INFO: Trying to get logs from node minion pod downwardapi-volume-94b8501e-9695-11e9-8bcb-526dc0a539dd container client-container: +STEP: delete the pod +Jun 24 15:34:42.153: INFO: Waiting for pod downwardapi-volume-94b8501e-9695-11e9-8bcb-526dc0a539dd to disappear +Jun 24 15:34:42.155: INFO: Pod downwardapi-volume-94b8501e-9695-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:34:42.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-8902" for this suite. 
+Jun 24 15:34:48.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:34:48.257: INFO: namespace projected-8902 deletion completed in 6.098623671s + +• [SLOW TEST:8.203 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:34:48.257: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 24 15:34:48.295: INFO: Waiting up to 5m0s for pod "downwardapi-volume-999b425b-9695-11e9-8bcb-526dc0a539dd" in namespace "downward-api-4463" to be "success or failure" +Jun 24 15:34:48.300: INFO: Pod "downwardapi-volume-999b425b-9695-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.10714ms +Jun 24 15:34:50.305: INFO: Pod "downwardapi-volume-999b425b-9695-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010478835s +STEP: Saw pod success +Jun 24 15:34:50.305: INFO: Pod "downwardapi-volume-999b425b-9695-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 15:34:50.310: INFO: Trying to get logs from node minion pod downwardapi-volume-999b425b-9695-11e9-8bcb-526dc0a539dd container client-container: +STEP: delete the pod +Jun 24 15:34:50.333: INFO: Waiting for pod downwardapi-volume-999b425b-9695-11e9-8bcb-526dc0a539dd to disappear +Jun 24 15:34:50.336: INFO: Pod downwardapi-volume-999b425b-9695-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:34:50.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4463" for this suite. 
+Jun 24 15:34:56.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:34:56.445: INFO: namespace downward-api-4463 deletion completed in 6.106112191s + +• [SLOW TEST:8.188 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:34:56.445: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: modifying the configmap a second time +STEP: deleting the configmap +STEP: creating a watch on configmaps from the resource version returned by the first update +STEP: Expecting to observe notifications for all changes to the configmap after the first update +Jun 24 15:34:56.506: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7852,SelfLink:/api/v1/namespaces/watch-7852/configmaps/e2e-watch-test-resource-version,UID:9e7c9280-9695-11e9-b70d-fa163ef83c94,ResourceVersion:1666,Generation:0,CreationTimestamp:2019-06-24 15:34:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 24 15:34:56.506: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7852,SelfLink:/api/v1/namespaces/watch-7852/configmaps/e2e-watch-test-resource-version,UID:9e7c9280-9695-11e9-b70d-fa163ef83c94,ResourceVersion:1667,Generation:0,CreationTimestamp:2019-06-24 15:34:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:34:56.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-7852" for this suite. +Jun 24 15:35:02.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:35:02.599: INFO: namespace watch-7852 deletion completed in 6.090505769s + +• [SLOW TEST:6.154 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should be able to start watching from a specific resource version [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[sig-node] Downward API + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:35:02.599: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward api env vars +Jun 24 15:35:02.653: INFO: Waiting up to 5m0s for pod "downward-api-a2284c70-9695-11e9-8bcb-526dc0a539dd" in namespace "downward-api-2161" to be "success or failure" +Jun 24 15:35:02.658: INFO: Pod "downward-api-a2284c70-9695-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.032032ms +Jun 24 15:35:04.662: INFO: Pod "downward-api-a2284c70-9695-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009171578s +STEP: Saw pod success +Jun 24 15:35:04.662: INFO: Pod "downward-api-a2284c70-9695-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 15:35:04.666: INFO: Trying to get logs from node minion pod downward-api-a2284c70-9695-11e9-8bcb-526dc0a539dd container dapi-container: +STEP: delete the pod +Jun 24 15:35:04.695: INFO: Waiting for pod downward-api-a2284c70-9695-11e9-8bcb-526dc0a539dd to disappear +Jun 24 15:35:04.699: INFO: Pod downward-api-a2284c70-9695-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:35:04.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-2161" for this suite. 
+Jun 24 15:35:10.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:35:10.813: INFO: namespace downward-api-2161 deletion completed in 6.109942723s + +• [SLOW TEST:8.214 seconds] +[sig-node] Downward API +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide pod UID as env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:35:10.814: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap configmap-8674/configmap-test-a70f1f5d-9695-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume configMaps +Jun 24 15:35:10.870: INFO: Waiting up to 5m0s for pod "pod-configmaps-a70fa11f-9695-11e9-8bcb-526dc0a539dd" in namespace "configmap-8674" to be "success or failure" +Jun 24 15:35:10.878: INFO: Pod "pod-configmaps-a70fa11f-9695-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.315749ms +Jun 24 15:35:12.882: INFO: Pod "pod-configmaps-a70fa11f-9695-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012664044s +STEP: Saw pod success +Jun 24 15:35:12.882: INFO: Pod "pod-configmaps-a70fa11f-9695-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 15:35:12.886: INFO: Trying to get logs from node minion pod pod-configmaps-a70fa11f-9695-11e9-8bcb-526dc0a539dd container env-test: +STEP: delete the pod +Jun 24 15:35:12.908: INFO: Waiting for pod pod-configmaps-a70fa11f-9695-11e9-8bcb-526dc0a539dd to disappear +Jun 24 15:35:12.910: INFO: Pod pod-configmaps-a70fa11f-9695-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:35:12.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8674" for this suite. 
+Jun 24 15:35:18.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:35:19.023: INFO: namespace configmap-8674 deletion completed in 6.108714399s + +• [SLOW TEST:8.209 seconds] +[sig-node] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32 + should be consumable via environment variable [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:35:19.024: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 15:35:19.090: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"abf5aef2-9695-11e9-b70d-fa163ef83c94", Controller:(*bool)(0xc0028915be), BlockOwnerDeletion:(*bool)(0xc0028915bf)}} +Jun 24 15:35:19.099: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"abf38bd8-9695-11e9-b70d-fa163ef83c94", Controller:(*bool)(0xc002891786), BlockOwnerDeletion:(*bool)(0xc002891787)}} +Jun 24 15:35:19.105: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"abf478a0-9695-11e9-b70d-fa163ef83c94", Controller:(*bool)(0xc002abc576), BlockOwnerDeletion:(*bool)(0xc002abc577)}} +[AfterEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:35:24.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-1802" for this suite. 
+Jun 24 15:35:30.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:35:30.225: INFO: namespace gc-1802 deletion completed in 6.107074393s + +• [SLOW TEST:11.201 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should not be blocked by dependency circle [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:35:30.227: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-b2a1670b-9695-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume configMaps +Jun 24 15:35:30.282: INFO: Waiting up to 5m0s for pod "pod-configmaps-b2a1d426-9695-11e9-8bcb-526dc0a539dd" in namespace "configmap-50" to be "success or failure" +Jun 24 15:35:30.287: INFO: Pod "pod-configmaps-b2a1d426-9695-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722655ms +Jun 24 15:35:32.298: INFO: Pod "pod-configmaps-b2a1d426-9695-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015911803s +Jun 24 15:35:34.303: INFO: Pod "pod-configmaps-b2a1d426-9695-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020175486s +STEP: Saw pod success +Jun 24 15:35:34.303: INFO: Pod "pod-configmaps-b2a1d426-9695-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 15:35:34.306: INFO: Trying to get logs from node minion pod pod-configmaps-b2a1d426-9695-11e9-8bcb-526dc0a539dd container configmap-volume-test: +STEP: delete the pod +Jun 24 15:35:34.331: INFO: Waiting for pod pod-configmaps-b2a1d426-9695-11e9-8bcb-526dc0a539dd to disappear +Jun 24 15:35:34.334: INFO: Pod pod-configmaps-b2a1d426-9695-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:35:34.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-50" for this suite. 
+Jun 24 15:35:40.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:35:40.459: INFO: namespace configmap-50 deletion completed in 6.122308265s + +• [SLOW TEST:10.233 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + Burst scaling should run to completion even with unhealthy pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:35:40.461: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace statefulset-9511 +[It] Burst scaling should run to completion even with unhealthy pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating stateful set ss in namespace statefulset-9511 +STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9511 +Jun 24 15:35:40.511: INFO: Found 0 stateful pods, waiting for 1 +Jun 24 15:35:50.516: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true +STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod +Jun 24 15:35:50.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 24 15:35:50.804: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 24 15:35:50.804: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 24 15:35:50.804: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 24 15:35:50.808: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true +Jun 24 15:36:00.816: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jun 24 15:36:00.816: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 24 15:36:00.833: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 24 
15:36:00.833: INFO: ss-0 minion Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC }] +Jun 24 15:36:00.833: INFO: +Jun 24 15:36:00.833: INFO: StatefulSet ss has not reached scale 3, at 1 +Jun 24 15:36:01.838: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994728308s +Jun 24 15:36:02.842: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989567682s +Jun 24 15:36:03.847: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985171301s +Jun 24 15:36:04.852: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.980870397s +Jun 24 15:36:05.856: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.975742074s +Jun 24 15:36:06.861: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.971527882s +Jun 24 15:36:07.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.966382288s +Jun 24 15:36:08.870: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.962112159s +Jun 24 15:36:09.874: INFO: Verifying statefulset ss doesn't scale past 3 for another 957.717219ms +STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9511 +Jun 24 15:36:10.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:36:11.155: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 24 15:36:11.155: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 24 15:36:11.155: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 24 15:36:11.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:36:11.430: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Jun 24 15:36:11.430: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 24 15:36:11.430: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 24 15:36:11.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:36:11.700: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" +Jun 24 15:36:11.701: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 24 15:36:11.701: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 24 15:36:11.705: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false +Jun 24 15:36:21.709: INFO: Waiting for pod ss-0 to 
enter Running - Ready=true, currently Running - Ready=true +Jun 24 15:36:21.710: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 24 15:36:21.710: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true +STEP: Scale down will not halt with unhealthy stateful pod +Jun 24 15:36:21.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 24 15:36:21.979: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 24 15:36:21.979: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 24 15:36:21.979: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 24 15:36:21.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 24 15:36:22.247: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 24 15:36:22.247: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 24 15:36:22.247: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 24 15:36:22.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 24 15:36:22.526: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 24 15:36:22.526: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 24 15:36:22.526: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 24 15:36:22.526: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 24 15:36:22.530: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 +Jun 24 15:36:32.538: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false +Jun 24 15:36:32.538: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false +Jun 24 15:36:32.538: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false +Jun 24 15:36:32.549: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 24 15:36:32.549: INFO: ss-0 minion Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC }] +Jun 24 15:36:32.549: INFO: ss-1 minion Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 
15:36:32.550: INFO: ss-2 minion Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:32.550: INFO: +Jun 24 15:36:32.550: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 24 15:36:33.555: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 24 15:36:33.555: INFO: ss-0 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC }] +Jun 24 15:36:33.555: INFO: ss-1 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:33.555: INFO: ss-2 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:33.555: INFO: +Jun 24 15:36:33.555: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 24 15:36:34.560: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 24 15:36:34.560: INFO: ss-0 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC }] +Jun 24 15:36:34.560: INFO: ss-1 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:34.560: INFO: ss-2 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:34.560: INFO: +Jun 24 15:36:34.560: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 24 15:36:35.565: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 24 15:36:35.565: INFO: ss-0 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC }] +Jun 24 15:36:35.565: INFO: ss-1 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:35.565: INFO: ss-2 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:35.565: INFO: +Jun 24 15:36:35.565: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 24 15:36:36.569: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 24 15:36:36.569: INFO: ss-0 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC }] +Jun 24 15:36:36.569: INFO: ss-1 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:36.570: INFO: ss-2 minion Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:36.570: INFO: +Jun 24 15:36:36.570: INFO: StatefulSet ss has not 
reached scale 0, at 3 +Jun 24 15:36:37.574: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 24 15:36:37.574: INFO: ss-0 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC }] +Jun 24 15:36:37.574: INFO: ss-1 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:37.574: INFO: ss-2 minion Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:37.574: INFO: +Jun 24 15:36:37.574: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 24 15:36:38.579: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 24 15:36:38.579: INFO: ss-0 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC }] +Jun 24 15:36:38.579: INFO: ss-1 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:38.580: INFO: ss-2 minion Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:38.580: INFO: +Jun 24 15:36:38.580: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 24 15:36:39.584: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 24 15:36:39.584: INFO: ss-0 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 
UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC }] +Jun 24 15:36:39.584: INFO: ss-1 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:39.584: INFO: ss-2 minion Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:39.584: INFO: +Jun 24 15:36:39.584: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 24 15:36:40.589: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 24 15:36:40.589: INFO: ss-0 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC }] +Jun 24 15:36:40.589: INFO: ss-1 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:40.589: INFO: ss-2 minion Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:40.589: INFO: +Jun 24 15:36:40.589: INFO: StatefulSet ss has not reached scale 0, at 3 +Jun 24 15:36:41.594: INFO: POD NODE PHASE GRACE CONDITIONS +Jun 24 15:36:41.594: INFO: ss-0 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:35:40 +0000 UTC 
}] +Jun 24 15:36:41.594: INFO: ss-1 minion Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:41.594: INFO: ss-2 minion Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:36:00 +0000 UTC }] +Jun 24 15:36:41.594: INFO: +Jun 24 15:36:41.594: INFO: StatefulSet ss has not reached scale 0, at 3 +STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9511 +Jun 24 15:36:42.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:36:42.791: INFO: rc: 1 +Jun 24 15:36:42.791: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") + [] 0xc002da0330 exit status 1 true [0xc003197308 0xc003197358 0xc003197380] [0xc003197308 0xc003197358 0xc003197380] [0xc003197340 0xc003197378] [0x9c00a0 0x9c00a0] 0xc0027d97a0 }: +Command stdout: + +stderr: +error: unable to upgrade connection: container not found ("nginx") + +error: +exit status 1 + +Jun 24 15:36:52.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:36:52.884: INFO: rc: 1 +Jun 24 15:36:52.884: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002da0690 exit status 1 true [0xc003197388 0xc0031973a0 0xc0031973c8] [0xc003197388 0xc0031973a0 0xc0031973c8] [0xc003197398 0xc0031973b0] [0x9c00a0 0x9c00a0] 0xc002d96060 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:37:02.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:37:02.976: INFO: rc: 1 +Jun 24 15:37:02.976: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002da09c0 exit status 1 true 
[0xc0031973e0 0xc0031973f8 0xc003197410] [0xc0031973e0 0xc0031973f8 0xc003197410] [0xc0031973f0 0xc003197408] [0x9c00a0 0x9c00a0] 0xc002d963c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:37:12.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:37:13.066: INFO: rc: 1 +Jun 24 15:37:13.066: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc001a21e60 exit status 1 true [0xc002150e30 0xc002150e60 0xc002150e78] [0xc002150e30 0xc002150e60 0xc002150e78] [0xc002150e50 0xc002150e70] [0x9c00a0 0x9c00a0] 0xc002551380 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:37:23.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:37:23.165: INFO: rc: 1 +Jun 24 15:37:23.165: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002da0d50 exit status 1 true [0xc003197418 0xc003197430 0xc003197470] [0xc003197418 0xc003197430 0xc003197470] [0xc003197428 0xc003197450] [0x9c00a0 0x9c00a0] 0xc002d96780 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:37:33.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:37:33.257: INFO: rc: 1 +Jun 24 15:37:33.257: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002e16300 exit status 1 true [0xc0001720c0 0xc000172190 0xc000172370] [0xc0001720c0 0xc000172190 0xc000172370] [0xc000172148 0xc000172298] [0x9c00a0 0x9c00a0] 0xc002960720 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:37:43.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:37:43.349: INFO: rc: 1 +Jun 24 15:37:43.349: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc00278e300 exit status 1 true [0xc000702070 0xc00042d110 0xc00042d380] [0xc000702070 0xc00042d110 0xc00042d380] [0xc00042d0b8 0xc00042d308] [0x9c00a0 0x9c00a0] 
0xc0027d8600 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:37:53.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:37:53.440: INFO: rc: 1 +Jun 24 15:37:53.440: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc001f6e330 exit status 1 true [0xc00054a088 0xc00054a120 0xc00054a198] [0xc00054a088 0xc00054a120 0xc00054a198] [0xc00054a0a0 0xc00054a188] [0x9c00a0 0x9c00a0] 0xc00231c840 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:38:03.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:38:03.529: INFO: rc: 1 +Jun 24 15:38:03.529: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc001f6e660 exit status 1 true [0xc00054a1a8 0xc00054a208 0xc00054a2a8] [0xc00054a1a8 0xc00054a208 0xc00054a2a8] [0xc00054a1e8 0xc00054a280] [0x9c00a0 0x9c00a0] 0xc00231d440 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:38:13.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:38:13.620: INFO: rc: 1 +Jun 24 15:38:13.620: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc001f6e9c0 exit status 1 true [0xc00054a300 0xc00054a348 0xc00054a380] [0xc00054a300 0xc00054a348 0xc00054a380] [0xc00054a328 0xc00054a370] [0x9c00a0 0x9c00a0] 0xc00231df80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:38:23.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:38:23.713: INFO: rc: 1 +Jun 24 15:38:23.713: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002ef4360 exit status 1 true [0xc0000100b8 0xc0000106c8 0xc0000107e8] [0xc0000100b8 0xc0000106c8 0xc0000107e8] [0xc0000105b0 0xc000010730] [0x9c00a0 0x9c00a0] 0xc00204a960 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 
15:38:33.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:38:33.804: INFO: rc: 1 +Jun 24 15:38:33.804: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002e16660 exit status 1 true [0xc000172380 0xc000172540 0xc000172af8] [0xc000172380 0xc000172540 0xc000172af8] [0xc0001723e0 0xc000172ae8] [0x9c00a0 0x9c00a0] 0xc002960ae0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:38:43.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:38:43.893: INFO: rc: 1 +Jun 24 15:38:43.893: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002ef46c0 exit status 1 true [0xc000010878 0xc000010950 0xc0000109a8] [0xc000010878 0xc000010950 0xc0000109a8] [0xc0000108f8 0xc000010998] [0x9c00a0 0x9c00a0] 0xc001bc8cc0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:38:53.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:38:53.982: INFO: rc: 1 +Jun 24 15:38:53.982: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002ef4ab0 exit status 1 true [0xc0000109c8 0xc000010b00 0xc000010bb0] [0xc0000109c8 0xc000010b00 0xc000010bb0] [0xc000010a88 0xc000010ba0] [0x9c00a0 0x9c00a0] 0xc0019f35c0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:39:03.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:39:04.073: INFO: rc: 1 +Jun 24 15:39:04.073: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002e16990 exit status 1 true [0xc000172b18 0xc000172c78 0xc000172e08] [0xc000172b18 0xc000172c78 0xc000172e08] [0xc000172be0 0xc000172d98] [0x9c00a0 0x9c00a0] 0xc002960e40 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:39:14.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:39:14.172: INFO: rc: 1 +Jun 24 15:39:14.172: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002e16cc0 exit status 1 true [0xc000172e60 0xc000173010 0xc000173158] [0xc000172e60 0xc000173010 0xc000173158] [0xc000172f48 0xc000173108] [0x9c00a0 0x9c00a0] 0xc0029611a0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:39:24.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:39:24.266: INFO: rc: 1 +Jun 24 15:39:24.266: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002ef4e40 exit status 1 true [0xc000010bd0 0xc000010c40 0xc000010c90] [0xc000010bd0 0xc000010c40 0xc000010c90] [0xc000010c10 0xc000010c78] [0x9c00a0 0x9c00a0] 0xc001271f80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:39:34.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:39:34.354: INFO: rc: 1 +Jun 24 15:39:34.354: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002e16330 exit status 1 true [0xc0007021b8 0xc000172100 0xc000172240] [0xc0007021b8 0xc000172100 0xc000172240] [0xc0001720c0 0xc000172190] [0x9c00a0 0x9c00a0] 0xc001bd81e0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:39:44.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:39:44.444: INFO: rc: 1 +Jun 24 15:39:44.444: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002e166c0 exit status 1 true [0xc000172298 0xc000172390 0xc000172a98] [0xc000172298 0xc000172390 0xc000172a98] [0xc000172380 0xc000172540] [0x9c00a0 0x9c00a0] 0xc001bc8180 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:39:54.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:39:54.535: INFO: rc: 1 +Jun 24 15:39:54.535: INFO: Waiting 
10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc00278e330 exit status 1 true [0xc0000100b8 0xc0000106c8 0xc0000107e8] [0xc0000100b8 0xc0000106c8 0xc0000107e8] [0xc0000105b0 0xc000010730] [0x9c00a0 0x9c00a0] 0xc00225e4e0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:40:04.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:40:04.629: INFO: rc: 1 +Jun 24 15:40:04.629: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002ef4330 exit status 1 true [0xc00042d0b8 0xc00042d308 0xc00042d488] [0xc00042d0b8 0xc00042d308 0xc00042d488] [0xc00042d248 0xc00042d478] [0x9c00a0 0x9c00a0] 0xc00231c840 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:40:14.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:40:14.718: INFO: rc: 1 +Jun 24 15:40:14.718: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002ef4750 exit status 1 true [0xc00042d4f0 0xc00042d688 0xc00042d820] [0xc00042d4f0 0xc00042d688 0xc00042d820] [0xc00042d660 0xc00042d7a8] [0x9c00a0 0x9c00a0] 0xc00231d440 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:40:24.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:40:24.807: INFO: rc: 1 +Jun 24 15:40:24.807: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc00278e6c0 exit status 1 true [0xc000010878 0xc000010950 0xc0000109a8] [0xc000010878 0xc000010950 0xc0000109a8] [0xc0000108f8 0xc000010998] [0x9c00a0 0x9c00a0] 0xc002960300 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:40:34.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:40:34.896: INFO: rc: 1 +Jun 24 15:40:34.896: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec 
--namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002e16a20 exit status 1 true [0xc000172ae8 0xc000172b78 0xc000172d50] [0xc000172ae8 0xc000172b78 0xc000172d50] [0xc000172b18 0xc000172c78] [0x9c00a0 0x9c00a0] 0xc0027d8240 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:40:44.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:40:44.985: INFO: rc: 1 +Jun 24 15:40:44.985: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc00278ea20 exit status 1 true [0xc0000109c8 0xc000010b00 0xc000010bb0] [0xc0000109c8 0xc000010b00 0xc000010bb0] [0xc000010a88 0xc000010ba0] [0x9c00a0 0x9c00a0] 0xc002960900 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:40:54.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:40:55.078: INFO: rc: 1 +Jun 24 15:40:55.078: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc002e16de0 exit status 1 true [0xc000172d98 0xc000172e98 0xc000173078] [0xc000172d98 0xc000172e98 0xc000173078] [0xc000172e60 0xc000173010] [0x9c00a0 0x9c00a0] 0xc0027d8a80 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:41:05.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:41:05.183: INFO: rc: 1 +Jun 24 15:41:05.183: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc001f6e360 exit status 1 true [0xc00054a088 0xc00054a120 0xc00054a198] [0xc00054a088 0xc00054a120 0xc00054a198] [0xc00054a0a0 0xc00054a188] [0x9c00a0 0x9c00a0] 0xc001cb9560 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:41:15.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:41:15.276: INFO: rc: 1 +Jun 24 15:41:15.276: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): 
pods "ss-0" not found + [] 0xc002e17260 exit status 1 true [0xc000173108 0xc0001732a8 0xc000173318] [0xc000173108 0xc0001732a8 0xc000173318] [0xc0001731e8 0xc0001732d8] [0x9c00a0 0x9c00a0] 0xc0027d9260 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:41:25.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:41:25.371: INFO: rc: 1 +Jun 24 15:41:25.371: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc00278eea0 exit status 1 true [0xc000010bd0 0xc000010c40 0xc000010c90] [0xc000010bd0 0xc000010c40 0xc000010c90] [0xc000010c10 0xc000010c78] [0x9c00a0 0x9c00a0] 0xc002960c60 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:41:35.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:41:35.466: INFO: rc: 1 +Jun 24 15:41:35.466: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found + [] 0xc001f6e630 exit status 1 true [0xc00042d8f0 0xc00054a1c0 0xc00054a250] [0xc00042d8f0 0xc00054a1c0 0xc00054a250] [0xc00054a1a8 0xc00054a208] [0x9c00a0 0x9c00a0] 0xc001d1cfc0 }: +Command stdout: + +stderr: +Error from server (NotFound): pods "ss-0" not found + +error: +exit status 1 + +Jun 24 15:41:45.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-9511 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 15:41:45.556: INFO: rc: 1 +Jun 24 15:41:45.556: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: +Jun 24 15:41:45.556: INFO: Scaling statefulset ss to 0 +Jun 24 15:41:45.566: INFO: Waiting for statefulset status.replicas updated to 0 +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Jun 24 15:41:45.569: INFO: Deleting all statefulset in ns statefulset-9511 +Jun 24 15:41:45.571: INFO: Scaling statefulset ss to 0 +Jun 24 15:41:45.581: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 24 15:41:45.586: INFO: Deleting statefulset ss +[AfterEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:41:45.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-9511" for this suite. 
+Jun 24 15:41:51.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:41:51.704: INFO: namespace statefulset-9511 deletion completed in 6.105855495s + +• [SLOW TEST:371.243 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + Burst scaling should run to completion even with unhealthy pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:41:51.704: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name projected-secret-test-9600529b-9696-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume secrets +Jun 24 15:41:51.749: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9600ec2e-9696-11e9-8bcb-526dc0a539dd" in namespace "projected-9989" to be "success or failure" +Jun 24 15:41:51.761: INFO: Pod "pod-projected-secrets-9600ec2e-9696-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661467ms +Jun 24 15:41:53.765: INFO: Pod "pod-projected-secrets-9600ec2e-9696-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014724001s +STEP: Saw pod success +Jun 24 15:41:53.765: INFO: Pod "pod-projected-secrets-9600ec2e-9696-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 15:41:53.768: INFO: Trying to get logs from node minion pod pod-projected-secrets-9600ec2e-9696-11e9-8bcb-526dc0a539dd container projected-secret-volume-test: +STEP: delete the pod +Jun 24 15:41:53.795: INFO: Waiting for pod pod-projected-secrets-9600ec2e-9696-11e9-8bcb-526dc0a539dd to disappear +Jun 24 15:41:53.798: INFO: Pod pod-projected-secrets-9600ec2e-9696-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:41:53.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9989" for this suite. 
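+
+(A minimal, self-contained reproduction of the projected-secret volume consumption verified above; the secret name, pod name, and busybox image are illustrative stand-ins for the test's own fixtures, sketched on the assumption of an ordinary namespace.)
+
+kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-projected-secrets-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: projected-secret-volume-test
+    image: busybox
+    command: ["cat", "/etc/projected-secret-volume/data-1"]
+    volumeMounts:
+    - name: projected-secret-volume
+      mountPath: /etc/projected-secret-volume
+  volumes:
+  - name: projected-secret-volume
+    projected:
+      sources:
+      - secret:
+          name: projected-secret-demo
+EOF
+kubectl logs pod-projected-secrets-demo   # prints "value-1" once the pod reaches Succeeded
+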
+Jun 24 15:41:59.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:41:59.902: INFO: namespace projected-9989 deletion completed in 6.101493612s + +• [SLOW TEST:8.198 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if not matching [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:41:59.903: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 +Jun 24 15:41:59.934: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jun 24 15:41:59.941: INFO: Waiting for terminating namespaces to be deleted... 
+Jun 24 15:41:59.943: INFO: +Logging pods the kubelet thinks is on node minion before test +Jun 24 15:41:59.952: INFO: kube-proxy-d8w54 from kube-system started at 2019-06-24 15:29:46 +0000 UTC (1 container statuses recorded) +Jun 24 15:41:59.952: INFO: Container kube-proxy ready: true, restart count 0 +Jun 24 15:41:59.952: INFO: weave-scope-app-5bcb7f46b9-pv6gl from weave started at 2019-06-24 15:30:48 +0000 UTC (1 container statuses recorded) +Jun 24 15:41:59.952: INFO: Container app ready: true, restart count 0 +Jun 24 15:41:59.952: INFO: weave-scope-agent-mmtsr from weave started at 2019-06-24 15:30:48 +0000 UTC (1 container statuses recorded) +Jun 24 15:41:59.952: INFO: Container agent ready: true, restart count 0 +Jun 24 15:41:59.952: INFO: sonobuoy from heptio-sonobuoy started at 2019-06-24 15:31:39 +0000 UTC (1 container statuses recorded) +Jun 24 15:41:59.952: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jun 24 15:41:59.952: INFO: nginx-proxy-minion from kube-system started at (0 container statuses recorded) +Jun 24 15:41:59.952: INFO: weave-net-p4t4q from kube-system started at 2019-06-24 15:29:30 +0000 UTC (2 container statuses recorded) +Jun 24 15:41:59.952: INFO: Container weave ready: true, restart count 0 +Jun 24 15:41:59.952: INFO: Container weave-npc ready: true, restart count 0 +Jun 24 15:41:59.952: INFO: coredns-97c4b444f-9954l from kube-system started at 2019-06-24 15:30:06 +0000 UTC (1 container statuses recorded) +Jun 24 15:41:59.952: INFO: Container coredns ready: true, restart count 0 +Jun 24 15:41:59.952: INFO: nodelocaldns-vmsgk from kube-system started at 2019-06-24 15:30:09 +0000 UTC (1 container statuses recorded) +Jun 24 15:41:59.952: INFO: Container node-cache ready: true, restart count 0 +Jun 24 15:41:59.952: INFO: kubernetes-dashboard-6c7466966c-v95zd from kube-system started at 2019-06-24 15:30:10 +0000 UTC (1 container statuses recorded) +Jun 24 15:41:59.952: INFO: Container kubernetes-dashboard ready: true, restart count 0 +Jun 24 15:41:59.952: INFO: sonobuoy-systemd-logs-daemon-set-7e1461ca4731443f-8ql79 from heptio-sonobuoy started at 2019-06-24 15:31:43 +0000 UTC (2 container statuses recorded) +Jun 24 15:41:59.952: INFO: Container sonobuoy-systemd-logs-config ready: true, restart count 0 +Jun 24 15:41:59.952: INFO: Container sonobuoy-worker ready: true, restart count 0 +[It] validates that NodeSelector is respected if not matching [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Trying to schedule Pod with nonempty NodeSelector. +STEP: Considering event: +Type = [Warning], Name = [restricted-pod.15ab2cc4e43fea22], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:42:00.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-8256" for this suite. 
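+
+(The FailedScheduling outcome above is reproducible with any pod whose nodeSelector matches no node; the label key/value and pause image below are illustrative.)
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: restricted-pod
+spec:
+  nodeSelector:
+    nonexistent-label: nonempty   # no node in this cluster carries the label
+  containers:
+  - name: pause
+    image: k8s.gcr.io/pause:3.1
+EOF
+kubectl describe pod restricted-pod   # Events: FailedScheduling - "0/2 nodes are available: 2 node(s) didn't match node selector."
+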
+Jun 24 15:42:06.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:42:07.091: INFO: namespace sched-pred-8256 deletion completed in 6.1136384s +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 + +• [SLOW TEST:7.188 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 + validates that NodeSelector is respected if not matching [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:42:07.094: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-http in namespace container-probe-4231 +Jun 24 15:42:11.159: INFO: Started pod liveness-http in namespace container-probe-4231 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 24 15:42:11.162: INFO: Initial restart count of pod liveness-http is 0 +Jun 24 15:42:27.198: INFO: Restart count of pod container-probe-4231/liveness-http is now 1 (16.036128064s elapsed) +Jun 24 15:42:47.240: INFO: Restart count of pod container-probe-4231/liveness-http is now 2 (36.077289007s elapsed) +Jun 24 15:43:09.284: INFO: Restart count of pod container-probe-4231/liveness-http is now 3 (58.121753871s elapsed) +Jun 24 15:43:27.321: INFO: Restart count of pod container-probe-4231/liveness-http is now 4 (1m16.158712779s elapsed) +Jun 24 15:44:41.475: INFO: Restart count of pod container-probe-4231/liveness-http is now 5 (2m30.312412572s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:44:41.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-4231" for this suite. 
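+
+(For reference, the canonical liveness-probe example from the Kubernetes documentation exhibits the same monotonically increasing restart count checked above: the k8s.gcr.io/liveness image's /healthz handler deliberately starts returning errors shortly after startup, so the kubelet restarts the container again and again. The probe timings here are illustrative.)
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: liveness-http-demo
+spec:
+  containers:
+  - name: liveness
+    image: k8s.gcr.io/liveness
+    args: ["/server"]
+    livenessProbe:
+      httpGet:
+        path: /healthz
+        port: 8080
+      initialDelaySeconds: 3
+      periodSeconds: 3
+EOF
+kubectl get pod liveness-http-demo -w   # the RESTARTS column only ever increases
+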
+Jun 24 15:44:47.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:44:47.593: INFO: namespace container-probe-4231 deletion completed in 6.102816305s + +• [SLOW TEST:160.499 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should have monotonically increasing restart count [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:44:47.593: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-upd-fed833fc-9696-11e9-8bcb-526dc0a539dd +STEP: Creating the pod +STEP: Waiting for pod with text data +STEP: Waiting for pod with binary data +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 15:44:49.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-7793" for this suite. 
+Jun 24 15:45:11.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 15:45:11.791: INFO: namespace configmap-7793 deletion completed in 22.10457756s + +• [SLOW TEST:24.198 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + binary data should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy logs on node using proxy subresource [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] version v1 + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 15:45:11.792: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy logs on node using proxy subresource [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 15:45:11.853: INFO: (0) /api/v1/nodes/minion/proxy/logs/:
+apt/
+auth.log
+btmp
+>>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[BeforeEach] [k8s.io] Kubectl run pod
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1583
+[It] should create a pod from an image when restart is Never  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Jun 24 15:45:18.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2544'
+Jun 24 15:45:18.717: INFO: stderr: ""
+Jun 24 15:45:18.717: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
+STEP: verifying the pod e2e-test-nginx-pod was created
+[AfterEach] [k8s.io] Kubectl run pod
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1588
+Jun 24 15:45:18.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete pods e2e-test-nginx-pod --namespace=kubectl-2544'
+Jun 24 15:45:26.770: INFO: stderr: ""
+Jun 24 15:45:26.770: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:45:26.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-2544" for this suite.
+Jun 24 15:45:32.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:45:32.874: INFO: namespace kubectl-2544 deletion completed in 6.099398906s
+
+• [SLOW TEST:14.825 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl run pod
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should create a pod from an image when restart is Never  [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
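+
+For readers mapping the imperative command above to a manifest: `kubectl run --restart=Never --generator=run-pod/v1` creates a bare Pod. A minimal sketch of the equivalent object (names mirror the log; the manifest itself is illustrative, not emitted by the suite):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: e2e-test-nginx-pod
+spec:
+  restartPolicy: Never            # from --restart=Never
+  containers:
+  - name: e2e-test-nginx-pod
+    image: docker.io/library/nginx:1.14-alpine
+```
+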
+[sig-storage] EmptyDir volumes 
+  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:45:32.874: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test emptydir 0644 on tmpfs
+Jun 24 15:45:32.914: INFO: Waiting up to 5m0s for pod "pod-19d4601f-9697-11e9-8bcb-526dc0a539dd" in namespace "emptydir-3357" to be "success or failure"
+Jun 24 15:45:32.925: INFO: Pod "pod-19d4601f-9697-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.288644ms
+Jun 24 15:45:34.929: INFO: Pod "pod-19d4601f-9697-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014437656s
+STEP: Saw pod success
+Jun 24 15:45:34.929: INFO: Pod "pod-19d4601f-9697-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:45:34.933: INFO: Trying to get logs from node minion pod pod-19d4601f-9697-11e9-8bcb-526dc0a539dd container test-container: 
+STEP: delete the pod
+Jun 24 15:45:34.966: INFO: Waiting for pod pod-19d4601f-9697-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:45:34.970: INFO: Pod pod-19d4601f-9697-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:45:34.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-3357" for this suite.
+Jun 24 15:45:40.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:45:41.068: INFO: namespace emptydir-3357 deletion completed in 6.094623461s
+
+• [SLOW TEST:8.194 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
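+
+The test above exercises a tmpfs-backed `emptyDir` (`medium: Memory`) with 0644 files written as root. A minimal sketch of that volume shape (pod and container names are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: emptydir-tmpfs-demo       # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["sh", "-c", "ls -l /cache"]
+    volumeMounts:
+    - name: cache
+      mountPath: /cache
+  volumes:
+  - name: cache
+    emptyDir:
+      medium: Memory              # tmpfs, as in the (root,0644,tmpfs) case above
+```
+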
+[sig-network] Networking Granular Checks: Pods 
+  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:45:41.073: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Performing setup for networking test in namespace pod-network-test-2877
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jun 24 15:45:41.113: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Jun 24 15:45:55.174: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.251.128.5:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2877 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 15:45:55.174: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 15:45:55.341: INFO: Found all expected endpoints: [netserver-0]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:45:55.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pod-network-test-2877" for this suite.
+Jun 24 15:46:17.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:46:17.445: INFO: namespace pod-network-test-2877 deletion completed in 22.100115882s
+
+• [SLOW TEST:36.373 seconds]
+[sig-network] Networking
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSS
+------------------------------
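+
+The node-to-pod check above reduces to fetching the netserver's /hostName endpoint from a helper pod on the node's network, as in the `ExecWithOptions` curl line. A sketch of such a probe pod (the pod IP is taken from this run's log and would differ elsewhere):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: net-probe                 # illustrative name
+spec:
+  hostNetwork: true               # run in the node's network namespace
+  restartPolicy: Never
+  containers:
+  - name: probe
+    image: busybox
+    # 8080 is the netserver HTTP port used by this suite; substitute the target pod IP
+    command: ["wget", "-qO-", "http://10.251.128.5:8080/hostName"]
+```
+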
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:46:17.449: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test emptydir 0666 on tmpfs
+Jun 24 15:46:17.491: INFO: Waiting up to 5m0s for pod "pod-346604ed-9697-11e9-8bcb-526dc0a539dd" in namespace "emptydir-1548" to be "success or failure"
+Jun 24 15:46:17.499: INFO: Pod "pod-346604ed-9697-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.280912ms
+Jun 24 15:46:19.503: INFO: Pod "pod-346604ed-9697-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012320463s
+Jun 24 15:46:21.507: INFO: Pod "pod-346604ed-9697-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016318357s
+STEP: Saw pod success
+Jun 24 15:46:21.507: INFO: Pod "pod-346604ed-9697-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:46:21.511: INFO: Trying to get logs from node minion pod pod-346604ed-9697-11e9-8bcb-526dc0a539dd container test-container: 
+STEP: delete the pod
+Jun 24 15:46:21.542: INFO: Waiting for pod pod-346604ed-9697-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:46:21.545: INFO: Pod pod-346604ed-9697-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:46:21.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-1548" for this suite.
+Jun 24 15:46:27.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:46:27.645: INFO: namespace emptydir-1548 deletion completed in 6.089623721s
+
+• [SLOW TEST:10.197 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
+  should create an rc from an image  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:46:27.646: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[BeforeEach] [k8s.io] Kubectl run rc
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354
+[It] should create an rc from an image  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: running the image docker.io/library/nginx:1.14-alpine
+Jun 24 15:46:27.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1773'
+Jun 24 15:46:27.806: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
+Jun 24 15:46:27.806: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
+STEP: verifying the rc e2e-test-nginx-rc was created
+STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
+STEP: confirm that you can get logs from an rc
+Jun 24 15:46:27.817: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-mbtpd]
+Jun 24 15:46:27.817: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-mbtpd" in namespace "kubectl-1773" to be "running and ready"
+Jun 24 15:46:27.820: INFO: Pod "e2e-test-nginx-rc-mbtpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.839639ms
+Jun 24 15:46:29.824: INFO: Pod "e2e-test-nginx-rc-mbtpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006962384s
+Jun 24 15:46:31.828: INFO: Pod "e2e-test-nginx-rc-mbtpd": Phase="Running", Reason="", readiness=true. Elapsed: 4.011082665s
+Jun 24 15:46:31.828: INFO: Pod "e2e-test-nginx-rc-mbtpd" satisfied condition "running and ready"
+Jun 24 15:46:31.829: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-mbtpd]
+Jun 24 15:46:31.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 logs rc/e2e-test-nginx-rc --namespace=kubectl-1773'
+Jun 24 15:46:31.954: INFO: stderr: ""
+Jun 24 15:46:31.954: INFO: stdout: ""
+[AfterEach] [k8s.io] Kubectl run rc
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359
+Jun 24 15:46:31.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete rc e2e-test-nginx-rc --namespace=kubectl-1773'
+Jun 24 15:46:32.057: INFO: stderr: ""
+Jun 24 15:46:32.057: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:46:32.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-1773" for this suite.
+Jun 24 15:46:38.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:46:38.159: INFO: namespace kubectl-1773 deletion completed in 6.098671605s
+
+• [SLOW TEST:10.514 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl run rc
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should create an rc from an image  [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
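+
+The deprecated `--generator=run/v1` invocation above creates a ReplicationController. A sketch of the equivalent object (the `run:` label scheme follows `kubectl run` conventions; the manifest is illustrative):
+
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: e2e-test-nginx-rc
+spec:
+  replicas: 1
+  selector:
+    run: e2e-test-nginx-rc
+  template:
+    metadata:
+      labels:
+        run: e2e-test-nginx-rc
+    spec:
+      containers:
+      - name: e2e-test-nginx-rc
+        image: docker.io/library/nginx:1.14-alpine
+```
+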
+[k8s.io] Probing container 
+  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:46:38.164: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename container-probe
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Probing container
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
+[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating pod liveness-http in namespace container-probe-328
+Jun 24 15:46:42.221: INFO: Started pod liveness-http in namespace container-probe-328
+STEP: checking the pod's current state and verifying that restartCount is present
+Jun 24 15:46:42.224: INFO: Initial restart count of pod liveness-http is 0
+STEP: deleting the pod
+[AfterEach] [k8s.io] Probing container
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:50:42.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-probe-328" for this suite.
+Jun 24 15:50:48.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:50:48.854: INFO: namespace container-probe-328 deletion completed in 6.095846839s
+
+• [SLOW TEST:250.690 seconds]
+[k8s.io] Probing container
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSS
+------------------------------
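+
+This test pairs an HTTP liveness probe with a server whose /healthz keeps answering 200, so the restart count must stay at 0 for the whole observation window. A sketch of that probe wiring (the image is a placeholder and the probe values are illustrative, not the suite's exact settings):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: liveness-http
+spec:
+  containers:
+  - name: liveness
+    # Illustrative placeholder: any image whose /healthz endpoint always returns 200
+    image: example.com/healthy-webserver:latest
+    livenessProbe:
+      httpGet:
+        path: /healthz
+        port: 8080
+      initialDelaySeconds: 15
+      failureThreshold: 3
+```
+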
+[k8s.io] Docker Containers 
+  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:50:48.854: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename containers
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test override all
+Jun 24 15:50:48.898: INFO: Waiting up to 5m0s for pod "client-containers-d62b272e-9697-11e9-8bcb-526dc0a539dd" in namespace "containers-1790" to be "success or failure"
+Jun 24 15:50:48.906: INFO: Pod "client-containers-d62b272e-9697-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.709077ms
+Jun 24 15:50:50.910: INFO: Pod "client-containers-d62b272e-9697-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012674098s
+Jun 24 15:50:52.915: INFO: Pod "client-containers-d62b272e-9697-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016973954s
+STEP: Saw pod success
+Jun 24 15:50:52.915: INFO: Pod "client-containers-d62b272e-9697-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:50:52.918: INFO: Trying to get logs from node minion pod client-containers-d62b272e-9697-11e9-8bcb-526dc0a539dd container test-container: 
+STEP: delete the pod
+Jun 24 15:50:52.949: INFO: Waiting for pod client-containers-d62b272e-9697-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:50:52.952: INFO: Pod client-containers-d62b272e-9697-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [k8s.io] Docker Containers
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:50:52.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "containers-1790" for this suite.
+Jun 24 15:50:58.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:50:59.061: INFO: namespace containers-1790 deletion completed in 6.106319462s
+
+• [SLOW TEST:10.207 seconds]
+[k8s.io] Docker Containers
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSS
+------------------------------
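+
+What "override all" means above: setting both `command` and `args` in the container spec replaces the image's ENTRYPOINT and CMD. A minimal sketch (names illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: command-override-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox
+    command: ["/bin/echo"]             # overrides the image ENTRYPOINT
+    args: ["override", "arguments"]    # overrides the image CMD
+```
+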
+[sig-node] ConfigMap 
+  should fail to create ConfigMap with empty key [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-node] ConfigMap
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:50:59.065: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should fail to create ConfigMap with empty key [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating configMap that has name configmap-test-emptyKey-dc40d08a-9697-11e9-8bcb-526dc0a539dd
+[AfterEach] [sig-node] ConfigMap
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:50:59.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-2094" for this suite.
+Jun 24 15:51:05.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:51:05.221: INFO: namespace configmap-2094 deletion completed in 6.11670035s
+
+• [SLOW TEST:6.156 seconds]
+[sig-node] ConfigMap
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
+  should fail to create ConfigMap with empty key [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
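+
+The object this test submits is invalid by construction: ConfigMap data keys must be non-empty, so the API server rejects it at validation and no namespace-scoped object is ever created. Roughly (name illustrative):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: configmap-empty-key-demo
+data:
+  "": "value"   # empty key: creation fails with a validation error
+```
+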
+[sig-storage] Projected secret 
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:51:05.221: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating projection with secret that has name projected-secret-test-dfecb812-9697-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume secrets
+Jun 24 15:51:05.268: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dfed50a7-9697-11e9-8bcb-526dc0a539dd" in namespace "projected-2235" to be "success or failure"
+Jun 24 15:51:05.277: INFO: Pod "pod-projected-secrets-dfed50a7-9697-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.178639ms
+Jun 24 15:51:07.280: INFO: Pod "pod-projected-secrets-dfed50a7-9697-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012870208s
+Jun 24 15:51:09.285: INFO: Pod "pod-projected-secrets-dfed50a7-9697-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017386476s
+STEP: Saw pod success
+Jun 24 15:51:09.285: INFO: Pod "pod-projected-secrets-dfed50a7-9697-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:51:09.289: INFO: Trying to get logs from node minion pod pod-projected-secrets-dfed50a7-9697-11e9-8bcb-526dc0a539dd container projected-secret-volume-test: 
+STEP: delete the pod
+Jun 24 15:51:09.313: INFO: Waiting for pod pod-projected-secrets-dfed50a7-9697-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:51:09.318: INFO: Pod pod-projected-secrets-dfed50a7-9697-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:51:09.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-2235" for this suite.
+Jun 24 15:51:15.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:51:15.418: INFO: namespace projected-2235 deletion completed in 6.097271207s
+
+• [SLOW TEST:10.197 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSS
+------------------------------
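+
+The mounted files' mode is what this test asserts on. A sketch of a projected secret volume with `defaultMode` set (the secret name and the 0400 mode are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-secret-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: projected-secret-volume-test
+    image: busybox
+    command: ["sh", "-c", "ls -l /etc/projected"]
+    volumeMounts:
+    - name: secrets
+      mountPath: /etc/projected
+      readOnly: true
+  volumes:
+  - name: secrets
+    projected:
+      defaultMode: 0400   # the file mode the test verifies on the mounted entries
+      sources:
+      - secret:
+          name: my-secret # illustrative secret name
+```
+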
+[k8s.io] Container Runtime blackbox test when starting a container that exits 
+  should run with the expected status [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:51:15.419: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename container-runtime
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should run with the expected status [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
+STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
+STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
+STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
+STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
+STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
+STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
+STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
+STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
+STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
+STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
+STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
+STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
+STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
+STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
+[AfterEach] [k8s.io] Container Runtime
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:51:39.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-runtime-5956" for this suite.
+Jun 24 15:51:45.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:51:45.780: INFO: namespace container-runtime-5956 deletion completed in 6.097604615s
+
+• [SLOW TEST:30.361 seconds]
+[k8s.io] Container Runtime
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  blackbox test
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:37
+    when starting a container that exits
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
+      should run with the expected status [NodeConformance] [Conformance]
+      /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-node] Downward API 
+  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:51:45.786: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test downward api env vars
+Jun 24 15:51:45.831: INFO: Waiting up to 5m0s for pod "downward-api-f81adc89-9697-11e9-8bcb-526dc0a539dd" in namespace "downward-api-7603" to be "success or failure"
+Jun 24 15:51:45.838: INFO: Pod "downward-api-f81adc89-9697-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.075675ms
+Jun 24 15:51:47.842: INFO: Pod "downward-api-f81adc89-9697-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01114212s
+STEP: Saw pod success
+Jun 24 15:51:47.843: INFO: Pod "downward-api-f81adc89-9697-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:51:47.846: INFO: Trying to get logs from node minion pod downward-api-f81adc89-9697-11e9-8bcb-526dc0a539dd container dapi-container: 
+STEP: delete the pod
+Jun 24 15:51:47.875: INFO: Waiting for pod downward-api-f81adc89-9697-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:51:47.877: INFO: Pod downward-api-f81adc89-9697-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:51:47.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-7603" for this suite.
+Jun 24 15:51:53.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:51:53.996: INFO: namespace downward-api-7603 deletion completed in 6.114231405s
+
+• [SLOW TEST:8.210 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
+  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
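+
+The three values named in the test title come from `fieldRef` selectors on the pod object itself. A minimal sketch:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-api-env-demo     # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox
+    command: ["sh", "-c", "env | grep POD_"]
+    env:
+    - name: POD_NAME
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.name
+    - name: POD_NAMESPACE
+      valueFrom:
+        fieldRef:
+          fieldPath: metadata.namespace
+    - name: POD_IP
+      valueFrom:
+        fieldRef:
+          fieldPath: status.podIP
+```
+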
+[sig-api-machinery] Secrets 
+  should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:51:53.996: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: creating secret secrets-3359/secret-test-fd024c5c-9697-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume secrets
+Jun 24 15:51:54.064: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd02c91d-9697-11e9-8bcb-526dc0a539dd" in namespace "secrets-3359" to be "success or failure"
+Jun 24 15:51:54.075: INFO: Pod "pod-configmaps-fd02c91d-9697-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.471112ms
+Jun 24 15:51:56.079: INFO: Pod "pod-configmaps-fd02c91d-9697-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01526796s
+STEP: Saw pod success
+Jun 24 15:51:56.079: INFO: Pod "pod-configmaps-fd02c91d-9697-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:51:56.083: INFO: Trying to get logs from node minion pod pod-configmaps-fd02c91d-9697-11e9-8bcb-526dc0a539dd container env-test: 
+STEP: delete the pod
+Jun 24 15:51:56.105: INFO: Waiting for pod pod-configmaps-fd02c91d-9697-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:51:56.108: INFO: Pod pod-configmaps-fd02c91d-9697-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:51:56.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-3359" for this suite.
+Jun 24 15:52:02.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:52:02.222: INFO: namespace secrets-3359 deletion completed in 6.108833036s
+
+• [SLOW TEST:8.226 seconds]
+[sig-api-machinery] Secrets
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
+  should be consumable via the environment [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSS
+------------------------------
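+
+"Consumable via the environment" means the secret's keys surface as env vars; one common shape is `envFrom` with a `secretRef` (whether the suite uses `envFrom` or per-key `secretKeyRef` entries is not visible in the log, so treat this as a sketch):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: secret-env-demo           # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: env-test
+    image: busybox
+    command: ["sh", "-c", "env"]
+    envFrom:
+    - secretRef:
+        name: secret-test         # illustrative secret name
+```
+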
+[sig-network] Networking Granular Checks: Pods 
+  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:52:02.225: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Performing setup for networking test in namespace pod-network-test-2105
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jun 24 15:52:02.263: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Jun 24 15:52:26.344: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.251.128.6:8080/dial?request=hostName&protocol=udp&host=10.251.128.5&port=8081&tries=1'] Namespace:pod-network-test-2105 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 15:52:26.344: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 15:52:26.536: INFO: Waiting for endpoints: map[]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:52:26.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pod-network-test-2105" for this suite.
+Jun 24 15:52:48.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:52:48.648: INFO: namespace pod-network-test-2105 deletion completed in 22.107164752s
+
+• [SLOW TEST:46.423 seconds]
+[sig-network] Networking
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSS
+------------------------------
+[sig-api-machinery] Secrets 
+  should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:52:48.650: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating secret with name secret-test-1d931846-9698-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume secrets
+Jun 24 15:52:48.704: INFO: Waiting up to 5m0s for pod "pod-secrets-1d94a75e-9698-11e9-8bcb-526dc0a539dd" in namespace "secrets-5933" to be "success or failure"
+Jun 24 15:52:48.710: INFO: Pod "pod-secrets-1d94a75e-9698-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.389789ms
+Jun 24 15:52:50.713: INFO: Pod "pod-secrets-1d94a75e-9698-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008965825s
+STEP: Saw pod success
+Jun 24 15:52:50.713: INFO: Pod "pod-secrets-1d94a75e-9698-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:52:50.717: INFO: Trying to get logs from node minion pod pod-secrets-1d94a75e-9698-11e9-8bcb-526dc0a539dd container secret-env-test: 
+STEP: delete the pod
+Jun 24 15:52:50.740: INFO: Waiting for pod pod-secrets-1d94a75e-9698-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:52:50.745: INFO: Pod pod-secrets-1d94a75e-9698-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-api-machinery] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:52:50.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-5933" for this suite.
+Jun 24 15:52:56.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:52:56.851: INFO: namespace secrets-5933 deletion completed in 6.103129807s
+
+• [SLOW TEST:8.201 seconds]
+[sig-api-machinery] Secrets
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
+  should be consumable from pods in env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:52:56.851: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test downward API volume plugin
+Jun 24 15:52:56.895: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22760a86-9698-11e9-8bcb-526dc0a539dd" in namespace "downward-api-4675" to be "success or failure"
+Jun 24 15:52:56.904: INFO: Pod "downwardapi-volume-22760a86-9698-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.420268ms
+Jun 24 15:52:58.909: INFO: Pod "downwardapi-volume-22760a86-9698-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013818501s
+STEP: Saw pod success
+Jun 24 15:52:58.909: INFO: Pod "downwardapi-volume-22760a86-9698-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:52:58.913: INFO: Trying to get logs from node minion pod downwardapi-volume-22760a86-9698-11e9-8bcb-526dc0a539dd container client-container: 
+STEP: delete the pod
+Jun 24 15:52:58.935: INFO: Waiting for pod downwardapi-volume-22760a86-9698-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:52:58.940: INFO: Pod downwardapi-volume-22760a86-9698-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:52:58.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-4675" for this suite.
+Jun 24 15:53:04.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:53:05.040: INFO: namespace downward-api-4675 deletion completed in 6.097416203s
+
+• [SLOW TEST:8.189 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide container's cpu request [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
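+
+Here the container's cpu request is exposed as a file through a downward API volume with a `resourceFieldRef`. A minimal sketch (names and the 250m request are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downward-api-volume-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox
+    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
+    resources:
+      requests:
+        cpu: 250m
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: cpu_request
+        resourceFieldRef:
+          containerName: client-container
+          resource: requests.cpu
+```
+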
+[sig-api-machinery] Garbage collector 
+  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:53:05.041: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: create the rc1
+STEP: create the rc2
+STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
+STEP: delete the rc simpletest-rc-to-be-deleted
+STEP: wait for the rc to be deleted
+STEP: Gathering metrics
+Jun 24 15:53:15.212: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+	[quantile=0.5] = 279
+	[quantile=0.9] = 302560
+	[quantile=0.99] = 400882
+For garbage_collector_attempt_to_delete_work_duration:
+	[quantile=0.5] = 238958
+	[quantile=0.9] = 547121
+	[quantile=0.99] = 610111
+For garbage_collector_attempt_to_orphan_queue_latency:
+	[quantile=0.5] = NaN
+	[quantile=0.9] = NaN
+	[quantile=0.99] = NaN
+For garbage_collector_attempt_to_orphan_work_duration:
+	[quantile=0.5] = NaN
+	[quantile=0.9] = NaN
+	[quantile=0.99] = NaN
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+	[quantile=0.5] = 6
+	[quantile=0.9] = 8
+	[quantile=0.99] = 36
+For garbage_collector_graph_changes_work_duration:
+	[quantile=0.5] = 17
+	[quantile=0.9] = 34
+	[quantile=0.99] = 73
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+	[quantile=0.5] = 15
+	[quantile=0.9] = 21
+	[quantile=0.99] = 41
+For namespace_queue_latency_sum:
+	[] = 1439
+For namespace_queue_latency_count:
+	[] = 76
+For namespace_retries:
+	[] = 77
+For namespace_work_duration:
+	[quantile=0.5] = 161773
+	[quantile=0.9] = 248480
+	[quantile=0.99] = 288714
+For namespace_work_duration_sum:
+	[] = 11915036
+For namespace_work_duration_count:
+	[] = 76
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:53:15.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-9093" for this suite.
+Jun 24 15:53:21.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:53:21.315: INFO: namespace gc-9093 deletion completed in 6.098023482s
+
+• [SLOW TEST:16.273 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
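+
+The ownership setup above: half the pods receive a second owner, so after `simpletest-rc-to-be-deleted` is removed they still hold one valid owner reference and the garbage collector must leave them alone. In object terms, such a pod carries two `ownerReferences`; a sketch (pod name and UIDs are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: simpletest-rc-pod         # illustrative pod name
+  ownerReferences:
+  - apiVersion: v1
+    kind: ReplicationController
+    name: simpletest-rc-to-be-deleted       # the owner being deleted
+    uid: 11111111-1111-1111-1111-111111111111
+  - apiVersion: v1
+    kind: ReplicationController
+    name: simpletest-rc-to-stay             # remaining valid owner keeps the pod alive
+    uid: 22222222-2222-2222-2222-222222222222
+spec:
+  containers:
+  - name: nginx
+    image: docker.io/library/nginx:1.14-alpine
+```
+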
+[sig-cli] Kubectl client [k8s.io] Update Demo 
+  should scale a replication controller  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:53:21.316: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[BeforeEach] [k8s.io] Update Demo
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:265
+[It] should scale a replication controller  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: creating a replication controller
+Jun 24 15:53:21.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-4575'
+Jun 24 15:53:21.727: INFO: stderr: ""
+Jun 24 15:53:21.727: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun 24 15:53:21.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4575'
+Jun 24 15:53:21.838: INFO: stderr: ""
+Jun 24 15:53:21.838: INFO: stdout: "update-demo-nautilus-kx8ql update-demo-nautilus-zvq6x "
+Jun 24 15:53:21.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-kx8ql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4575'
+Jun 24 15:53:21.922: INFO: stderr: ""
+Jun 24 15:53:21.922: INFO: stdout: ""
+Jun 24 15:53:21.922: INFO: update-demo-nautilus-kx8ql is created but not running
+Jun 24 15:53:26.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4575'
+Jun 24 15:53:27.027: INFO: stderr: ""
+Jun 24 15:53:27.027: INFO: stdout: "update-demo-nautilus-kx8ql update-demo-nautilus-zvq6x "
+Jun 24 15:53:27.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-kx8ql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4575'
+Jun 24 15:53:27.115: INFO: stderr: ""
+Jun 24 15:53:27.116: INFO: stdout: "true"
+Jun 24 15:53:27.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-kx8ql -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4575'
+Jun 24 15:53:27.210: INFO: stderr: ""
+Jun 24 15:53:27.210: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun 24 15:53:27.210: INFO: validating pod update-demo-nautilus-kx8ql
+Jun 24 15:53:27.220: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun 24 15:53:27.220: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun 24 15:53:27.220: INFO: update-demo-nautilus-kx8ql is verified up and running
+Jun 24 15:53:27.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-zvq6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4575'
+Jun 24 15:53:27.306: INFO: stderr: ""
+Jun 24 15:53:27.306: INFO: stdout: "true"
+Jun 24 15:53:27.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-zvq6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4575'
+Jun 24 15:53:27.397: INFO: stderr: ""
+Jun 24 15:53:27.398: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun 24 15:53:27.398: INFO: validating pod update-demo-nautilus-zvq6x
+Jun 24 15:53:27.407: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun 24 15:53:27.407: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun 24 15:53:27.407: INFO: update-demo-nautilus-zvq6x is verified up and running
+STEP: scaling down the replication controller
+Jun 24 15:53:27.418: INFO: scanned /root for discovery docs: 
+Jun 24 15:53:27.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4575'
+Jun 24 15:53:28.539: INFO: stderr: ""
+Jun 24 15:53:28.539: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun 24 15:53:28.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4575'
+Jun 24 15:53:28.640: INFO: stderr: ""
+Jun 24 15:53:28.640: INFO: stdout: "update-demo-nautilus-kx8ql update-demo-nautilus-zvq6x "
+STEP: Replicas for name=update-demo: expected=1 actual=2
+Jun 24 15:53:33.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4575'
+Jun 24 15:53:33.740: INFO: stderr: ""
+Jun 24 15:53:33.740: INFO: stdout: "update-demo-nautilus-kx8ql update-demo-nautilus-zvq6x "
+STEP: Replicas for name=update-demo: expected=1 actual=2
+Jun 24 15:53:38.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4575'
+Jun 24 15:53:38.834: INFO: stderr: ""
+Jun 24 15:53:38.834: INFO: stdout: "update-demo-nautilus-kx8ql "
+Jun 24 15:53:38.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-kx8ql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4575'
+Jun 24 15:53:38.935: INFO: stderr: ""
+Jun 24 15:53:38.935: INFO: stdout: "true"
+Jun 24 15:53:38.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-kx8ql -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4575'
+Jun 24 15:53:39.024: INFO: stderr: ""
+Jun 24 15:53:39.024: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun 24 15:53:39.024: INFO: validating pod update-demo-nautilus-kx8ql
+Jun 24 15:53:39.029: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun 24 15:53:39.029: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun 24 15:53:39.029: INFO: update-demo-nautilus-kx8ql is verified up and running
+STEP: scaling up the replication controller
+Jun 24 15:53:39.033: INFO: scanned /root for discovery docs: 
+Jun 24 15:53:39.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4575'
+Jun 24 15:53:40.166: INFO: stderr: ""
+Jun 24 15:53:40.166: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun 24 15:53:40.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4575'
+Jun 24 15:53:40.277: INFO: stderr: ""
+Jun 24 15:53:40.277: INFO: stdout: "update-demo-nautilus-h4sqw update-demo-nautilus-kx8ql "
+Jun 24 15:53:40.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-h4sqw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4575'
+Jun 24 15:53:40.369: INFO: stderr: ""
+Jun 24 15:53:40.369: INFO: stdout: ""
+Jun 24 15:53:40.369: INFO: update-demo-nautilus-h4sqw is created but not running
+Jun 24 15:53:45.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4575'
+Jun 24 15:53:45.468: INFO: stderr: ""
+Jun 24 15:53:45.468: INFO: stdout: "update-demo-nautilus-h4sqw update-demo-nautilus-kx8ql "
+Jun 24 15:53:45.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-h4sqw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4575'
+Jun 24 15:53:45.560: INFO: stderr: ""
+Jun 24 15:53:45.560: INFO: stdout: "true"
+Jun 24 15:53:45.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-h4sqw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4575'
+Jun 24 15:53:45.659: INFO: stderr: ""
+Jun 24 15:53:45.659: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun 24 15:53:45.659: INFO: validating pod update-demo-nautilus-h4sqw
+Jun 24 15:53:45.669: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun 24 15:53:45.669: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun 24 15:53:45.669: INFO: update-demo-nautilus-h4sqw is verified up and running
+Jun 24 15:53:45.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-kx8ql -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4575'
+Jun 24 15:53:45.761: INFO: stderr: ""
+Jun 24 15:53:45.761: INFO: stdout: "true"
+Jun 24 15:53:45.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-kx8ql -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4575'
+Jun 24 15:53:45.849: INFO: stderr: ""
+Jun 24 15:53:45.849: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun 24 15:53:45.849: INFO: validating pod update-demo-nautilus-kx8ql
+Jun 24 15:53:45.854: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun 24 15:53:45.854: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun 24 15:53:45.854: INFO: update-demo-nautilus-kx8ql is verified up and running
+STEP: using delete to clean up resources
+Jun 24 15:53:45.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete --grace-period=0 --force -f - --namespace=kubectl-4575'
+Jun 24 15:53:45.954: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 24 15:53:45.954: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
+Jun 24 15:53:45.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4575'
+Jun 24 15:53:46.060: INFO: stderr: "No resources found.\n"
+Jun 24 15:53:46.060: INFO: stdout: ""
+Jun 24 15:53:46.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -l name=update-demo --namespace=kubectl-4575 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Jun 24 15:53:46.175: INFO: stderr: ""
+Jun 24 15:53:46.175: INFO: stdout: "update-demo-nautilus-h4sqw\nupdate-demo-nautilus-kx8ql\n"
+Jun 24 15:53:46.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4575'
+Jun 24 15:53:46.779: INFO: stderr: "No resources found.\n"
+Jun 24 15:53:46.779: INFO: stdout: ""
+Jun 24 15:53:46.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -l name=update-demo --namespace=kubectl-4575 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Jun 24 15:53:46.876: INFO: stderr: ""
+Jun 24 15:53:46.876: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:53:46.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-4575" for this suite.
+Jun 24 15:54:08.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:54:08.973: INFO: namespace kubectl-4575 deletion completed in 22.093982103s
+
+• [SLOW TEST:47.657 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Update Demo
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should scale a replication controller  [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
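+
+Note: the scale verification above reduces to two kubectl idioms taken straight from the commands in the log: an imperative scale of the replication controller, then a label-selected pod listing through a go-template, repeated until the expected replica names appear. A minimal sketch against a placeholder namespace:
+
+  kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m -n <namespace>
+  kubectl get pods -l name=update-demo -n <namespace> \
+    -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
+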
+SSSSSSSS
+------------------------------
+[sig-apps] ReplicaSet 
+  should adopt matching pods on creation and release no longer matching pods [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-apps] ReplicaSet
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:54:08.974: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename replicaset
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Given a Pod with a 'name' label pod-adoption-release is created
+STEP: When a replicaset with a matching selector is created
+STEP: Then the orphan pod is adopted
+STEP: When the matched label of one of its pods change
+Jun 24 15:54:14.043: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
+STEP: Then the pod is released
+[AfterEach] [sig-apps] ReplicaSet
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:54:15.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replicaset-9772" for this suite.
+Jun 24 15:54:37.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:54:37.200: INFO: namespace replicaset-9772 deletion completed in 22.135101984s
+
+• [SLOW TEST:28.226 seconds]
+[sig-apps] ReplicaSet
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should adopt matching pods on creation and release no longer matching pods [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
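+
+Note: adoption here works through ownerReferences: a ReplicaSet whose selector matches a bare pod's labels sets itself as the pod's controller owner, and relabeling the pod away from the selector releases it again. Whether a pod is currently owned can be checked with a template query in the same style as the kubectl commands elsewhere in this log:
+
+  kubectl get pod pod-adoption-release -n <namespace> -o template \
+    --template='{{range .metadata.ownerReferences}}{{.kind}}/{{.name}} {{end}}'
+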
+SS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:54:37.200: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating secret with name secret-test-5e47d761-9698-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume secrets
+Jun 24 15:54:37.258: INFO: Waiting up to 5m0s for pod "pod-secrets-5e4853f5-9698-11e9-8bcb-526dc0a539dd" in namespace "secrets-1522" to be "success or failure"
+Jun 24 15:54:37.263: INFO: Pod "pod-secrets-5e4853f5-9698-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263667ms
+Jun 24 15:54:39.267: INFO: Pod "pod-secrets-5e4853f5-9698-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00879421s
+STEP: Saw pod success
+Jun 24 15:54:39.268: INFO: Pod "pod-secrets-5e4853f5-9698-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:54:39.271: INFO: Trying to get logs from node minion pod pod-secrets-5e4853f5-9698-11e9-8bcb-526dc0a539dd container secret-volume-test: 
+STEP: delete the pod
+Jun 24 15:54:39.293: INFO: Waiting for pod pod-secrets-5e4853f5-9698-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:54:39.295: INFO: Pod pod-secrets-5e4853f5-9698-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:54:39.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-1522" for this suite.
+Jun 24 15:54:45.309: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:54:45.397: INFO: namespace secrets-1522 deletion completed in 6.099101445s
+
+• [SLOW TEST:8.197 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
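+
+Note: this test projects one secret into the same pod twice, as two distinct volumes mounted at different paths, and reads the key back from both mounts. A minimal reproduction sketch, assuming placeholder names rather than the generated ones above:
+
+kubectl create secret generic multi-vol-secret --from-literal=data-1=value-1
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata: {name: secret-multi-vol}
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test
+    image: busybox
+    command: ["sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"]
+    volumeMounts:
+    - {name: v1, mountPath: /etc/secret-1}
+    - {name: v2, mountPath: /etc/secret-2}
+  volumes:
+  - {name: v1, secret: {secretName: multi-vol-secret}}
+  - {name: v2, secret: {secretName: multi-vol-secret}}
+EOF
+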
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:54:45.404: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test downward API volume plugin
+Jun 24 15:54:45.448: INFO: Waiting up to 5m0s for pod "downwardapi-volume-632a2216-9698-11e9-8bcb-526dc0a539dd" in namespace "projected-9588" to be "success or failure"
+Jun 24 15:54:45.459: INFO: Pod "downwardapi-volume-632a2216-9698-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.388825ms
+Jun 24 15:54:47.462: INFO: Pod "downwardapi-volume-632a2216-9698-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014307849s
+STEP: Saw pod success
+Jun 24 15:54:47.463: INFO: Pod "downwardapi-volume-632a2216-9698-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:54:47.472: INFO: Trying to get logs from node minion pod downwardapi-volume-632a2216-9698-11e9-8bcb-526dc0a539dd container client-container: 
+STEP: delete the pod
+Jun 24 15:54:47.496: INFO: Waiting for pod downwardapi-volume-632a2216-9698-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:54:47.499: INFO: Pod downwardapi-volume-632a2216-9698-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:54:47.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-9588" for this suite.
+Jun 24 15:54:53.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:54:53.605: INFO: namespace projected-9588 deletion completed in 6.102912815s
+
+• [SLOW TEST:8.201 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide container's cpu limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
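+
+Note: the downward API volume in this test is populated from a resourceFieldRef: the container declares a cpu limit, and the projected file carries that value for the container itself to read back. The declared limit can be confirmed with a template query (pod name hypothetical):
+
+  kubectl get pod <pod-name> -n <namespace> -o template \
+    --template='{{(index .spec.containers 0).resources.limits.cpu}}'
+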
+SSS
+------------------------------
+[sig-apps] ReplicationController 
+  should adopt matching pods on creation [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:54:53.605: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename replication-controller
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should adopt matching pods on creation [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Given a Pod with a 'name' label pod-adoption is created
+STEP: When a replication controller with a matching selector is created
+STEP: Then the orphan pod is adopted
+[AfterEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:54:56.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replication-controller-3973" for this suite.
+Jun 24 15:55:18.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:55:18.792: INFO: namespace replication-controller-3973 deletion completed in 22.104395323s
+
+• [SLOW TEST:25.187 seconds]
+[sig-apps] ReplicationController
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should adopt matching pods on creation [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:55:18.792: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test downward API volume plugin
+Jun 24 15:55:18.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77117e6a-9698-11e9-8bcb-526dc0a539dd" in namespace "projected-9914" to be "success or failure"
+Jun 24 15:55:18.842: INFO: Pod "downwardapi-volume-77117e6a-9698-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.732622ms
+Jun 24 15:55:20.847: INFO: Pod "downwardapi-volume-77117e6a-9698-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007212215s
+Jun 24 15:55:22.851: INFO: Pod "downwardapi-volume-77117e6a-9698-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01162262s
+STEP: Saw pod success
+Jun 24 15:55:22.851: INFO: Pod "downwardapi-volume-77117e6a-9698-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:55:22.855: INFO: Trying to get logs from node minion pod downwardapi-volume-77117e6a-9698-11e9-8bcb-526dc0a539dd container client-container: 
+STEP: delete the pod
+Jun 24 15:55:22.875: INFO: Waiting for pod downwardapi-volume-77117e6a-9698-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:55:22.880: INFO: Pod downwardapi-volume-77117e6a-9698-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:55:22.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-9914" for this suite.
+Jun 24 15:55:28.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:55:28.990: INFO: namespace projected-9914 deletion completed in 6.106291269s
+
+• [SLOW TEST:10.199 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:55:28.991: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test emptydir 0666 on tmpfs
+Jun 24 15:55:29.031: INFO: Waiting up to 5m0s for pod "pod-7d24751c-9698-11e9-8bcb-526dc0a539dd" in namespace "emptydir-5321" to be "success or failure"
+Jun 24 15:55:29.037: INFO: Pod "pod-7d24751c-9698-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.976608ms
+Jun 24 15:55:31.041: INFO: Pod "pod-7d24751c-9698-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009964947s
+STEP: Saw pod success
+Jun 24 15:55:31.041: INFO: Pod "pod-7d24751c-9698-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:55:31.044: INFO: Trying to get logs from node minion pod pod-7d24751c-9698-11e9-8bcb-526dc0a539dd container test-container: 
+STEP: delete the pod
+Jun 24 15:55:31.071: INFO: Waiting for pod pod-7d24751c-9698-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:55:31.074: INFO: Pod pod-7d24751c-9698-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:55:31.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-5321" for this suite.
+Jun 24 15:55:37.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:55:37.177: INFO: namespace emptydir-5321 deletion completed in 6.099694915s
+
+• [SLOW TEST:8.186 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
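+
+Note: "(root,0666,tmpfs)" means the test file is created as root with mode 0666 on an emptyDir backed by memory (medium: Memory, i.e. tmpfs), and the test container asserts both the content and the mode. A minimal sketch of such a volume, with placeholder names:
+
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata: {name: emptydir-tmpfs-demo}
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test
+    image: busybox
+    command: ["sh", "-c", "mount | grep /mnt && touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"]
+    volumeMounts:
+    - {name: tmp, mountPath: /mnt}
+  volumes:
+  - {name: tmp, emptyDir: {medium: Memory}}
+EOF
+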
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:55:37.179: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating configMap with name projected-configmap-test-volume-map-8205ec24-9698-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume configMaps
+Jun 24 15:55:37.221: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-82065045-9698-11e9-8bcb-526dc0a539dd" in namespace "projected-9396" to be "success or failure"
+Jun 24 15:55:37.227: INFO: Pod "pod-projected-configmaps-82065045-9698-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.83712ms
+Jun 24 15:55:39.231: INFO: Pod "pod-projected-configmaps-82065045-9698-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00975951s
+STEP: Saw pod success
+Jun 24 15:55:39.231: INFO: Pod "pod-projected-configmaps-82065045-9698-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:55:39.235: INFO: Trying to get logs from node minion pod pod-projected-configmaps-82065045-9698-11e9-8bcb-526dc0a539dd container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun 24 15:55:39.263: INFO: Waiting for pod pod-projected-configmaps-82065045-9698-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:55:39.267: INFO: Pod pod-projected-configmaps-82065045-9698-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:55:39.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-9396" for this suite.
+Jun 24 15:55:45.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:55:45.369: INFO: namespace projected-9396 deletion completed in 6.099817172s
+
+• [SLOW TEST:8.190 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+S
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should run and stop complex daemon [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:55:45.370: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
+[It] should run and stop complex daemon [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+Jun 24 15:55:45.421: INFO: Creating daemon "daemon-set" with a node selector
+STEP: Initially, daemon pods should not be running on any nodes.
+Jun 24 15:55:45.433: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:45.433: INFO: Number of running nodes: 0, number of available pods: 0
+STEP: Change node label to blue, check that daemon pod is launched.
+Jun 24 15:55:45.452: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:45.452: INFO: Node minion is running more than one daemon pod
+Jun 24 15:55:46.455: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:46.455: INFO: Node minion is running more than one daemon pod
+Jun 24 15:55:47.456: INFO: Number of nodes with available pods: 1
+Jun 24 15:55:47.456: INFO: Number of running nodes: 1, number of available pods: 1
+STEP: Update the node label to green, and wait for daemons to be unscheduled
+Jun 24 15:55:47.476: INFO: Number of nodes with available pods: 1
+Jun 24 15:55:47.476: INFO: Number of running nodes: 0, number of available pods: 1
+Jun 24 15:55:48.480: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:48.480: INFO: Number of running nodes: 0, number of available pods: 0
+STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
+Jun 24 15:55:48.496: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:48.496: INFO: Node minion is running more than one daemon pod
+Jun 24 15:55:49.500: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:49.500: INFO: Node minion is running more than one daemon pod
+Jun 24 15:55:50.500: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:50.500: INFO: Node minion is running more than one daemon pod
+Jun 24 15:55:51.500: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:51.500: INFO: Node minion is running more than one daemon pod
+Jun 24 15:55:52.500: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:52.500: INFO: Node minion is running more than one daemon pod
+Jun 24 15:55:53.499: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:53.500: INFO: Node minion is running more than one daemon pod
+Jun 24 15:55:54.500: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:54.500: INFO: Node minion is running more than one daemon pod
+Jun 24 15:55:55.500: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:55.500: INFO: Node minion is running more than one daemon pod
+Jun 24 15:55:56.500: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:56.500: INFO: Node minion is running more than one daemon pod
+Jun 24 15:55:57.500: INFO: Number of nodes with available pods: 0
+Jun 24 15:55:57.500: INFO: Node minion is running more than one daemon pod
+Jun 24 15:55:58.500: INFO: Number of nodes with available pods: 1
+Jun 24 15:55:58.500: INFO: Number of running nodes: 1, number of available pods: 1
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4438, will wait for the garbage collector to delete the pods
+Jun 24 15:55:58.567: INFO: Deleting DaemonSet.extensions daemon-set took: 6.926656ms
+Jun 24 15:55:58.867: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.386965ms
+Jun 24 15:56:06.871: INFO: Number of nodes with available pods: 0
+Jun 24 15:56:06.871: INFO: Number of running nodes: 0, number of available pods: 0
+Jun 24 15:56:06.879: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4438/daemonsets","resourceVersion":"5029"},"items":null}
+
+Jun 24 15:56:06.887: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4438/pods","resourceVersion":"5029"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:56:06.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-4438" for this suite.
+Jun 24 15:56:12.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:56:13.004: INFO: namespace daemonsets-4438 deletion completed in 6.097408951s
+
+• [SLOW TEST:27.635 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should run and stop complex daemon [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
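+
+Note: the "complex daemon" flow above drives scheduling purely through labels: the DaemonSet carries a nodeSelector, so flipping the node's label between blue and green schedules and then evicts the daemon pod, and partway through the test also switches the update strategy to RollingUpdate. The equivalent imperative steps look roughly like this (the label key is hypothetical; the test generates its own):
+
+  kubectl label node minion daemon-color=green --overwrite
+  kubectl patch ds daemon-set -n <namespace> --type merge \
+    -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
+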
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:56:13.006: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test emptydir 0777 on node default medium
+Jun 24 15:56:13.062: INFO: Waiting up to 5m0s for pod "pod-97630893-9698-11e9-8bcb-526dc0a539dd" in namespace "emptydir-8553" to be "success or failure"
+Jun 24 15:56:13.064: INFO: Pod "pod-97630893-9698-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.432573ms
+Jun 24 15:56:15.068: INFO: Pod "pod-97630893-9698-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006799047s
+STEP: Saw pod success
+Jun 24 15:56:15.069: INFO: Pod "pod-97630893-9698-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:56:15.073: INFO: Trying to get logs from node minion pod pod-97630893-9698-11e9-8bcb-526dc0a539dd container test-container: 
+STEP: delete the pod
+Jun 24 15:56:15.108: INFO: Waiting for pod pod-97630893-9698-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:56:15.111: INFO: Pod pod-97630893-9698-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:56:15.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-8553" for this suite.
+Jun 24 15:56:21.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:56:21.237: INFO: namespace emptydir-8553 deletion completed in 6.120560288s
+
+• [SLOW TEST:8.232 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with configmap pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:56:21.238: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
+STEP: Setting up data
+[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating pod pod-subpath-test-configmap-2988
+STEP: Creating a pod to test atomic-volume-subpath
+Jun 24 15:56:21.286: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2988" in namespace "subpath-2164" to be "success or failure"
+Jun 24 15:56:21.293: INFO: Pod "pod-subpath-test-configmap-2988": Phase="Pending", Reason="", readiness=false. Elapsed: 7.822617ms
+Jun 24 15:56:23.297: INFO: Pod "pod-subpath-test-configmap-2988": Phase="Running", Reason="", readiness=true. Elapsed: 2.011878018s
+Jun 24 15:56:25.302: INFO: Pod "pod-subpath-test-configmap-2988": Phase="Running", Reason="", readiness=true. Elapsed: 4.016719352s
+Jun 24 15:56:27.306: INFO: Pod "pod-subpath-test-configmap-2988": Phase="Running", Reason="", readiness=true. Elapsed: 6.020780398s
+Jun 24 15:56:29.310: INFO: Pod "pod-subpath-test-configmap-2988": Phase="Running", Reason="", readiness=true. Elapsed: 8.024901395s
+Jun 24 15:56:31.315: INFO: Pod "pod-subpath-test-configmap-2988": Phase="Running", Reason="", readiness=true. Elapsed: 10.029209964s
+Jun 24 15:56:33.319: INFO: Pod "pod-subpath-test-configmap-2988": Phase="Running", Reason="", readiness=true. Elapsed: 12.033278385s
+Jun 24 15:56:35.323: INFO: Pod "pod-subpath-test-configmap-2988": Phase="Running", Reason="", readiness=true. Elapsed: 14.037556946s
+Jun 24 15:56:37.327: INFO: Pod "pod-subpath-test-configmap-2988": Phase="Running", Reason="", readiness=true. Elapsed: 16.041422178s
+Jun 24 15:56:39.331: INFO: Pod "pod-subpath-test-configmap-2988": Phase="Running", Reason="", readiness=true. Elapsed: 18.045402173s
+Jun 24 15:56:41.335: INFO: Pod "pod-subpath-test-configmap-2988": Phase="Running", Reason="", readiness=true. Elapsed: 20.04959376s
+Jun 24 15:56:43.345: INFO: Pod "pod-subpath-test-configmap-2988": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.059358778s
+STEP: Saw pod success
+Jun 24 15:56:43.345: INFO: Pod "pod-subpath-test-configmap-2988" satisfied condition "success or failure"
+Jun 24 15:56:43.348: INFO: Trying to get logs from node minion pod pod-subpath-test-configmap-2988 container test-container-subpath-configmap-2988: 
+STEP: delete the pod
+Jun 24 15:56:43.370: INFO: Waiting for pod pod-subpath-test-configmap-2988 to disappear
+Jun 24 15:56:43.376: INFO: Pod pod-subpath-test-configmap-2988 no longer exists
+STEP: Deleting pod pod-subpath-test-configmap-2988
+Jun 24 15:56:43.376: INFO: Deleting pod "pod-subpath-test-configmap-2988" in namespace "subpath-2164"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:56:43.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-2164" for this suite.
+Jun 24 15:56:49.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:56:49.498: INFO: namespace subpath-2164 deletion completed in 6.117339846s
+
+• [SLOW TEST:28.260 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  Atomic writer volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
+    should support subpaths with configmap pod [LinuxOnly] [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
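+
+Note: the long run of "Running" polls above is the framework waiting on the pod phase while the test container repeatedly re-reads a configMap key mounted through subPath; the pod only flips to Succeeded once the reads have matched for the full interval. The same phase poll can be issued by hand:
+
+  kubectl get pod pod-subpath-test-configmap-2988 -n subpath-2164 \
+    -o template --template='{{.status.phase}}'
+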
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Pods 
+  should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:56:49.499: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename pods
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] Pods
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135
+[It] should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: creating the pod
+STEP: setting up watch
+STEP: submitting the pod to kubernetes
+Jun 24 15:56:49.538: INFO: observed the pod list
+STEP: verifying the pod is in kubernetes
+STEP: verifying pod creation was observed
+STEP: deleting the pod gracefully
+STEP: verifying the kubelet observed the termination notice
+Jun 24 15:56:56.579: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
+STEP: verifying pod deletion was observed
+[AfterEach] [k8s.io] Pods
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:56:56.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pods-6998" for this suite.
+Jun 24 15:57:02.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:57:02.689: INFO: namespace pods-6998 deletion completed in 6.102688957s
+
+• [SLOW TEST:13.190 seconds]
+[k8s.io] Pods
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  should be submitted and removed [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
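+
+Note: this test exercises the watch machinery rather than polling: it opens a watch before submitting the pod, then requires the creation and (after a graceful delete) the deletion to be observed as events on that watch. The interactive equivalent is a watch-enabled listing:
+
+  kubectl get pods -n pods-6998 --watch
+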
+SSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Update Demo 
+  should create and stop a replication controller  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:57:02.689: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[BeforeEach] [k8s.io] Update Demo
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:265
+[It] should create and stop a replication controller  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: creating a replication controller
+Jun 24 15:57:02.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-8652'
+Jun 24 15:57:03.529: INFO: stderr: ""
+Jun 24 15:57:03.529: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
+STEP: waiting for all containers in name=update-demo pods to come up.
+Jun 24 15:57:03.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8652'
+Jun 24 15:57:03.638: INFO: stderr: ""
+Jun 24 15:57:03.638: INFO: stdout: "update-demo-nautilus-kz6jn update-demo-nautilus-p28sh "
+Jun 24 15:57:03.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-kz6jn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8652'
+Jun 24 15:57:03.727: INFO: stderr: ""
+Jun 24 15:57:03.727: INFO: stdout: ""
+Jun 24 15:57:03.727: INFO: update-demo-nautilus-kz6jn is created but not running
+Jun 24 15:57:08.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8652'
+Jun 24 15:57:08.823: INFO: stderr: ""
+Jun 24 15:57:08.823: INFO: stdout: "update-demo-nautilus-kz6jn update-demo-nautilus-p28sh "
+Jun 24 15:57:08.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-kz6jn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8652'
+Jun 24 15:57:08.912: INFO: stderr: ""
+Jun 24 15:57:08.912: INFO: stdout: "true"
+Jun 24 15:57:08.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-kz6jn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8652'
+Jun 24 15:57:08.997: INFO: stderr: ""
+Jun 24 15:57:08.997: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun 24 15:57:08.997: INFO: validating pod update-demo-nautilus-kz6jn
+Jun 24 15:57:09.006: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun 24 15:57:09.006: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun 24 15:57:09.006: INFO: update-demo-nautilus-kz6jn is verified up and running
+Jun 24 15:57:09.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-p28sh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8652'
+Jun 24 15:57:09.095: INFO: stderr: ""
+Jun 24 15:57:09.095: INFO: stdout: "true"
+Jun 24 15:57:09.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-p28sh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8652'
+Jun 24 15:57:09.194: INFO: stderr: ""
+Jun 24 15:57:09.194: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
+Jun 24 15:57:09.194: INFO: validating pod update-demo-nautilus-p28sh
+Jun 24 15:57:09.207: INFO: got data: {
+  "image": "nautilus.jpg"
+}
+
+Jun 24 15:57:09.207: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
+Jun 24 15:57:09.207: INFO: update-demo-nautilus-p28sh is verified up and running
+STEP: using delete to clean up resources
+Jun 24 15:57:09.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete --grace-period=0 --force -f - --namespace=kubectl-8652'
+Jun 24 15:57:09.304: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 24 15:57:09.304: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
+Jun 24 15:57:09.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8652'
+Jun 24 15:57:09.407: INFO: stderr: "No resources found.\n"
+Jun 24 15:57:09.407: INFO: stdout: ""
+Jun 24 15:57:09.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -l name=update-demo --namespace=kubectl-8652 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Jun 24 15:57:09.501: INFO: stderr: ""
+Jun 24 15:57:09.501: INFO: stdout: "update-demo-nautilus-kz6jn\nupdate-demo-nautilus-p28sh\n"
+Jun 24 15:57:10.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8652'
+Jun 24 15:57:10.110: INFO: stderr: "No resources found.\n"
+Jun 24 15:57:10.110: INFO: stdout: ""
+Jun 24 15:57:10.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -l name=update-demo --namespace=kubectl-8652 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Jun 24 15:57:10.209: INFO: stderr: ""
+Jun 24 15:57:10.209: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:57:10.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-8652" for this suite.
+Jun 24 15:57:32.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:57:32.319: INFO: namespace kubectl-8652 deletion completed in 22.104560994s
+
+• [SLOW TEST:29.630 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Update Demo
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should create and stop a replication controller  [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
+  should execute prestop exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:57:32.322: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename container-lifecycle-hook
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] when create a pod with lifecycle hook
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61
+STEP: create the container to handle the HTTPGet hook request.
+[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: create the pod with lifecycle hook
+STEP: delete the pod with lifecycle hook
+Jun 24 15:57:38.405: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:57:38.408: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:57:40.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:57:40.413: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:57:42.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:57:42.414: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:57:44.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:57:44.413: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:57:46.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:57:46.413: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:57:48.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:57:48.415: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:57:50.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:57:50.413: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:57:52.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:57:52.413: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:57:54.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:57:54.414: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:57:56.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:57:56.413: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:57:58.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:57:58.413: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:58:00.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:58:00.413: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:58:02.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:58:02.414: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:58:04.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:58:04.413: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:58:06.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:58:06.412: INFO: Pod pod-with-prestop-exec-hook still exists
+Jun 24 15:58:08.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
+Jun 24 15:58:08.412: INFO: Pod pod-with-prestop-exec-hook no longer exists
+STEP: check prestop hook
+[AfterEach] [k8s.io] Container Lifecycle Hook
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:58:08.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "container-lifecycle-hook-2643" for this suite.
+Jun 24 15:58:30.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:58:30.523: INFO: namespace container-lifecycle-hook-2643 deletion completed in 22.093934751s
+
+• [SLOW TEST:58.201 seconds]
+[k8s.io] Container Lifecycle Hook
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  when create a pod with lifecycle hook
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40
+    should execute prestop exec hook properly [NodeConformance] [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
+  should check if Kubernetes master services is included in cluster-info  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:58:30.523: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: validating cluster-info
+Jun 24 15:58:30.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 cluster-info'
+Jun 24 15:58:30.654: INFO: stderr: ""
+Jun 24 15:58:30.654: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://10.241.0.1:443\x1b[0m\n\x1b[0;32mcoredns\x1b[0m is running at \x1b[0;33mhttps://10.241.0.1:443/api/v1/namespaces/kube-system/services/coredns:dns/proxy\x1b[0m\n\x1b[0;32mkubernetes-dashboard\x1b[0m is running at \x1b[0;33mhttps://10.241.0.1:443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:58:30.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-1420" for this suite.
+Jun 24 15:58:36.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:58:36.762: INFO: namespace kubectl-1420 deletion completed in 6.103828103s
+
+• [SLOW TEST:6.239 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl cluster-info
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should check if Kubernetes master services is included in cluster-info  [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl logs 
+  should be able to retrieve and filter logs  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:58:36.763: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[BeforeEach] [k8s.io] Kubectl logs
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1190
+STEP: creating an rc
+Jun 24 15:58:36.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-7002'
+Jun 24 15:58:37.081: INFO: stderr: ""
+Jun 24 15:58:37.081: INFO: stdout: "replicationcontroller/redis-master created\n"
+[It] should be able to retrieve and filter logs  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Waiting for Redis master to start.
+Jun 24 15:58:38.085: INFO: Selector matched 1 pods for map[app:redis]
+Jun 24 15:58:38.086: INFO: Found 0 / 1
+Jun 24 15:58:39.086: INFO: Selector matched 1 pods for map[app:redis]
+Jun 24 15:58:39.086: INFO: Found 0 / 1
+Jun 24 15:58:40.086: INFO: Selector matched 1 pods for map[app:redis]
+Jun 24 15:58:40.086: INFO: Found 1 / 1
+Jun 24 15:58:40.086: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+Jun 24 15:58:40.089: INFO: Selector matched 1 pods for map[app:redis]
+Jun 24 15:58:40.089: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+STEP: checking for a matching strings
+Jun 24 15:58:40.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 logs redis-master-mpsvg redis-master --namespace=kubectl-7002'
+Jun 24 15:58:40.247: INFO: stderr: ""
+Jun 24 15:58:40.247: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Jun 15:58:39.350 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Jun 15:58:39.350 # Server started, Redis version 3.2.12\n1:M 24 Jun 15:58:39.350 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Jun 15:58:39.350 * The server is now ready to accept connections on port 6379\n"
+STEP: limiting log lines
+Jun 24 15:58:40.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 log redis-master-mpsvg redis-master --namespace=kubectl-7002 --tail=1'
+Jun 24 15:58:40.363: INFO: stderr: ""
+Jun 24 15:58:40.363: INFO: stdout: "1:M 24 Jun 15:58:39.350 * The server is now ready to accept connections on port 6379\n"
+STEP: limiting log bytes
+Jun 24 15:58:40.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 log redis-master-mpsvg redis-master --namespace=kubectl-7002 --limit-bytes=1'
+Jun 24 15:58:40.465: INFO: stderr: ""
+Jun 24 15:58:40.465: INFO: stdout: " "
+STEP: exposing timestamps
+Jun 24 15:58:40.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 log redis-master-mpsvg redis-master --namespace=kubectl-7002 --tail=1 --timestamps'
+Jun 24 15:58:40.568: INFO: stderr: ""
+Jun 24 15:58:40.569: INFO: stdout: "2019-06-24T15:58:39.351000601Z 1:M 24 Jun 15:58:39.350 * The server is now ready to accept connections on port 6379\n"
+STEP: restricting to a time range
+Jun 24 15:58:43.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 log redis-master-mpsvg redis-master --namespace=kubectl-7002 --since=1s'
+Jun 24 15:58:43.205: INFO: stderr: ""
+Jun 24 15:58:43.205: INFO: stdout: ""
+Jun 24 15:58:43.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 log redis-master-mpsvg redis-master --namespace=kubectl-7002 --since=24h'
+Jun 24 15:58:43.316: INFO: stderr: ""
+Jun 24 15:58:43.316: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Jun 15:58:39.350 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Jun 15:58:39.350 # Server started, Redis version 3.2.12\n1:M 24 Jun 15:58:39.350 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Jun 15:58:39.350 * The server is now ready to accept connections on port 6379\n"
+[AfterEach] [k8s.io] Kubectl logs
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1196
+STEP: using delete to clean up resources
+Jun 24 15:58:43.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete --grace-period=0 --force -f - --namespace=kubectl-7002'
+Jun 24 15:58:43.415: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 24 15:58:43.415: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
+Jun 24 15:58:43.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get rc,svc -l name=nginx --no-headers --namespace=kubectl-7002'
+Jun 24 15:58:43.515: INFO: stderr: "No resources found.\n"
+Jun 24 15:58:43.515: INFO: stdout: ""
+Jun 24 15:58:43.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -l name=nginx --namespace=kubectl-7002 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Jun 24 15:58:43.607: INFO: stderr: ""
+Jun 24 15:58:43.607: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:58:43.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-7002" for this suite.
+Jun 24 15:59:05.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:59:05.704: INFO: namespace kubectl-7002 deletion completed in 22.089919963s
+
+• [SLOW TEST:28.941 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl logs
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should be able to retrieve and filter logs  [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Proxy server 
+  should support --unix-socket=/path  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:59:05.705: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[It] should support --unix-socket=/path  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Starting the proxy
+Jun 24 15:59:05.749: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-766262415 proxy --unix-socket=/tmp/kubectl-proxy-unix076242082/test'
+STEP: retrieving proxy /api/ output
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:59:05.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-6831" for this suite.
+Jun 24 15:59:11.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:59:11.932: INFO: namespace kubectl-6831 deletion completed in 6.105696565s
+
+• [SLOW TEST:6.227 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Proxy server
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should support --unix-socket=/path  [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSS
+------------------------------
+[sig-apps] Deployment 
+  deployment should support proportional scaling [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:59:11.932: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename deployment
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Deployment
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65
+[It] deployment should support proportional scaling [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+Jun 24 15:59:11.967: INFO: Creating deployment "nginx-deployment"
+Jun 24 15:59:11.973: INFO: Waiting for observed generation 1
+Jun 24 15:59:13.980: INFO: Waiting for all required pods to come up
+Jun 24 15:59:13.988: INFO: Pod name nginx: Found 10 pods out of 10
+STEP: ensuring each pod is running
+Jun 24 15:59:19.997: INFO: Waiting for deployment "nginx-deployment" to complete
+Jun 24 15:59:20.005: INFO: Updating deployment "nginx-deployment" with a non-existent image
+Jun 24 15:59:20.012: INFO: Updating deployment nginx-deployment
+Jun 24 15:59:20.012: INFO: Waiting for observed generation 2
+Jun 24 15:59:22.021: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
+Jun 24 15:59:22.025: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
+Jun 24 15:59:22.028: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
+Jun 24 15:59:22.034: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
+Jun 24 15:59:22.034: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
+Jun 24 15:59:22.037: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
+Jun 24 15:59:22.042: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
+Jun 24 15:59:22.042: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
+Jun 24 15:59:22.048: INFO: Updating deployment nginx-deployment
+Jun 24 15:59:22.048: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
+Jun 24 15:59:22.059: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
+Jun 24 15:59:22.082: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59
+Jun 24 15:59:24.113: INFO: Deployment "nginx-deployment":
+&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-1956,SelfLink:/apis/apps/v1/namespaces/deployment-1956/deployments/nginx-deployment,UID:0207217a-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5824,Generation:3,CreationTimestamp:2019-06-24 15:59:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2019-06-24 15:59:22 +0000 UTC 2019-06-24 15:59:22 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-06-24 15:59:22 +0000 UTC 2019-06-24 15:59:11 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-5f9595f595" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}
+
+Jun 24 15:59:24.117: INFO: New ReplicaSet "nginx-deployment-5f9595f595" of Deployment "nginx-deployment":
+&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595,GenerateName:,Namespace:deployment-1956,SelfLink:/apis/apps/v1/namespaces/deployment-1956/replicasets/nginx-deployment-5f9595f595,UID:06d28dfb-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5818,Generation:3,CreationTimestamp:2019-06-24 15:59:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0207217a-9699-11e9-b70d-fa163ef83c94 0xc002a5d9e7 0xc002a5d9e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
+Jun 24 15:59:24.117: INFO: All old ReplicaSets of Deployment "nginx-deployment":
+Jun 24 15:59:24.118: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8,GenerateName:,Namespace:deployment-1956,SelfLink:/apis/apps/v1/namespaces/deployment-1956/replicasets/nginx-deployment-6f478d8d8,UID:0207d700-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5819,Generation:3,CreationTimestamp:2019-06-24 15:59:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0207217a-9699-11e9-b70d-fa163ef83c94 0xc002a5dab7 0xc002a5dab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
+Jun 24 15:59:24.132: INFO: Pod "nginx-deployment-5f9595f595-46j64" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-46j64,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-46j64,UID:080d1e1f-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5793,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d8390 0xc0025d8391}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d8410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d8430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.132: INFO: Pod "nginx-deployment-5f9595f595-4cxl4" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-4cxl4,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-4cxl4,UID:080f6a93-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5801,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d84b0 0xc0025d84b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d8530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d8550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.132: INFO: Pod "nginx-deployment-5f9595f595-66xt6" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-66xt6,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-66xt6,UID:080f7d4f-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5802,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d85d0 0xc0025d85d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d8650} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d8670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.133: INFO: Pod "nginx-deployment-5f9595f595-8m9nr" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-8m9nr,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-8m9nr,UID:06d4b8fc-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5742,Generation:0,CreationTimestamp:2019-06-24 15:59:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d86f0 0xc0025d86f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d8770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d8790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 15:59:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.133: INFO: Pod "nginx-deployment-5f9595f595-b6bsc" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-b6bsc,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-b6bsc,UID:06df2564-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5747,Generation:0,CreationTimestamp:2019-06-24 15:59:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d8870 0xc0025d8871}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d88f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d8910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 15:59:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.133: INFO: Pod "nginx-deployment-5f9595f595-c7nlf" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-c7nlf,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-c7nlf,UID:080f6025-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5805,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d89e0 0xc0025d89e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d8a60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d8a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.134: INFO: Pod "nginx-deployment-5f9595f595-jc77l" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-jc77l,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-jc77l,UID:06d34103-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5721,Generation:0,CreationTimestamp:2019-06-24 15:59:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d8b10 0xc0025d8b11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d8b90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d8bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 15:59:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.134: INFO: Pod "nginx-deployment-5f9595f595-m4lnv" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-m4lnv,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-m4lnv,UID:080f6dc4-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5806,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d8cb0 0xc0025d8cb1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d8d30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d8d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.134: INFO: Pod "nginx-deployment-5f9595f595-nlbmj" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-nlbmj,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-nlbmj,UID:080adf9c-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5825,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d8dd0 0xc0025d8dd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d8e50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d8e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 15:59:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.136: INFO: Pod "nginx-deployment-5f9595f595-pb92h" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-pb92h,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-pb92h,UID:081125dc-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5815,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d8f50 0xc0025d8f51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d8fd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d8ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.136: INFO: Pod "nginx-deployment-5f9595f595-t297z" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-t297z,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-t297z,UID:080d06ae-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5875,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d9080 0xc0025d9081}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d9100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d9120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 15:59:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.137: INFO: Pod "nginx-deployment-5f9595f595-ts2s8" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-ts2s8,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-ts2s8,UID:06ddf821-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5746,Generation:0,CreationTimestamp:2019-06-24 15:59:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d91f0 0xc0025d91f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d9280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d92a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 15:59:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.137: INFO: Pod "nginx-deployment-5f9595f595-txwz2" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-5f9595f595-txwz2,GenerateName:nginx-deployment-5f9595f595-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-5f9595f595-txwz2,UID:06d4b880-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5732,Generation:0,CreationTimestamp:2019-06-24 15:59:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 5f9595f595,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-5f9595f595 06d28dfb-9699-11e9-b70d-fa163ef83c94 0xc0025d9370 0xc0025d9371}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d93f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d9410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:20 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 15:59:20 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.138: INFO: Pod "nginx-deployment-6f478d8d8-7ktgp" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-7ktgp,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-7ktgp,UID:020967fa-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5687,Generation:0,CreationTimestamp:2019-06-24 15:59:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0025d94e0 0xc0025d94e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d9550} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d9570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:11 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:10.251.128.5,StartTime:2019-06-24 15:59:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-24 15:59:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b2bc7aae46be01eab0ec88fc54c58c19eca6068ebcb0ba090bff546de21dedeb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.139: INFO: Pod "nginx-deployment-6f478d8d8-c4927" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-c4927,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-c4927,UID:080cfbdd-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5867,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0025d9640 0xc0025d9641}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d96b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d96d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 15:59:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.139: INFO: Pod "nginx-deployment-6f478d8d8-cfbxd" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-cfbxd,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-cfbxd,UID:020dc1e9-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5690,Generation:0,CreationTimestamp:2019-06-24 15:59:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0025d9790 0xc0025d9791}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d9800} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d9820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:10.251.128.10,StartTime:2019-06-24 15:59:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-24 15:59:14 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://304b3c8180c6529ad6c69c6fec3497e26e45a17137eec93ca8ac3187aa6b71a5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.140: INFO: Pod "nginx-deployment-6f478d8d8-cl49h" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-cl49h,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-cl49h,UID:020dac13-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5677,Generation:0,CreationTimestamp:2019-06-24 15:59:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0025d98f0 0xc0025d98f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d9960} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d9980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:10.251.128.8,StartTime:2019-06-24 15:59:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-24 15:59:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://cbab7cdbbb4bc52ca853299f17e84611083c137c8f9de813ac17721506c83ec9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.141: INFO: Pod "nginx-deployment-6f478d8d8-dqz4g" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-dqz4g,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-dqz4g,UID:080ac56e-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5851,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0025d9a50 0xc0025d9a51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d9ac0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d9ae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 15:59:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.141: INFO: Pod "nginx-deployment-6f478d8d8-g262z" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-g262z,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-g262z,UID:080f76c1-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5808,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0025d9ba0 0xc0025d9ba1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d9c10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d9c30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.143: INFO: Pod "nginx-deployment-6f478d8d8-g6v5c" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-g6v5c,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-g6v5c,UID:080d1331-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5784,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0025d9cb0 0xc0025d9cb1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d9d20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d9d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.143: INFO: Pod "nginx-deployment-6f478d8d8-hxxz2" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-hxxz2,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-hxxz2,UID:080d0c00-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5786,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0025d9dc0 0xc0025d9dc1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d9e30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d9e50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.144: INFO: Pod "nginx-deployment-6f478d8d8-j59rx" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-j59rx,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-j59rx,UID:020dbb37-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5672,Generation:0,CreationTimestamp:2019-06-24 15:59:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0025d9ed0 0xc0025d9ed1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0025d9f40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0025d9f60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:10.251.128.9,StartTime:2019-06-24 15:59:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-24 15:59:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://eb588ddeed0825aa14bc2d8ead41ef0377828381be11fa9da42be2d81c56a2c4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.145: INFO: Pod "nginx-deployment-6f478d8d8-l7tv7" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-l7tv7,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-l7tv7,UID:080d0419-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5861,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc003266030 0xc003266031}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0032660a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0032660c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 15:59:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.145: INFO: Pod "nginx-deployment-6f478d8d8-ncd5d" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-ncd5d,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-ncd5d,UID:020ebdf9-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5667,Generation:0,CreationTimestamp:2019-06-24 15:59:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc003266180 0xc003266181}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0032661f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003266210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:10.251.128.14,StartTime:2019-06-24 15:59:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-24 15:59:14 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://070b322bec583daa2d1a6be19a852b56c36adb58b6211b0b592cac32cc37be1e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.147: INFO: Pod "nginx-deployment-6f478d8d8-nnrj4" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-nnrj4,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-nnrj4,UID:080f7a5c-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5800,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0032662e0 0xc0032662e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003266350} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003266370}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.147: INFO: Pod "nginx-deployment-6f478d8d8-nr95r" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-nr95r,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-nr95r,UID:080ae83a-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5841,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0032663f0 0xc0032663f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003266460} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003266480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 15:59:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.147: INFO: Pod "nginx-deployment-6f478d8d8-qtl5h" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-qtl5h,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-qtl5h,UID:080f7908-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5807,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc003266540 0xc003266541}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0032665b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0032665d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.147: INFO: Pod "nginx-deployment-6f478d8d8-t4r87" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-t4r87,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-t4r87,UID:080f8432-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5804,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc003266650 0xc003266651}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0032666c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0032666e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.148: INFO: Pod "nginx-deployment-6f478d8d8-vc2th" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-vc2th,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-vc2th,UID:020adb4d-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5660,Generation:0,CreationTimestamp:2019-06-24 15:59:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc003266760 0xc003266761}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0032667d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0032667f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:10.251.128.7,StartTime:2019-06-24 15:59:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-24 15:59:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ce34970af0177953aab24cb914064e3fc510199e0e25c263b87a64cdd429d999}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.149: INFO: Pod "nginx-deployment-6f478d8d8-vnswk" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-vnswk,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-vnswk,UID:080f6d1a-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5810,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0032668c0 0xc0032668c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003266930} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003266950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.149: INFO: Pod "nginx-deployment-6f478d8d8-x8tw5" is not available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-x8tw5,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-x8tw5,UID:0809e224-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5803,Generation:0,CreationTimestamp:2019-06-24 15:59:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc0032669d0 0xc0032669d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003266a40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003266a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:22 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 15:59:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.150: INFO: Pod "nginx-deployment-6f478d8d8-zbcvp" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-zbcvp,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-zbcvp,UID:020db72e-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5655,Generation:0,CreationTimestamp:2019-06-24 15:59:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc003266b20 0xc003266b21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003266b90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003266bb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:10.251.128.11,StartTime:2019-06-24 15:59:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-24 15:59:14 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0f3e0e5019e02282f069622dc706186ead76400a40f23533bea97c8a7e0d6139}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+Jun 24 15:59:24.150: INFO: Pod "nginx-deployment-6f478d8d8-zdm5l" is available:
+&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-6f478d8d8-zdm5l,GenerateName:nginx-deployment-6f478d8d8-,Namespace:deployment-1956,SelfLink:/api/v1/namespaces/deployment-1956/pods/nginx-deployment-6f478d8d8-zdm5l,UID:020ac41f-9699-11e9-b70d-fa163ef83c94,ResourceVersion:5682,Generation:0,CreationTimestamp:2019-06-24 15:59:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 6f478d8d8,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-6f478d8d8 0207d700-9699-11e9-b70d-fa163ef83c94 0xc003266c80 0xc003266c81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4bm7d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4bm7d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-4bm7d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003266cf0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003266d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 15:59:12 +0000 UTC  }],Message:,Reason:,HostIP:10.1.0.12,PodIP:10.251.128.6,StartTime:2019-06-24 15:59:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-24 15:59:13 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://4a80d1828a58f71ca82c09dfd2d20d091d02fecfcce46fe2664259d31cb82aa1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
+[AfterEach] [sig-apps] Deployment
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:59:24.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "deployment-1956" for this suite.
+Jun 24 15:59:32.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:59:32.258: INFO: namespace deployment-1956 deletion completed in 8.104668028s
+
+• [SLOW TEST:20.326 seconds]
+[sig-apps] Deployment
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  deployment should support proportional scaling [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should delete pods created by rc when not orphaning [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:59:32.258: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete pods created by rc when not orphaning [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: create the rc
+STEP: delete the rc
+STEP: wait for all pods to be garbage collected
+STEP: Gathering metrics
+Jun 24 15:59:42.360: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+	[quantile=0.5] = 269
+	[quantile=0.9] = 252821
+	[quantile=0.99] = 400882
+For garbage_collector_attempt_to_delete_work_duration:
+	[quantile=0.5] = 230260
+	[quantile=0.9] = 546901
+	[quantile=0.99] = 610111
+For garbage_collector_attempt_to_orphan_queue_latency:
+	[quantile=0.5] = NaN
+	[quantile=0.9] = NaN
+	[quantile=0.99] = NaN
+For garbage_collector_attempt_to_orphan_work_duration:
+	[quantile=0.5] = NaN
+	[quantile=0.9] = NaN
+	[quantile=0.99] = NaN
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+	[quantile=0.5] = 5
+	[quantile=0.9] = 8
+	[quantile=0.99] = 32
+For garbage_collector_graph_changes_work_duration:
+	[quantile=0.5] = 16
+	[quantile=0.9] = 30
+	[quantile=0.99] = 67
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+	[quantile=0.5] = 16
+	[quantile=0.9] = 28
+	[quantile=0.99] = 46
+For namespace_queue_latency_sum:
+	[] = 2266
+For namespace_queue_latency_count:
+	[] = 120
+For namespace_retries:
+	[] = 121
+For namespace_work_duration:
+	[quantile=0.5] = 167316
+	[quantile=0.9] = 265926
+	[quantile=0.99] = 628482
+For namespace_work_duration_sum:
+	[] = 18970293
+For namespace_work_duration_count:
+	[] = 120
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:59:42.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-9835" for this suite.
+Jun 24 15:59:48.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:59:48.467: INFO: namespace gc-9835 deletion completed in 6.101494983s
+
+• [SLOW TEST:16.209 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should delete pods created by rc when not orphaning [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:59:48.468: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating configMap with name configmap-test-volume-map-17cd27c4-9699-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume configMaps
+Jun 24 15:59:48.507: INFO: Waiting up to 5m0s for pod "pod-configmaps-17cd99ca-9699-11e9-8bcb-526dc0a539dd" in namespace "configmap-9551" to be "success or failure"
+Jun 24 15:59:48.514: INFO: Pod "pod-configmaps-17cd99ca-9699-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107993ms
+Jun 24 15:59:50.518: INFO: Pod "pod-configmaps-17cd99ca-9699-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010157362s
+STEP: Saw pod success
+Jun 24 15:59:50.518: INFO: Pod "pod-configmaps-17cd99ca-9699-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:59:50.521: INFO: Trying to get logs from node minion pod pod-configmaps-17cd99ca-9699-11e9-8bcb-526dc0a539dd container configmap-volume-test: 
+STEP: delete the pod
+Jun 24 15:59:50.543: INFO: Waiting for pod pod-configmaps-17cd99ca-9699-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:59:50.546: INFO: Pod pod-configmaps-17cd99ca-9699-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:59:50.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-9551" for this suite.
+Jun 24 15:59:56.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 15:59:56.655: INFO: namespace configmap-9551 deletion completed in 6.106620704s
+
+• [SLOW TEST:8.187 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
+  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] ConfigMap 
+  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 15:59:56.670: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename configmap
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating configMap with name configmap-test-volume-1cb10acb-9699-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume configMaps
+Jun 24 15:59:56.712: INFO: Waiting up to 5m0s for pod "pod-configmaps-1cb170ee-9699-11e9-8bcb-526dc0a539dd" in namespace "configmap-1270" to be "success or failure"
+Jun 24 15:59:56.729: INFO: Pod "pod-configmaps-1cb170ee-9699-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.373835ms
+Jun 24 15:59:58.733: INFO: Pod "pod-configmaps-1cb170ee-9699-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021423113s
+STEP: Saw pod success
+Jun 24 15:59:58.734: INFO: Pod "pod-configmaps-1cb170ee-9699-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 15:59:58.737: INFO: Trying to get logs from node minion pod pod-configmaps-1cb170ee-9699-11e9-8bcb-526dc0a539dd container configmap-volume-test: 
+STEP: delete the pod
+Jun 24 15:59:58.761: INFO: Waiting for pod pod-configmaps-1cb170ee-9699-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 15:59:58.765: INFO: Pod pod-configmaps-1cb170ee-9699-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] ConfigMap
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 15:59:58.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "configmap-1270" for this suite.
+Jun 24 16:00:04.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:00:04.858: INFO: namespace configmap-1270 deletion completed in 6.088745203s
+
+• [SLOW TEST:8.189 seconds]
+[sig-storage] ConfigMap
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
+  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:00:04.859: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating secret with name secret-test-2192d5f2-9699-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume secrets
+Jun 24 16:00:04.906: INFO: Waiting up to 5m0s for pod "pod-secrets-21934d6b-9699-11e9-8bcb-526dc0a539dd" in namespace "secrets-670" to be "success or failure"
+Jun 24 16:00:04.910: INFO: Pod "pod-secrets-21934d6b-9699-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.421114ms
+Jun 24 16:00:06.914: INFO: Pod "pod-secrets-21934d6b-9699-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007716365s
+STEP: Saw pod success
+Jun 24 16:00:06.914: INFO: Pod "pod-secrets-21934d6b-9699-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 16:00:06.916: INFO: Trying to get logs from node minion pod pod-secrets-21934d6b-9699-11e9-8bcb-526dc0a539dd container secret-volume-test: 
+STEP: delete the pod
+Jun 24 16:00:06.946: INFO: Waiting for pod pod-secrets-21934d6b-9699-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 16:00:06.949: INFO: Pod pod-secrets-21934d6b-9699-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:00:06.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-670" for this suite.
+Jun 24 16:00:12.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:00:13.048: INFO: namespace secrets-670 deletion completed in 6.095251541s
+
+• [SLOW TEST:8.190 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  Should recreate evicted statefulset [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:00:13.051: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
+STEP: Creating service test in namespace statefulset-9284
+[It] Should recreate evicted statefulset [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Looking for a node to schedule stateful set and pod
+STEP: Creating pod with conflicting port in namespace statefulset-9284
+STEP: Creating statefulset with conflicting port in namespace statefulset-9284
+STEP: Waiting until pod test-pod will start running in namespace statefulset-9284
+STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9284
+Jun 24 16:00:17.136: INFO: Observed stateful pod in namespace: statefulset-9284, name: ss-0, uid: 28c22aeb-9699-11e9-b70d-fa163ef83c94, status phase: Pending. Waiting for statefulset controller to delete.
+Jun 24 16:00:17.727: INFO: Observed stateful pod in namespace: statefulset-9284, name: ss-0, uid: 28c22aeb-9699-11e9-b70d-fa163ef83c94, status phase: Failed. Waiting for statefulset controller to delete.
+Jun 24 16:00:17.741: INFO: Observed stateful pod in namespace: statefulset-9284, name: ss-0, uid: 28c22aeb-9699-11e9-b70d-fa163ef83c94, status phase: Failed. Waiting for statefulset controller to delete.
+Jun 24 16:00:17.748: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9284
+STEP: Removing pod with conflicting port in namespace statefulset-9284
+STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9284 and will be in running state
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
+Jun 24 16:00:21.784: INFO: Deleting all statefulset in ns statefulset-9284
+Jun 24 16:00:21.787: INFO: Scaling statefulset ss to 0
+Jun 24 16:00:31.806: INFO: Waiting for statefulset status.replicas updated to 0
+Jun 24 16:00:31.809: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:00:31.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-9284" for this suite.
+Jun 24 16:00:37.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:00:37.924: INFO: namespace statefulset-9284 deletion completed in 6.099239185s
+
+• [SLOW TEST:24.873 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    Should recreate evicted statefulset [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:00:37.924: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating secret with name secret-test-map-354acc3b-9699-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume secrets
+Jun 24 16:00:37.988: INFO: Waiting up to 5m0s for pod "pod-secrets-354bbb70-9699-11e9-8bcb-526dc0a539dd" in namespace "secrets-6882" to be "success or failure"
+Jun 24 16:00:37.992: INFO: Pod "pod-secrets-354bbb70-9699-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184168ms
+Jun 24 16:00:39.996: INFO: Pod "pod-secrets-354bbb70-9699-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00840933s
+Jun 24 16:00:42.000: INFO: Pod "pod-secrets-354bbb70-9699-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012528527s
+STEP: Saw pod success
+Jun 24 16:00:42.000: INFO: Pod "pod-secrets-354bbb70-9699-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 16:00:42.004: INFO: Trying to get logs from node minion pod pod-secrets-354bbb70-9699-11e9-8bcb-526dc0a539dd container secret-volume-test: 
+STEP: delete the pod
+Jun 24 16:00:42.032: INFO: Waiting for pod pod-secrets-354bbb70-9699-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 16:00:42.045: INFO: Pod pod-secrets-354bbb70-9699-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:00:42.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-6882" for this suite.
+Jun 24 16:00:48.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:00:48.153: INFO: namespace secrets-6882 deletion completed in 6.104548604s
+
+• [SLOW TEST:10.229 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl describe 
+  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:00:48.154: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+Jun 24 16:00:48.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 version --client'
+Jun 24 16:00:48.260: INFO: stderr: ""
+Jun 24 16:00:48.260: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"14\", GitVersion:\"v1.14.3\", GitCommit:\"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0\", GitTreeState:\"clean\", BuildDate:\"2019-06-06T01:44:30Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
+Jun 24 16:00:48.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-2036'
+Jun 24 16:00:48.529: INFO: stderr: ""
+Jun 24 16:00:48.529: INFO: stdout: "replicationcontroller/redis-master created\n"
+Jun 24 16:00:48.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-2036'
+Jun 24 16:00:48.775: INFO: stderr: ""
+Jun 24 16:00:48.775: INFO: stdout: "service/redis-master created\n"
+STEP: Waiting for Redis master to start.
+Jun 24 16:00:49.780: INFO: Selector matched 1 pods for map[app:redis]
+Jun 24 16:00:49.780: INFO: Found 0 / 1
+Jun 24 16:00:50.780: INFO: Selector matched 1 pods for map[app:redis]
+Jun 24 16:00:50.780: INFO: Found 1 / 1
+Jun 24 16:00:50.780: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+Jun 24 16:00:50.785: INFO: Selector matched 1 pods for map[app:redis]
+Jun 24 16:00:50.785: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+Jun 24 16:00:50.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 describe pod redis-master-mx74f --namespace=kubectl-2036'
+Jun 24 16:00:50.898: INFO: stderr: ""
+Jun 24 16:00:50.898: INFO: stdout: "Name:               redis-master-mx74f\nNamespace:          kubectl-2036\nPriority:           0\nPriorityClassName:  \nNode:               minion/10.1.0.12\nStart Time:         Mon, 24 Jun 2019 16:00:48 +0000\nLabels:             app=redis\n                    role=master\nAnnotations:        \nStatus:             Running\nIP:                 10.251.128.5\nControlled By:      ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://26e5a5277a33103a23b08228d61521eba2285b769e264f0d9c7bc0eacbcbf0fd\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 24 Jun 2019 16:00:49 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-g9f7h (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-g9f7h:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-g9f7h\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  2s    default-scheduler  Successfully assigned kubectl-2036/redis-master-mx74f to minion\n  Normal  Pulled     1s    kubelet, minion    Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, minion    Created container redis-master\n  Normal  Started    1s    kubelet, minion    Started container redis-master\n"
+Jun 24 16:00:50.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 describe rc redis-master --namespace=kubectl-2036'
+Jun 24 16:00:51.011: INFO: stderr: ""
+Jun 24 16:00:51.011: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-2036\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: redis-master-mx74f\n"
+Jun 24 16:00:51.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 describe service redis-master --namespace=kubectl-2036'
+Jun 24 16:00:51.121: INFO: stderr: ""
+Jun 24 16:00:51.121: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-2036\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.241.235.106\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.251.128.5:6379\nSession Affinity:  None\nEvents:            \n"
+Jun 24 16:00:51.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 describe node master'
+Jun 24 16:00:51.253: INFO: stderr: ""
+Jun 24 16:00:51.253: INFO: stdout: "Name:               master\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=master\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\n                    zone=master\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Mon, 24 Jun 2019 15:28:04 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Mon, 24 Jun 2019 15:29:36 +0000   Mon, 24 Jun 2019 15:29:36 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Mon, 24 Jun 2019 16:00:44 +0000   Mon, 24 Jun 2019 15:27:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Mon, 24 Jun 2019 16:00:44 +0000   Mon, 24 Jun 2019 15:27:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Mon, 24 Jun 2019 16:00:44 +0000   Mon, 24 Jun 2019 15:27:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Mon, 24 Jun 2019 16:00:44 +0000   Mon, 24 Jun 2019 15:29:22 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:  10.1.0.11\n  Hostname:    master\nCapacity:\n cpu:                8\n ephemeral-storage:  50758760Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             32946808Ki\n pods:               110\nAllocatable:\n cpu:                7800m\n ephemeral-storage:  46779273139\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             32344408Ki\n pods:               110\nSystem Info:\n Machine ID:                 cf0103b22e87455d840ec02695143254\n System UUID:                CF0103B2-2E87-455D-840E-C02695143254\n Boot ID:                    7000569d-154a-42af-b9c1-51f773bea7e3\n Kernel Version:             4.4.0-141-generic\n OS Image:                   Ubuntu 16.04.5 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.6.2\n Kubelet Version:            v1.14.3\n Kube-Proxy Version:         v1.14.3\nPodCIDR:                     10.251.0.0/24\nNon-terminated Pods:         (10 in total)\n  Namespace                  Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                                       ------------  ----------  ---------------  -------------  ---\n  heptio-sonobuoy            sonobuoy-e2e-job-5b2a161d72614acd                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m\n  heptio-sonobuoy            sonobuoy-systemd-logs-daemon-set-7e1461ca4731443f-2pk4z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m\n  kube-system                coredns-97c4b444f-8l248                                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30m\n  kube-system                dns-autoscaler-5fc5fdbf6-v2qt9                             20m (0%)      0 (0%)      10Mi (0%)        0 (0%)         30m\n  kube-system                kube-apiserver-master                                      250m (3%)     0 (0%)      0 (0%)           0 (0%)         32m\n  kube-system                kube-controller-manager-master                             200m (2%)     0 (0%)      0 (0%)           0 (0%)         32m\n  kube-system                kube-proxy-29wx4                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m\n  kube-system                kube-scheduler-master                                      100m (1%)     0 (0%)      0 (0%)           0 (0%)         32m\n  kube-system                nodelocaldns-9lhfh                                         100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30m\n  kube-system                weave-net-r2zvv                                            20m (0%)      0 (0%)      0 (0%)           0 (0%)         31m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                790m (10%)  0 (0%)\n  memory             150Mi (0%)  340Mi (1%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:\n  Type    Reason                   Age                From                Message\n  ----    ------                   ----               ----                -------\n  Normal  NodeHasSufficientMemory  32m (x8 over 32m)  kubelet, master     Node master status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    32m (x8 over 32m)  kubelet, master     Node master status is now: 
NodeHasNoDiskPressure\n  Normal  NodeHasSufficientPID     32m (x7 over 32m)  kubelet, master     Node master status is now: NodeHasSufficientPID\n  Normal  Starting                 32m                kube-proxy, master  Starting kube-proxy.\n  Normal  Starting                 32m                kubelet, master     Starting kubelet.\n  Normal  NodeHasSufficientMemory  32m                kubelet, master     Node master status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    32m                kubelet, master     Node master status is now: NodeHasNoDiskPressure\n  Normal  NodeHasSufficientPID     32m                kubelet, master     Node master status is now: NodeHasSufficientPID\n  Normal  NodeAllocatableEnforced  32m                kubelet, master     Updated Node Allocatable limit across pods\n  Normal  NodeReady                31m                kubelet, master     Node master status is now: NodeReady\n  Normal  Starting                 31m                kube-proxy, master  Starting kube-proxy.\n  Normal  Starting                 29m                kubelet, master     Starting kubelet.\n  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet, master     Node master status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet, master     Node master status is now: NodeHasNoDiskPressure\n  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet, master     Node master status is now: NodeHasSufficientPID\n  Normal  NodeAllocatableEnforced  29m                kubelet, master     Updated Node Allocatable limit across pods\n"
+Jun 24 16:00:51.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 describe namespace kubectl-2036'
+Jun 24 16:00:51.359: INFO: stderr: ""
+Jun 24 16:00:51.359: INFO: stdout: "Name:         kubectl-2036\nLabels:       e2e-framework=kubectl\n              e2e-run=368df000-9695-11e9-8bcb-526dc0a539dd\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:00:51.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-2036" for this suite.
+Jun 24 16:01:13.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:01:13.450: INFO: namespace kubectl-2036 deletion completed in 22.08749386s
+
+• [SLOW TEST:25.296 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl describe
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+[sig-storage] Projected configMap 
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:01:13.450: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating configMap with name projected-configmap-test-volume-4a74d2f7-9699-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume configMaps
+Jun 24 16:01:13.499: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4a7546d7-9699-11e9-8bcb-526dc0a539dd" in namespace "projected-3391" to be "success or failure"
+Jun 24 16:01:13.506: INFO: Pod "pod-projected-configmaps-4a7546d7-9699-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.741287ms
+Jun 24 16:01:15.510: INFO: Pod "pod-projected-configmaps-4a7546d7-9699-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011176311s
+STEP: Saw pod success
+Jun 24 16:01:15.510: INFO: Pod "pod-projected-configmaps-4a7546d7-9699-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 16:01:15.515: INFO: Trying to get logs from node minion pod pod-projected-configmaps-4a7546d7-9699-11e9-8bcb-526dc0a539dd container projected-configmap-volume-test: 
+STEP: delete the pod
+Jun 24 16:01:15.544: INFO: Waiting for pod pod-projected-configmaps-4a7546d7-9699-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 16:01:15.547: INFO: Pod pod-projected-configmaps-4a7546d7-9699-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:01:15.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-3391" for this suite.
+Jun 24 16:01:21.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:01:21.656: INFO: namespace projected-3391 deletion completed in 6.106506019s
+
+• [SLOW TEST:8.207 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSS
+------------------------------
+[k8s.io] KubeletManagedEtcHosts 
+  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] KubeletManagedEtcHosts
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:01:21.657: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Setting up the test
+STEP: Creating hostNetwork=false pod
+STEP: Creating hostNetwork=true pod
+STEP: Running the test
+STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
+Jun 24 16:01:29.731: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5554 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 16:01:29.731: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 16:01:29.914: INFO: Exec stderr: ""
+Jun 24 16:01:29.914: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5554 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 16:01:29.914: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 16:01:30.076: INFO: Exec stderr: ""
+Jun 24 16:01:30.076: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5554 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 16:01:30.076: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 16:01:30.232: INFO: Exec stderr: ""
+Jun 24 16:01:30.232: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5554 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 16:01:30.232: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 16:01:30.385: INFO: Exec stderr: ""
+STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
+Jun 24 16:01:30.386: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5554 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 16:01:30.386: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 16:01:30.562: INFO: Exec stderr: ""
+Jun 24 16:01:30.563: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5554 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 16:01:30.565: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 16:01:30.727: INFO: Exec stderr: ""
+STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
+Jun 24 16:01:30.728: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5554 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 16:01:30.728: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 16:01:30.888: INFO: Exec stderr: ""
+Jun 24 16:01:30.888: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5554 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 16:01:30.888: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 16:01:31.043: INFO: Exec stderr: ""
+Jun 24 16:01:31.043: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5554 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 16:01:31.043: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 16:01:31.198: INFO: Exec stderr: ""
+Jun 24 16:01:31.198: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5554 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 16:01:31.198: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 16:01:31.350: INFO: Exec stderr: ""
+[AfterEach] [k8s.io] KubeletManagedEtcHosts
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:01:31.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "e2e-kubelet-etc-hosts-5554" for this suite.
+Jun 24 16:02:27.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:02:27.453: INFO: namespace e2e-kubelet-etc-hosts-5554 deletion completed in 56.099428119s
+
+• [SLOW TEST:65.797 seconds]
+[k8s.io] KubeletManagedEtcHosts
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+S
+------------------------------
+[sig-storage] Projected configMap 
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:02:27.457: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating projection with configMap that has name projected-configmap-test-upd-7691f085-9699-11e9-8bcb-526dc0a539dd
+STEP: Creating the pod
+STEP: Updating configmap projected-configmap-test-upd-7691f085-9699-11e9-8bcb-526dc0a539dd
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Projected configMap
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:02:31.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-7145" for this suite.
+Jun 24 16:02:53.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:02:53.671: INFO: namespace projected-7145 deletion completed in 22.09708662s
+
+• [SLOW TEST:26.214 seconds]
+[sig-storage] Projected configMap
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
+  updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
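The projected-configMap test above creates a pod whose volume projects a ConfigMap, updates the ConfigMap, and waits for the new data to appear in the mounted file. A minimal sketch of the same wiring; all names here are hypothetical, not taken from the log:

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config           # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: demo-config

After creating both objects and editing demo-config, the kubelet's sync loop eventually rewrites /etc/projected/data-1 in place without restarting the pod; that propagation delay is what the "waiting to observe update in volume" step covers.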
+SSS
+------------------------------
+[sig-node] Downward API 
+  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-node] Downward API
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:02:53.671: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test downward api env vars
+Jun 24 16:02:53.722: INFO: Waiting up to 5m0s for pod "downward-api-8632c40b-9699-11e9-8bcb-526dc0a539dd" in namespace "downward-api-206" to be "success or failure"
+Jun 24 16:02:53.727: INFO: Pod "downward-api-8632c40b-9699-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.341877ms
+Jun 24 16:02:55.731: INFO: Pod "downward-api-8632c40b-9699-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008611914s
+Jun 24 16:02:57.735: INFO: Pod "downward-api-8632c40b-9699-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012630415s
+STEP: Saw pod success
+Jun 24 16:02:57.735: INFO: Pod "downward-api-8632c40b-9699-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 16:02:57.738: INFO: Trying to get logs from node minion pod downward-api-8632c40b-9699-11e9-8bcb-526dc0a539dd container dapi-container: 
+STEP: delete the pod
+Jun 24 16:02:57.772: INFO: Waiting for pod downward-api-8632c40b-9699-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 16:02:57.776: INFO: Pod downward-api-8632c40b-9699-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-node] Downward API
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:02:57.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-206" for this suite.
+Jun 24 16:03:03.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:03:03.874: INFO: namespace downward-api-206 deletion completed in 6.095245355s
+
+• [SLOW TEST:10.203 seconds]
+[sig-node] Downward API
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38
+  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
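The Downward API test above injects a container's own resource requests and limits as environment variables via resourceFieldRef. A minimal sketch of a pod that does the same; the names and resource values are hypothetical, not taken from the log:

apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]   # prints CPU_LIMIT and MEMORY_REQUEST to the pod log
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory

`kubectl logs downward-env-demo` then shows the resolved values; note that with the default divisor, CPU quantities are rounded up to whole cores.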
+SSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:03:03.875: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
+STEP: Setting up data
+[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating pod pod-subpath-test-configmap-mjdz
+STEP: Creating a pod to test atomic-volume-subpath
+Jun 24 16:03:03.922: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mjdz" in namespace "subpath-3202" to be "success or failure"
+Jun 24 16:03:03.928: INFO: Pod "pod-subpath-test-configmap-mjdz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07247ms
+Jun 24 16:03:05.932: INFO: Pod "pod-subpath-test-configmap-mjdz": Phase="Running", Reason="", readiness=true. Elapsed: 2.010204467s
+Jun 24 16:03:07.937: INFO: Pod "pod-subpath-test-configmap-mjdz": Phase="Running", Reason="", readiness=true. Elapsed: 4.014468202s
+Jun 24 16:03:09.941: INFO: Pod "pod-subpath-test-configmap-mjdz": Phase="Running", Reason="", readiness=true. Elapsed: 6.018559929s
+Jun 24 16:03:11.945: INFO: Pod "pod-subpath-test-configmap-mjdz": Phase="Running", Reason="", readiness=true. Elapsed: 8.022665605s
+Jun 24 16:03:13.949: INFO: Pod "pod-subpath-test-configmap-mjdz": Phase="Running", Reason="", readiness=true. Elapsed: 10.027155328s
+Jun 24 16:03:15.964: INFO: Pod "pod-subpath-test-configmap-mjdz": Phase="Running", Reason="", readiness=true. Elapsed: 12.042160319s
+Jun 24 16:03:17.968: INFO: Pod "pod-subpath-test-configmap-mjdz": Phase="Running", Reason="", readiness=true. Elapsed: 14.04616342s
+Jun 24 16:03:19.973: INFO: Pod "pod-subpath-test-configmap-mjdz": Phase="Running", Reason="", readiness=true. Elapsed: 16.050456308s
+Jun 24 16:03:21.977: INFO: Pod "pod-subpath-test-configmap-mjdz": Phase="Running", Reason="", readiness=true. Elapsed: 18.054532135s
+Jun 24 16:03:23.982: INFO: Pod "pod-subpath-test-configmap-mjdz": Phase="Running", Reason="", readiness=true. Elapsed: 20.059710518s
+Jun 24 16:03:25.986: INFO: Pod "pod-subpath-test-configmap-mjdz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.06373029s
+STEP: Saw pod success
+Jun 24 16:03:25.986: INFO: Pod "pod-subpath-test-configmap-mjdz" satisfied condition "success or failure"
+Jun 24 16:03:25.992: INFO: Trying to get logs from node minion pod pod-subpath-test-configmap-mjdz container test-container-subpath-configmap-mjdz: 
+STEP: delete the pod
+Jun 24 16:03:26.017: INFO: Waiting for pod pod-subpath-test-configmap-mjdz to disappear
+Jun 24 16:03:26.019: INFO: Pod pod-subpath-test-configmap-mjdz no longer exists
+STEP: Deleting pod pod-subpath-test-configmap-mjdz
+Jun 24 16:03:26.019: INFO: Deleting pod "pod-subpath-test-configmap-mjdz" in namespace "subpath-3202"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:03:26.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-3202" for this suite.
+Jun 24 16:03:32.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:03:32.139: INFO: namespace subpath-3202 deletion completed in 6.112955013s
+
+• [SLOW TEST:28.265 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  Atomic writer volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
+    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
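The subPath test above mounts a single ConfigMap key onto a path where a file already exists, rather than shadowing the whole directory. A minimal sketch of that mount shape; the names are hypothetical, not taken from the log:

apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo          # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: /etc/config/data-1   # only this file is replaced; siblings are untouched
      subPath: data-1
  volumes:
  - name: config
    configMap:
      name: demo-config               # hypothetical ConfigMap containing a data-1 key

Because a subPath mount binds a single file, later updates to the ConfigMap are not propagated into the container, which is why the atomic-writer suite exercises this path separately from whole-volume mounts.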
+SSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Guestbook application 
+  should create and stop a working application  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:03:32.139: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[It] should create and stop a working application  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: creating all guestbook components
+Jun 24 16:03:32.170: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: redis-slave
+  labels:
+    app: redis
+    role: slave
+    tier: backend
+spec:
+  ports:
+  - port: 6379
+  selector:
+    app: redis
+    role: slave
+    tier: backend
+
+Jun 24 16:03:32.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-4144'
+Jun 24 16:03:32.446: INFO: stderr: ""
+Jun 24 16:03:32.446: INFO: stdout: "service/redis-slave created\n"
+Jun 24 16:03:32.446: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: redis-master
+  labels:
+    app: redis
+    role: master
+    tier: backend
+spec:
+  ports:
+  - port: 6379
+    targetPort: 6379
+  selector:
+    app: redis
+    role: master
+    tier: backend
+
+Jun 24 16:03:32.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-4144'
+Jun 24 16:03:32.725: INFO: stderr: ""
+Jun 24 16:03:32.725: INFO: stdout: "service/redis-master created\n"
+Jun 24 16:03:32.725: INFO: apiVersion: v1
+kind: Service
+metadata:
+  name: frontend
+  labels:
+    app: guestbook
+    tier: frontend
+spec:
+  # if your cluster supports it, uncomment the following to automatically create
+  # an external load-balanced IP for the frontend service.
+  # type: LoadBalancer
+  ports:
+  - port: 80
+  selector:
+    app: guestbook
+    tier: frontend
+
+Jun 24 16:03:32.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-4144'
+Jun 24 16:03:32.978: INFO: stderr: ""
+Jun 24 16:03:32.978: INFO: stdout: "service/frontend created\n"
+Jun 24 16:03:32.978: INFO: apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: frontend
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: guestbook
+      tier: frontend
+  template:
+    metadata:
+      labels:
+        app: guestbook
+        tier: frontend
+    spec:
+      containers:
+      - name: php-redis
+        image: gcr.io/google-samples/gb-frontend:v6
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        env:
+        - name: GET_HOSTS_FROM
+          value: dns
+          # If your cluster config does not include a dns service, then to
+          # instead access environment variables to find service host
+          # info, comment out the 'value: dns' line above, and uncomment the
+          # line below:
+          # value: env
+        ports:
+        - containerPort: 80
+
+Jun 24 16:03:32.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-4144'
+Jun 24 16:03:33.225: INFO: stderr: ""
+Jun 24 16:03:33.225: INFO: stdout: "deployment.apps/frontend created\n"
+Jun 24 16:03:33.225: INFO: apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: redis-master
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: redis
+      role: master
+      tier: backend
+  template:
+    metadata:
+      labels:
+        app: redis
+        role: master
+        tier: backend
+    spec:
+      containers:
+      - name: master
+        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        ports:
+        - containerPort: 6379
+
+Jun 24 16:03:33.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-4144'
+Jun 24 16:03:33.490: INFO: stderr: ""
+Jun 24 16:03:33.490: INFO: stdout: "deployment.apps/redis-master created\n"
+Jun 24 16:03:33.490: INFO: apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: redis-slave
+spec:
+  replicas: 2
+  selector:
+    matchLabels:
+      app: redis
+      role: slave
+      tier: backend
+  template:
+    metadata:
+      labels:
+        app: redis
+        role: slave
+        tier: backend
+    spec:
+      containers:
+      - name: slave
+        image: gcr.io/google-samples/gb-redisslave:v3
+        resources:
+          requests:
+            cpu: 100m
+            memory: 100Mi
+        env:
+        - name: GET_HOSTS_FROM
+          value: dns
+          # If your cluster config does not include a dns service, then to
+          # instead access an environment variable to find the master
+          # service's host, comment out the 'value: dns' line above, and
+          # uncomment the line below:
+          # value: env
+        ports:
+        - containerPort: 6379
+
+Jun 24 16:03:33.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-4144'
+Jun 24 16:03:33.733: INFO: stderr: ""
+Jun 24 16:03:33.733: INFO: stdout: "deployment.apps/redis-slave created\n"
+STEP: validating guestbook app
+Jun 24 16:03:33.733: INFO: Waiting for all frontend pods to be Running.
+Jun 24 16:03:48.785: INFO: Waiting for frontend to serve content.
+Jun 24 16:03:49.816: INFO: Failed to get response from guestbook. err: , response: 
+Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155
+Stack trace:
+#0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111)
+#1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4)
+#2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters))
+#3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource()
+#4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect()
+#5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
+
+Jun 24 16:03:54.840: INFO: Trying to add a new entry to the guestbook.
+Jun 24 16:03:54.861: INFO: Verifying that added entry can be retrieved.
+STEP: using delete to clean up resources
+Jun 24 16:03:54.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete --grace-period=0 --force -f - --namespace=kubectl-4144'
+Jun 24 16:03:55.005: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 24 16:03:55.005: INFO: stdout: "service \"redis-slave\" force deleted\n"
+STEP: using delete to clean up resources
+Jun 24 16:03:55.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete --grace-period=0 --force -f - --namespace=kubectl-4144'
+Jun 24 16:03:55.139: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 24 16:03:55.139: INFO: stdout: "service \"redis-master\" force deleted\n"
+STEP: using delete to clean up resources
+Jun 24 16:03:55.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete --grace-period=0 --force -f - --namespace=kubectl-4144'
+Jun 24 16:03:55.248: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 24 16:03:55.248: INFO: stdout: "service \"frontend\" force deleted\n"
+STEP: using delete to clean up resources
+Jun 24 16:03:55.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete --grace-period=0 --force -f - --namespace=kubectl-4144'
+Jun 24 16:03:55.357: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 24 16:03:55.357: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
+STEP: using delete to clean up resources
+Jun 24 16:03:55.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete --grace-period=0 --force -f - --namespace=kubectl-4144'
+Jun 24 16:03:55.454: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 24 16:03:55.454: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
+STEP: using delete to clean up resources
+Jun 24 16:03:55.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete --grace-period=0 --force -f - --namespace=kubectl-4144'
+Jun 24 16:03:55.546: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 24 16:03:55.547: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:03:55.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-4144" for this suite.
+Jun 24 16:04:33.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:04:33.648: INFO: namespace kubectl-4144 deletion completed in 38.098654296s
+
+• [SLOW TEST:61.509 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Guestbook application
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should create and stop a working application  [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSS
+------------------------------
+[sig-network] Networking Granular Checks: Pods 
+  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-network] Networking
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:04:33.649: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename pod-network-test
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Performing setup for networking test in namespace pod-network-test-6597
+STEP: creating a selector
+STEP: Creating the service pods in kubernetes
+Jun 24 16:04:33.678: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
+STEP: Creating test pods
+Jun 24 16:04:51.762: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.251.128.6:8080/dial?request=hostName&protocol=http&host=10.251.128.5&port=8080&tries=1'] Namespace:pod-network-test-6597 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
+Jun 24 16:04:51.762: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+Jun 24 16:04:51.966: INFO: Waiting for endpoints: map[]
+[AfterEach] [sig-network] Networking
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:04:51.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "pod-network-test-6597" for this suite.
+Jun 24 16:05:13.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:05:14.084: INFO: namespace pod-network-test-6597 deletion completed in 22.113813569s
+
+• [SLOW TEST:40.435 seconds]
+[sig-network] Networking
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
+  Granular Checks: Pods
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
+    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:05:14.089: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating secret with name s-test-opt-del-d9e44754-9699-11e9-8bcb-526dc0a539dd
+STEP: Creating secret with name s-test-opt-upd-d9e447a3-9699-11e9-8bcb-526dc0a539dd
+STEP: Creating the pod
+STEP: Deleting secret s-test-opt-del-d9e44754-9699-11e9-8bcb-526dc0a539dd
+STEP: Updating secret s-test-opt-upd-d9e447a3-9699-11e9-8bcb-526dc0a539dd
+STEP: Creating secret with name s-test-opt-create-d9e447c6-9699-11e9-8bcb-526dc0a539dd
+STEP: waiting to observe update in volume
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:05:20.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-199" for this suite.
+Jun 24 16:05:42.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:05:42.371: INFO: namespace secrets-199 deletion completed in 22.098571597s
+
+• [SLOW TEST:28.282 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  optional updates should be reflected in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SS
+------------------------------
+[sig-storage] EmptyDir wrapper volumes 
+  should not conflict [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:05:42.371: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename emptydir-wrapper
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should not conflict [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Cleaning up the secret
+STEP: Cleaning up the configmap
+STEP: Cleaning up the pod
+[AfterEach] [sig-storage] EmptyDir wrapper volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:05:46.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-wrapper-9756" for this suite.
+Jun 24 16:05:52.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:05:52.581: INFO: namespace emptydir-wrapper-9756 deletion completed in 6.101664816s
+
+• [SLOW TEST:10.211 seconds]
+[sig-storage] EmptyDir wrapper volumes
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  should not conflict [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSS
+------------------------------
+[sig-api-machinery] Watchers 
+  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:05:52.582: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename watch
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: creating a watch on configmaps
+STEP: creating a new configmap
+STEP: modifying the configmap once
+STEP: closing the watch once it receives two notifications
+Jun 24 16:05:52.638: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5654,SelfLink:/api/v1/namespaces/watch-5654/configmaps/e2e-watch-test-watch-closed,UID:f0d6e395-9699-11e9-b70d-fa163ef83c94,ResourceVersion:7492,Generation:0,CreationTimestamp:2019-06-24 16:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
+Jun 24 16:05:52.639: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5654,SelfLink:/api/v1/namespaces/watch-5654/configmaps/e2e-watch-test-watch-closed,UID:f0d6e395-9699-11e9-b70d-fa163ef83c94,ResourceVersion:7493,Generation:0,CreationTimestamp:2019-06-24 16:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
+STEP: modifying the configmap a second time, while the watch is closed
+STEP: creating a new watch on configmaps from the last resource version observed by the first watch
+STEP: deleting the configmap
+STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
+Jun 24 16:05:52.654: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5654,SelfLink:/api/v1/namespaces/watch-5654/configmaps/e2e-watch-test-watch-closed,UID:f0d6e395-9699-11e9-b70d-fa163ef83c94,ResourceVersion:7494,Generation:0,CreationTimestamp:2019-06-24 16:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+Jun 24 16:05:52.655: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5654,SelfLink:/api/v1/namespaces/watch-5654/configmaps/e2e-watch-test-watch-closed,UID:f0d6e395-9699-11e9-b70d-fa163ef83c94,ResourceVersion:7495,Generation:0,CreationTimestamp:2019-06-24 16:05:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
+[AfterEach] [sig-api-machinery] Watchers
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:05:52.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "watch-5654" for this suite.
+Jun 24 16:05:58.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:05:58.757: INFO: namespace watch-5654 deletion completed in 6.099840946s
+
+• [SLOW TEST:6.176 seconds]
+[sig-api-machinery] Watchers
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:05:58.758: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating secret with name secret-test-f483a055-9699-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume secrets
+Jun 24 16:05:58.805: INFO: Waiting up to 5m0s for pod "pod-secrets-f48435d2-9699-11e9-8bcb-526dc0a539dd" in namespace "secrets-7085" to be "success or failure"
+Jun 24 16:05:58.807: INFO: Pod "pod-secrets-f48435d2-9699-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.440539ms
+Jun 24 16:06:00.812: INFO: Pod "pod-secrets-f48435d2-9699-11e9-8bcb-526dc0a539dd": Phase="Running", Reason="", readiness=true. Elapsed: 2.006695084s
+Jun 24 16:06:02.816: INFO: Pod "pod-secrets-f48435d2-9699-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010681s
+STEP: Saw pod success
+Jun 24 16:06:02.816: INFO: Pod "pod-secrets-f48435d2-9699-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 16:06:02.819: INFO: Trying to get logs from node minion pod pod-secrets-f48435d2-9699-11e9-8bcb-526dc0a539dd container secret-volume-test: 
+STEP: delete the pod
+Jun 24 16:06:02.862: INFO: Waiting for pod pod-secrets-f48435d2-9699-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 16:06:02.864: INFO: Pod pod-secrets-f48435d2-9699-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:06:02.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-7085" for this suite.
+Jun 24 16:06:08.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:06:08.959: INFO: namespace secrets-7085 deletion completed in 6.088443825s
+
+• [SLOW TEST:10.201 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:06:08.959: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test emptydir 0644 on node default medium
+Jun 24 16:06:09.007: INFO: Waiting up to 5m0s for pod "pod-fa973d3f-9699-11e9-8bcb-526dc0a539dd" in namespace "emptydir-3666" to be "success or failure"
+Jun 24 16:06:09.011: INFO: Pod "pod-fa973d3f-9699-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.651673ms
+Jun 24 16:06:11.016: INFO: Pod "pod-fa973d3f-9699-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008718382s
+STEP: Saw pod success
+Jun 24 16:06:11.016: INFO: Pod "pod-fa973d3f-9699-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 16:06:11.019: INFO: Trying to get logs from node minion pod pod-fa973d3f-9699-11e9-8bcb-526dc0a539dd container test-container: 
+STEP: delete the pod
+Jun 24 16:06:11.041: INFO: Waiting for pod pod-fa973d3f-9699-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 16:06:11.044: INFO: Pod pod-fa973d3f-9699-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:06:11.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-3666" for this suite.
+Jun 24 16:06:17.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:06:17.154: INFO: namespace emptydir-3666 deletion completed in 6.106549778s
+
+• [SLOW TEST:8.195 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:06:17.160: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test emptydir 0666 on node default medium
+Jun 24 16:06:17.209: INFO: Waiting up to 5m0s for pod "pod-ff7c273a-9699-11e9-8bcb-526dc0a539dd" in namespace "emptydir-4212" to be "success or failure"
+Jun 24 16:06:17.218: INFO: Pod "pod-ff7c273a-9699-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.19459ms
+Jun 24 16:06:19.222: INFO: Pod "pod-ff7c273a-9699-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013042374s
+STEP: Saw pod success
+Jun 24 16:06:19.222: INFO: Pod "pod-ff7c273a-9699-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 16:06:19.226: INFO: Trying to get logs from node minion pod pod-ff7c273a-9699-11e9-8bcb-526dc0a539dd container test-container: 
+STEP: delete the pod
+Jun 24 16:06:19.271: INFO: Waiting for pod pod-ff7c273a-9699-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 16:06:19.276: INFO: Pod pod-ff7c273a-9699-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:06:19.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-4212" for this suite.
+Jun 24 16:06:25.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:06:25.401: INFO: namespace emptydir-4212 deletion completed in 6.120353227s
+
+• [SLOW TEST:8.241 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:06:25.402: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test downward API volume plugin
+Jun 24 16:06:25.439: INFO: Waiting up to 5m0s for pod "downwardapi-volume-046484b7-969a-11e9-8bcb-526dc0a539dd" in namespace "downward-api-6677" to be "success or failure"
+Jun 24 16:06:25.443: INFO: Pod "downwardapi-volume-046484b7-969a-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.443238ms
+Jun 24 16:06:27.447: INFO: Pod "downwardapi-volume-046484b7-969a-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007697854s
+STEP: Saw pod success
+Jun 24 16:06:27.447: INFO: Pod "downwardapi-volume-046484b7-969a-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 16:06:27.453: INFO: Trying to get logs from node minion pod downwardapi-volume-046484b7-969a-11e9-8bcb-526dc0a539dd container client-container: 
+STEP: delete the pod
+Jun 24 16:06:27.489: INFO: Waiting for pod downwardapi-volume-046484b7-969a-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 16:06:27.493: INFO: Pod downwardapi-volume-046484b7-969a-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:06:27.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-6677" for this suite.
+Jun 24 16:06:33.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:06:33.589: INFO: namespace downward-api-6677 deletion completed in 6.089997073s
+
+• [SLOW TEST:8.187 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should orphan pods created by rc if delete options say so [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:06:33.589: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should orphan pods created by rc if delete options say so [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: create the rc
+STEP: delete the rc
+STEP: wait for the rc to be deleted
+STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
+STEP: Gathering metrics
+Jun 24 16:07:13.693: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+ [quantile=0.5] = 8
+ [quantile=0.9] = 40
+ [quantile=0.99] = 63
+For garbage_collector_attempt_to_delete_work_duration:
+ [quantile=0.5] = 20993
+ [quantile=0.9] = 216179
+ [quantile=0.99] = 231418
+For garbage_collector_attempt_to_orphan_queue_latency:
+ [quantile=0.5] = 18
+ [quantile=0.9] = 18
+ [quantile=0.99] = 18
+For garbage_collector_attempt_to_orphan_work_duration:
+ [quantile=0.5] = 263792
+ [quantile=0.9] = 263792
+ [quantile=0.99] = 263792
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+ [quantile=0.5] = 5
+ [quantile=0.9] = 8
+ [quantile=0.99] = 44
+For garbage_collector_graph_changes_work_duration:
+ [quantile=0.5] = 15
+ [quantile=0.9] = 31
+ [quantile=0.99] = 63
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+ [quantile=0.5] = 16
+ [quantile=0.9] = 38
+ [quantile=0.99] = 45
+For namespace_queue_latency_sum:
+ [] = 3560
+For namespace_queue_latency_count:
+ [] = 182
+For namespace_retries:
+ [] = 184
+For namespace_work_duration:
+ [quantile=0.5] = 168353
+ [quantile=0.9] = 248812
+ [quantile=0.99] = 437209
+For namespace_work_duration_sum:
+ [] = 27578094
+For namespace_work_duration_count:
+ [] = 182
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:07:13.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-6787" for this suite.
+Jun 24 16:07:19.708: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:07:19.793: INFO: namespace gc-6787 deletion completed in 6.095722964s
+
+• [SLOW TEST:46.203 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should orphan pods created by rc if delete options say so [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected secret 
+  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected secret
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:07:19.793: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating projection with secret that has name projected-secret-test-map-24d0ed29-969a-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume secrets
+Jun 24 16:07:19.842: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-24d182b9-969a-11e9-8bcb-526dc0a539dd" in namespace "projected-7326" to be "success or failure"
+Jun 24 16:07:19.848: INFO: Pod "pod-projected-secrets-24d182b9-969a-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.546134ms
+Jun 24 16:07:21.852: INFO: Pod "pod-projected-secrets-24d182b9-969a-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010631027s
+Jun 24 16:07:23.856: INFO: Pod "pod-projected-secrets-24d182b9-969a-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014642169s
+Jun 24 16:07:25.860: INFO: Pod "pod-projected-secrets-24d182b9-969a-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018688158s
+STEP: Saw pod success
+Jun 24 16:07:25.861: INFO: Pod "pod-projected-secrets-24d182b9-969a-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 16:07:25.864: INFO: Trying to get logs from node minion pod pod-projected-secrets-24d182b9-969a-11e9-8bcb-526dc0a539dd container projected-secret-volume-test: 
+STEP: delete the pod
+Jun 24 16:07:25.888: INFO: Waiting for pod pod-projected-secrets-24d182b9-969a-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 16:07:25.892: INFO: Pod pod-projected-secrets-24d182b9-969a-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Projected secret
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:07:25.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-7326" for this suite.
+Jun 24 16:07:31.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:07:31.997: INFO: namespace projected-7326 deletion completed in 6.102473015s
+
+• [SLOW TEST:12.204 seconds]
+[sig-storage] Projected secret
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
+  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] Downward API volume 
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:07:32.001: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename downward-api
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
+[It] should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test downward API volume plugin
+Jun 24 16:07:32.044: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c1730f3-969a-11e9-8bcb-526dc0a539dd" in namespace "downward-api-603" to be "success or failure"
+Jun 24 16:07:32.050: INFO: Pod "downwardapi-volume-2c1730f3-969a-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.560829ms
+Jun 24 16:07:34.054: INFO: Pod "downwardapi-volume-2c1730f3-969a-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010018667s
+Jun 24 16:07:36.059: INFO: Pod "downwardapi-volume-2c1730f3-969a-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0145361s
+STEP: Saw pod success
+Jun 24 16:07:36.059: INFO: Pod "downwardapi-volume-2c1730f3-969a-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 16:07:36.063: INFO: Trying to get logs from node minion pod downwardapi-volume-2c1730f3-969a-11e9-8bcb-526dc0a539dd container client-container: 
+STEP: delete the pod
+Jun 24 16:07:36.105: INFO: Waiting for pod downwardapi-volume-2c1730f3-969a-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 16:07:36.110: INFO: Pod downwardapi-volume-2c1730f3-969a-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Downward API volume
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:07:36.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "downward-api-603" for this suite.
+Jun 24 16:07:42.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 16:07:42.227: INFO: namespace downward-api-603 deletion completed in 6.110204243s
+
+• [SLOW TEST:10.225 seconds]
+[sig-storage] Downward API volume
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
+  should provide container's memory limit [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  should perform canary updates and phased rolling updates of template modifications [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 16:07:42.227: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
+STEP: Creating service test in namespace statefulset-1428
+[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a new StatefulSet
+Jun 24 16:07:42.280: INFO: Found 0 stateful pods, waiting for 3
+Jun 24 16:07:52.286: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Jun 24 16:07:52.286: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Jun 24 16:07:52.286: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
+Jun 24 16:07:52.322: INFO: Updating stateful set ss2
+STEP: Creating a new revision
+STEP: Not applying an update when the partition is greater than the number of replicas
+STEP: Performing a canary update
+Jun 24 16:08:02.370: INFO: Updating stateful set ss2
+Jun 24 16:08:02.383: INFO: Waiting for Pod statefulset-1428/ss2-2 to have revision ss2-c79899b9 update revision ss2-787997d666
+STEP: Restoring Pods to the correct revision when they are deleted
+Jun 24 16:08:12.466: INFO: Found 2 stateful pods, waiting for 3
+Jun 24 16:08:22.479: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
+Jun 24 16:08:22.479: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
+Jun 24 16:08:22.479: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Performing a phased rolling update
+Jun 24 16:08:22.505: INFO: Updating stateful set ss2
+Jun 24 16:08:22.518: INFO: Waiting for Pod statefulset-1428/ss2-1 to have revision ss2-c79899b9 update revision ss2-787997d666
+Jun 24 16:08:32.548: INFO: Updating stateful set ss2
+Jun 24 16:08:32.559: INFO: Waiting for StatefulSet statefulset-1428/ss2 to complete update
+Jun 24 16:08:32.559: INFO: Waiting for Pod statefulset-1428/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
+Jun 24 16:08:42.566: INFO: Deleting all statefulset in ns statefulset-1428
+Jun 24 16:08:42.570: INFO: Scaling statefulset ss2 to 0
+Jun 24 16:08:52.593: INFO: Waiting for statefulset status.replicas updated to 0
+Jun 24 16:08:52.597: INFO: Deleting statefulset ss2
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 16:08:52.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-1428" for this suite.
+Jun 24 16:08:58.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:08:58.737: INFO: namespace statefulset-1428 deletion completed in 6.120604308s + +• [SLOW TEST:76.510 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should perform canary updates and phased rolling updates of template modifications [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[k8s.io] Probing container + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:08:58.737: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:09:58.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7351" for this suite. 
+Jun 24 16:10:20.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:10:20.896: INFO: namespace container-probe-7351 deletion completed in 22.112328916s + +• [SLOW TEST:82.159 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:10:20.896: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 16:10:20.926: INFO: Creating deployment "test-recreate-deployment" +Jun 24 16:10:20.929: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 +Jun 24 16:10:20.939: INFO: deployment "test-recreate-deployment" doesn't have the required revision set +Jun 24 16:10:22.947: INFO: Waiting deployment "test-recreate-deployment" to complete +Jun 24 16:10:22.951: INFO: Triggering a new rollout for deployment "test-recreate-deployment" +Jun 24 16:10:22.960: INFO: Updating deployment test-recreate-deployment +Jun 24 16:10:22.960: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 24 16:10:23.054: INFO: Deployment "test-recreate-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7341,SelfLink:/apis/apps/v1/namespaces/deployment-7341/deployments/test-recreate-deployment,UID:90c1f484-969a-11e9-b70d-fa163ef83c94,ResourceVersion:8580,Generation:2,CreationTimestamp:2019-06-24 16:10:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2019-06-24 16:10:23 +0000 UTC 2019-06-24 16:10:23 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2019-06-24 16:10:23 +0000 UTC 2019-06-24 16:10:20 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-c9cbd8684" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} + +Jun 24 16:10:23.059: INFO: New ReplicaSet "test-recreate-deployment-c9cbd8684" of Deployment "test-recreate-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-c9cbd8684,GenerateName:,Namespace:deployment-7341,SelfLink:/apis/apps/v1/namespaces/deployment-7341/replicasets/test-recreate-deployment-c9cbd8684,UID:91ff50a5-969a-11e9-b70d-fa163ef83c94,ResourceVersion:8577,Generation:1,CreationTimestamp:2019-06-24 16:10:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: c9cbd8684,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 90c1f484-969a-11e9-b70d-fa163ef83c94 0xc002d40650 0xc002d40651}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c9cbd8684,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
c9cbd8684,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 24 16:10:23.059: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": +Jun 24 16:10:23.059: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-7d57d5ff7c,GenerateName:,Namespace:deployment-7341,SelfLink:/apis/apps/v1/namespaces/deployment-7341/replicasets/test-recreate-deployment-7d57d5ff7c,UID:90c27708-969a-11e9-b70d-fa163ef83c94,ResourceVersion:8567,Generation:2,CreationTimestamp:2019-06-24 16:10:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 7d57d5ff7c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 90c1f484-969a-11e9-b70d-fa163ef83c94 0xc002d40597 0xc002d40598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 7d57d5ff7c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 7d57d5ff7c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 24 16:10:23.066: INFO: Pod "test-recreate-deployment-c9cbd8684-cf2wm" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-c9cbd8684-cf2wm,GenerateName:test-recreate-deployment-c9cbd8684-,Namespace:deployment-7341,SelfLink:/api/v1/namespaces/deployment-7341/pods/test-recreate-deployment-c9cbd8684-cf2wm,UID:91ffc6c4-969a-11e9-b70d-fa163ef83c94,ResourceVersion:8579,Generation:0,CreationTimestamp:2019-06-24 16:10:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: c9cbd8684,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-c9cbd8684 91ff50a5-969a-11e9-b70d-fa163ef83c94 0xc002d414e0 0xc002d414e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-rk6qt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-rk6qt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-rk6qt true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d41590} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d415b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:10:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:10:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:10:23 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:10:23 +0000 UTC 
}],Message:,Reason:,HostIP:10.1.0.12,PodIP:,StartTime:2019-06-24 16:10:23 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:10:23.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-7341" for this suite. +Jun 24 16:10:29.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:10:29.188: INFO: namespace deployment-7341 deletion completed in 6.11685244s + +• [SLOW TEST:8.292 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + RecreateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy through a service and a pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] version v1 + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:10:29.188: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy through a service and a pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: starting an echo server on multiple ports +STEP: creating replication controller proxy-service-blngs in namespace proxy-5572 +I0624 16:10:29.240985 20 runners.go:184] Created replication controller with name: proxy-service-blngs, namespace: proxy-5572, replica count: 1 +I0624 16:10:30.291555 20 runners.go:184] proxy-service-blngs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0624 16:10:31.291939 20 runners.go:184] proxy-service-blngs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0624 16:10:32.292346 20 runners.go:184] proxy-service-blngs Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0624 16:10:33.292798 20 runners.go:184] proxy-service-blngs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0624 16:10:34.293181 20 runners.go:184] proxy-service-blngs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0624 16:10:35.293541 20 runners.go:184] proxy-service-blngs Pods: 1 out of 1 created, 0 running, 0 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0624 16:10:36.293867 20 runners.go:184] proxy-service-blngs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0624 16:10:37.294152 20 runners.go:184] proxy-service-blngs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0624 16:10:38.294500 20 runners.go:184] proxy-service-blngs Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady +I0624 16:10:39.294825 20 runners.go:184] proxy-service-blngs Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jun 24 16:10:39.298: INFO: setup took 10.080438632s, starting test cases +STEP: running 16 cases, 20 attempts per case, 320 total attempts +Jun 24 16:10:39.319: INFO: (0) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:1080/proxy/:
... (200; 20.537774ms) +Jun 24 16:10:39.320: INFO: (0) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 21.517444ms) +Jun 24 16:10:39.320: INFO: (0) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 21.607939ms) +Jun 24 16:10:39.320: INFO: (0) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 21.333591ms) +Jun 24 16:10:39.320: INFO: (0) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 21.70163ms) +Jun 24 16:10:39.320: INFO: (0) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:162/proxy/: bar (200; 21.282773ms) +Jun 24 16:10:39.320: INFO: (0) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... (200; 21.480069ms) +Jun 24 16:10:39.320: INFO: (0) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 21.52677ms) +Jun 24 16:10:39.320: INFO: (0) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 21.905001ms) +Jun 24 16:10:39.320: INFO: (0) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 22.265085ms) +Jun 24 16:10:39.322: INFO: (0) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 23.919892ms) +Jun 24 16:10:39.336: INFO: (0) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 38.494572ms) +Jun 24 16:10:39.348: INFO: (0) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 50.078696ms) +Jun 24 16:10:39.349: INFO: (0) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:460/proxy/: tls baz (200; 50.479517ms) +Jun 24 16:10:39.351: INFO: (0) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: test<... (200; 11.265597ms) +Jun 24 16:10:39.387: INFO: (1) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 11.508148ms) +Jun 24 16:10:39.387: INFO: (1) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 11.433362ms) +Jun 24 16:10:39.387: INFO: (1) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 11.207725ms) +Jun 24 16:10:39.387: INFO: (1) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 11.202508ms) +Jun 24 16:10:39.387: INFO: (1) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 11.228609ms) +Jun 24 16:10:39.387: INFO: (1) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:460/proxy/: tls baz (200; 11.335879ms) +Jun 24 16:10:39.387: INFO: (1) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 11.234775ms) +Jun 24 16:10:39.387: INFO: (1) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:1080/proxy/: ... 
(200; 11.354767ms) +Jun 24 16:10:39.389: INFO: (1) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 13.668264ms) +Jun 24 16:10:39.389: INFO: (1) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 13.560476ms) +Jun 24 16:10:39.389: INFO: (1) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:162/proxy/: bar (200; 13.552088ms) +Jun 24 16:10:39.389: INFO: (1) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 13.710415ms) +Jun 24 16:10:39.396: INFO: (2) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 6.044522ms) +Jun 24 16:10:39.396: INFO: (2) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 6.224434ms) +Jun 24 16:10:39.396: INFO: (2) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 6.625488ms) +Jun 24 16:10:39.396: INFO: (2) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 6.100337ms) +Jun 24 16:10:39.397: INFO: (2) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... (200; 6.700671ms) +Jun 24 16:10:39.397: INFO: (2) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: ... (200; 12.532943ms) +Jun 24 16:10:39.404: INFO: (2) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 14.061939ms) +Jun 24 16:10:39.405: INFO: (2) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 14.832643ms) +Jun 24 16:10:39.412: INFO: (3) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... (200; 6.093189ms) +Jun 24 16:10:39.412: INFO: (3) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 6.37184ms) +Jun 24 16:10:39.412: INFO: (3) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: ... (200; 9.167536ms) +Jun 24 16:10:39.415: INFO: (3) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 9.330442ms) +Jun 24 16:10:39.415: INFO: (3) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 9.474045ms) +Jun 24 16:10:39.415: INFO: (3) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 9.50202ms) +Jun 24 16:10:39.415: INFO: (3) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 9.434418ms) +Jun 24 16:10:39.415: INFO: (3) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 9.339043ms) +Jun 24 16:10:39.415: INFO: (3) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 9.650717ms) +Jun 24 16:10:39.416: INFO: (3) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 9.882236ms) +Jun 24 16:10:39.418: INFO: (3) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 11.380048ms) +Jun 24 16:10:39.425: INFO: (4) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 6.281557ms) +Jun 24 16:10:39.426: INFO: (4) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 6.040229ms) +Jun 24 16:10:39.426: INFO: (4) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... 
(200; 7.208757ms) +Jun 24 16:10:39.426: INFO: (4) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 7.364948ms) +Jun 24 16:10:39.429: INFO: (4) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 9.108642ms) +Jun 24 16:10:39.429: INFO: (4) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:1080/proxy/: ... (200; 9.259966ms) +Jun 24 16:10:39.429: INFO: (4) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 10.618863ms) +Jun 24 16:10:39.430: INFO: (4) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 9.893447ms) +Jun 24 16:10:39.433: INFO: (4) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 13.466819ms) +Jun 24 16:10:39.433: INFO: (4) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:162/proxy/: bar (200; 13.47068ms) +Jun 24 16:10:39.433: INFO: (4) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 13.480202ms) +Jun 24 16:10:39.434: INFO: (4) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 13.885944ms) +Jun 24 16:10:39.434: INFO: (4) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:460/proxy/: tls baz (200; 13.937208ms) +Jun 24 16:10:39.435: INFO: (4) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: ... (200; 5.777582ms) +Jun 24 16:10:39.443: INFO: (5) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 7.086868ms) +Jun 24 16:10:39.443: INFO: (5) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 7.508936ms) +Jun 24 16:10:39.444: INFO: (5) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 8.718451ms) +Jun 24 16:10:39.446: INFO: (5) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 10.509525ms) +Jun 24 16:10:39.446: INFO: (5) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 9.918749ms) +Jun 24 16:10:39.446: INFO: (5) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 10.13868ms) +Jun 24 16:10:39.446: INFO: (5) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:460/proxy/: tls baz (200; 9.997108ms) +Jun 24 16:10:39.446: INFO: (5) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:162/proxy/: bar (200; 10.042517ms) +Jun 24 16:10:39.446: INFO: (5) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: test<... 
(200; 10.484626ms) +Jun 24 16:10:39.446: INFO: (5) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 10.123586ms) +Jun 24 16:10:39.448: INFO: (5) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 12.352286ms) +Jun 24 16:10:39.448: INFO: (5) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 12.356638ms) +Jun 24 16:10:39.448: INFO: (5) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 12.050724ms) +Jun 24 16:10:39.448: INFO: (5) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 12.114133ms) +Jun 24 16:10:39.457: INFO: (6) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 8.681295ms) +Jun 24 16:10:39.457: INFO: (6) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... (200; 8.991697ms) +Jun 24 16:10:39.457: INFO: (6) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 9.143971ms) +Jun 24 16:10:39.457: INFO: (6) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:162/proxy/: bar (200; 9.103836ms) +Jun 24 16:10:39.463: INFO: (6) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:460/proxy/: tls baz (200; 14.320599ms) +Jun 24 16:10:39.463: INFO: (6) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 14.518903ms) +Jun 24 16:10:39.463: INFO: (6) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: ... (200; 14.601516ms) +Jun 24 16:10:39.463: INFO: (6) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 14.534883ms) +Jun 24 16:10:39.466: INFO: (6) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 17.138995ms) +Jun 24 16:10:39.466: INFO: (6) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 17.365252ms) +Jun 24 16:10:39.466: INFO: (6) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 16.953148ms) +Jun 24 16:10:39.473: INFO: (7) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... 
(200; 7.027539ms) +Jun 24 16:10:39.477: INFO: (7) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 11.533198ms) +Jun 24 16:10:39.478: INFO: (7) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 11.226335ms) +Jun 24 16:10:39.478: INFO: (7) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 11.503209ms) +Jun 24 16:10:39.478: INFO: (7) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 11.450627ms) +Jun 24 16:10:39.478: INFO: (7) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 11.414852ms) +Jun 24 16:10:39.478: INFO: (7) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 11.125342ms) +Jun 24 16:10:39.478: INFO: (7) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: test (200; 11.584418ms) +Jun 24 16:10:39.478: INFO: (7) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 11.740415ms) +Jun 24 16:10:39.479: INFO: (7) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 12.644678ms) +Jun 24 16:10:39.480: INFO: (7) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:1080/proxy/: ... (200; 14.339913ms) +Jun 24 16:10:39.488: INFO: (8) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... (200; 7.621395ms) +Jun 24 16:10:39.488: INFO: (8) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 7.204119ms) +Jun 24 16:10:39.488: INFO: (8) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 7.859658ms) +Jun 24 16:10:39.488: INFO: (8) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 7.507943ms) +Jun 24 16:10:39.488: INFO: (8) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:1080/proxy/: ... (200; 7.370767ms) +Jun 24 16:10:39.488: INFO: (8) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 7.359513ms) +Jun 24 16:10:39.488: INFO: (8) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 7.873753ms) +Jun 24 16:10:39.489: INFO: (8) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: ... (200; 11.405084ms) +Jun 24 16:10:39.506: INFO: (9) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 12.246719ms) +Jun 24 16:10:39.506: INFO: (9) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 12.242563ms) +Jun 24 16:10:39.506: INFO: (9) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 12.23512ms) +Jun 24 16:10:39.506: INFO: (9) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 12.297745ms) +Jun 24 16:10:39.506: INFO: (9) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 12.278512ms) +Jun 24 16:10:39.506: INFO: (9) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 12.400508ms) +Jun 24 16:10:39.506: INFO: (9) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... 
(200; 12.557859ms) +Jun 24 16:10:39.506: INFO: (9) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 12.803833ms) +Jun 24 16:10:39.507: INFO: (9) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 14.040837ms) +Jun 24 16:10:39.513: INFO: (10) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 4.867477ms) +Jun 24 16:10:39.513: INFO: (10) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 5.126211ms) +Jun 24 16:10:39.514: INFO: (10) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 6.434685ms) +Jun 24 16:10:39.514: INFO: (10) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:460/proxy/: tls baz (200; 6.239018ms) +Jun 24 16:10:39.515: INFO: (10) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 7.182091ms) +Jun 24 16:10:39.515: INFO: (10) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:1080/proxy/: ... (200; 6.094337ms) +Jun 24 16:10:39.515: INFO: (10) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:162/proxy/: bar (200; 6.550151ms) +Jun 24 16:10:39.515: INFO: (10) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... (200; 6.477646ms) +Jun 24 16:10:39.515: INFO: (10) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 6.228284ms) +Jun 24 16:10:39.518: INFO: (10) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 9.811498ms) +Jun 24 16:10:39.518: INFO: (10) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 9.762486ms) +Jun 24 16:10:39.519: INFO: (10) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 10.552907ms) +Jun 24 16:10:39.520: INFO: (10) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 10.982607ms) +Jun 24 16:10:39.520: INFO: (10) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 11.477247ms) +Jun 24 16:10:39.520: INFO: (10) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 11.431067ms) +Jun 24 16:10:39.520: INFO: (10) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: ... (200; 3.792294ms) +Jun 24 16:10:39.524: INFO: (11) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... 
(200; 4.189531ms) +Jun 24 16:10:39.525: INFO: (11) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 4.93209ms) +Jun 24 16:10:39.526: INFO: (11) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 5.853249ms) +Jun 24 16:10:39.527: INFO: (11) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 5.954097ms) +Jun 24 16:10:39.527: INFO: (11) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 6.627002ms) +Jun 24 16:10:39.532: INFO: (11) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 11.239394ms) +Jun 24 16:10:39.532: INFO: (11) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 11.300257ms) +Jun 24 16:10:39.532: INFO: (11) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 11.427904ms) +Jun 24 16:10:39.532: INFO: (11) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 12.333572ms) +Jun 24 16:10:39.532: INFO: (11) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:460/proxy/: tls baz (200; 11.816546ms) +Jun 24 16:10:39.533: INFO: (11) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 11.587168ms) +Jun 24 16:10:39.533: INFO: (11) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 11.206511ms) +Jun 24 16:10:39.533: INFO: (11) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: test<... (200; 8.902046ms) +Jun 24 16:10:39.544: INFO: (12) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 8.996164ms) +Jun 24 16:10:39.544: INFO: (12) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 8.940719ms) +Jun 24 16:10:39.544: INFO: (12) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:1080/proxy/: ... (200; 9.061389ms) +Jun 24 16:10:39.544: INFO: (12) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 9.281367ms) +Jun 24 16:10:39.545: INFO: (12) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 9.629199ms) +Jun 24 16:10:39.545: INFO: (12) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 9.928123ms) +Jun 24 16:10:39.546: INFO: (12) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 10.998044ms) +Jun 24 16:10:39.546: INFO: (12) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 10.941883ms) +Jun 24 16:10:39.546: INFO: (12) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 11.036172ms) +Jun 24 16:10:39.546: INFO: (12) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 10.962963ms) +Jun 24 16:10:39.556: INFO: (13) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 8.659538ms) +Jun 24 16:10:39.556: INFO: (13) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 8.633883ms) +Jun 24 16:10:39.556: INFO: (13) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:460/proxy/: tls baz (200; 8.735888ms) +Jun 24 16:10:39.556: INFO: (13) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... 
(200; 8.813278ms) +Jun 24 16:10:39.556: INFO: (13) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 9.04371ms) +Jun 24 16:10:39.560: INFO: (13) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:162/proxy/: bar (200; 12.862918ms) +Jun 24 16:10:39.560: INFO: (13) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 13.033784ms) +Jun 24 16:10:39.560: INFO: (13) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 13.074701ms) +Jun 24 16:10:39.560: INFO: (13) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: ... (200; 13.356679ms) +Jun 24 16:10:39.560: INFO: (13) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 13.407875ms) +Jun 24 16:10:39.560: INFO: (13) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 13.510631ms) +Jun 24 16:10:39.560: INFO: (13) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 13.426327ms) +Jun 24 16:10:39.560: INFO: (13) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 13.666212ms) +Jun 24 16:10:39.562: INFO: (13) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 15.004013ms) +Jun 24 16:10:39.562: INFO: (13) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 14.999226ms) +Jun 24 16:10:39.571: INFO: (14) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:1080/proxy/: ... (200; 9.028042ms) +Jun 24 16:10:39.573: INFO: (14) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 9.536849ms) +Jun 24 16:10:39.573: INFO: (14) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 9.800917ms) +Jun 24 16:10:39.573: INFO: (14) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 10.267933ms) +Jun 24 16:10:39.573: INFO: (14) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:162/proxy/: bar (200; 9.911218ms) +Jun 24 16:10:39.573: INFO: (14) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 9.573741ms) +Jun 24 16:10:39.574: INFO: (14) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 11.363566ms) +Jun 24 16:10:39.574: INFO: (14) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: test (200; 12.575911ms) +Jun 24 16:10:39.576: INFO: (14) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 13.131397ms) +Jun 24 16:10:39.576: INFO: (14) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... (200; 12.566391ms) +Jun 24 16:10:39.576: INFO: (14) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 12.610102ms) +Jun 24 16:10:39.583: INFO: (15) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 6.749967ms) +Jun 24 16:10:39.587: INFO: (15) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 9.270831ms) +Jun 24 16:10:39.587: INFO: (15) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:1080/proxy/: ... 
(200; 9.542754ms) +Jun 24 16:10:39.587: INFO: (15) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 9.851606ms) +Jun 24 16:10:39.587: INFO: (15) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 10.25748ms) +Jun 24 16:10:39.587: INFO: (15) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:460/proxy/: tls baz (200; 10.150779ms) +Jun 24 16:10:39.587: INFO: (15) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 10.65628ms) +Jun 24 16:10:39.587: INFO: (15) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 10.47395ms) +Jun 24 16:10:39.587: INFO: (15) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 7.801694ms) +Jun 24 16:10:39.587: INFO: (15) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... (200; 9.991788ms) +Jun 24 16:10:39.587: INFO: (15) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 9.788528ms) +Jun 24 16:10:39.587: INFO: (15) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: test<... (200; 9.240953ms) +Jun 24 16:10:39.599: INFO: (16) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:460/proxy/: tls baz (200; 7.042265ms) +Jun 24 16:10:39.599: INFO: (16) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 8.700447ms) +Jun 24 16:10:39.599: INFO: (16) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:162/proxy/: bar (200; 9.584497ms) +Jun 24 16:10:39.599: INFO: (16) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 8.852338ms) +Jun 24 16:10:39.599: INFO: (16) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: ... (200; 9.159889ms) +Jun 24 16:10:39.599: INFO: (16) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 8.997431ms) +Jun 24 16:10:39.600: INFO: (16) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 5.854661ms) +Jun 24 16:10:39.600: INFO: (16) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 5.946663ms) +Jun 24 16:10:39.601: INFO: (16) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 11.826419ms) +Jun 24 16:10:39.601: INFO: (16) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 11.494671ms) +Jun 24 16:10:39.601: INFO: (16) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 8.733944ms) +Jun 24 16:10:39.601: INFO: (16) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 9.036281ms) +Jun 24 16:10:39.611: INFO: (17) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:462/proxy/: tls qux (200; 9.193346ms) +Jun 24 16:10:39.611: INFO: (17) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:1080/proxy/: ... (200; 9.249301ms) +Jun 24 16:10:39.611: INFO: (17) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... 
(200; 9.199967ms) +Jun 24 16:10:39.611: INFO: (17) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 9.276044ms) +Jun 24 16:10:39.612: INFO: (17) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:460/proxy/: tls baz (200; 10.127927ms) +Jun 24 16:10:39.614: INFO: (17) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 12.48275ms) +Jun 24 16:10:39.614: INFO: (17) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 12.635803ms) +Jun 24 16:10:39.614: INFO: (17) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: test<... (200; 8.32243ms) +Jun 24 16:10:39.626: INFO: (18) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:1080/proxy/: ... (200; 8.280213ms) +Jun 24 16:10:39.626: INFO: (18) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 8.402074ms) +Jun 24 16:10:39.626: INFO: (18) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d/proxy/: test (200; 8.193804ms) +Jun 24 16:10:39.627: INFO: (18) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname2/proxy/: tls qux (200; 8.790222ms) +Jun 24 16:10:39.628: INFO: (18) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 9.549403ms) +Jun 24 16:10:39.629: INFO: (18) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname1/proxy/: foo (200; 10.347218ms) +Jun 24 16:10:39.629: INFO: (18) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 10.147388ms) +Jun 24 16:10:39.629: INFO: (18) /api/v1/namespaces/proxy-5572/services/https:proxy-service-blngs:tlsportname1/proxy/: tls baz (200; 10.585599ms) +Jun 24 16:10:39.629: INFO: (18) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 10.585392ms) +Jun 24 16:10:39.635: INFO: (19) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:460/proxy/: tls baz (200; 5.56397ms) +Jun 24 16:10:39.635: INFO: (19) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:160/proxy/: foo (200; 5.632207ms) +Jun 24 16:10:39.636: INFO: (19) /api/v1/namespaces/proxy-5572/pods/proxy-service-blngs-zqd8d:1080/proxy/: test<... (200; 7.191049ms) +Jun 24 16:10:39.639: INFO: (19) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:162/proxy/: bar (200; 9.695512ms) +Jun 24 16:10:39.639: INFO: (19) /api/v1/namespaces/proxy-5572/pods/https:proxy-service-blngs-zqd8d:443/proxy/: test (200; 11.30506ms) +Jun 24 16:10:39.641: INFO: (19) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:1080/proxy/: ... 
(200; 5.713149ms) +Jun 24 16:10:39.641: INFO: (19) /api/v1/namespaces/proxy-5572/services/http:proxy-service-blngs:portname2/proxy/: bar (200; 11.324573ms) +Jun 24 16:10:39.641: INFO: (19) /api/v1/namespaces/proxy-5572/pods/http:proxy-service-blngs-zqd8d:160/proxy/: foo (200; 11.319362ms) +Jun 24 16:10:39.641: INFO: (19) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname2/proxy/: bar (200; 11.720599ms) +Jun 24 16:10:39.641: INFO: (19) /api/v1/namespaces/proxy-5572/services/proxy-service-blngs:portname1/proxy/: foo (200; 11.71986ms) +STEP: deleting ReplicationController proxy-service-blngs in namespace proxy-5572, will wait for the garbage collector to delete the pods +Jun 24 16:10:39.700: INFO: Deleting ReplicationController proxy-service-blngs took: 7.050569ms +Jun 24 16:10:40.001: INFO: Terminating ReplicationController proxy-service-blngs pods took: 300.368086ms +[AfterEach] version v1 + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:10:46.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "proxy-5572" for this suite. +Jun 24 16:10:52.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:10:52.903: INFO: namespace proxy-5572 deletion completed in 6.097163857s + +• [SLOW TEST:23.714 seconds] +[sig-network] Proxy +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + version v1 + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:56 + should proxy through a service and a pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:10:52.903: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 16:10:52.936: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: creating the pod +STEP: submitting the pod to kubernetes +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:10:56.974: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-7516" for this suite. +Jun 24 16:11:38.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:11:39.078: INFO: namespace pods-7516 deletion completed in 42.100417192s + +• [SLOW TEST:46.175 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support retrieving logs from the container over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:11:39.080: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 24 16:11:39.132: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf5dcef3-969a-11e9-8bcb-526dc0a539dd" in namespace "downward-api-7362" to be "success or failure" +Jun 24 16:11:39.136: INFO: Pod "downwardapi-volume-bf5dcef3-969a-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029469ms +Jun 24 16:11:41.140: INFO: Pod "downwardapi-volume-bf5dcef3-969a-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008068359s +Jun 24 16:11:43.144: INFO: Pod "downwardapi-volume-bf5dcef3-969a-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012122296s +STEP: Saw pod success +Jun 24 16:11:43.144: INFO: Pod "downwardapi-volume-bf5dcef3-969a-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:11:43.148: INFO: Trying to get logs from node minion pod downwardapi-volume-bf5dcef3-969a-11e9-8bcb-526dc0a539dd container client-container: +STEP: delete the pod +Jun 24 16:11:43.177: INFO: Waiting for pod downwardapi-volume-bf5dcef3-969a-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:11:43.180: INFO: Pod downwardapi-volume-bf5dcef3-969a-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:11:43.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7362" for this suite. +Jun 24 16:11:49.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:11:49.297: INFO: namespace downward-api-7362 deletion completed in 6.113161975s + +• [SLOW TEST:10.217 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] ConfigMap + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:11:49.298: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-upd-c5758c80-969a-11e9-8bcb-526dc0a539dd +STEP: Creating the pod +STEP: Updating configmap configmap-test-upd-c5758c80-969a-11e9-8bcb-526dc0a539dd +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:13:01.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-8656" for this suite. 
+Jun 24 16:13:23.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:13:23.991: INFO: namespace configmap-8656 deletion completed in 22.10353588s + +• [SLOW TEST:94.694 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:13:23.993: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir volume type on tmpfs +Jun 24 16:13:24.040: INFO: Waiting up to 5m0s for pod "pod-fde5da67-969a-11e9-8bcb-526dc0a539dd" in namespace "emptydir-4778" to be "success or failure" +Jun 24 16:13:24.043: INFO: Pod "pod-fde5da67-969a-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.442763ms +Jun 24 16:13:26.048: INFO: Pod "pod-fde5da67-969a-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007624174s +Jun 24 16:13:28.052: INFO: Pod "pod-fde5da67-969a-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012027751s +STEP: Saw pod success +Jun 24 16:13:28.052: INFO: Pod "pod-fde5da67-969a-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:13:28.056: INFO: Trying to get logs from node minion pod pod-fde5da67-969a-11e9-8bcb-526dc0a539dd container test-container: +STEP: delete the pod +Jun 24 16:13:28.088: INFO: Waiting for pod pod-fde5da67-969a-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:13:28.090: INFO: Pod pod-fde5da67-969a-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:13:28.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-4778" for this suite. 
+Jun 24 16:13:34.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:13:34.192: INFO: namespace emptydir-4778 deletion completed in 6.098957701s + +• [SLOW TEST:10.199 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-storage] Projected secret + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:13:34.192: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name s-test-opt-del-03f917ca-969b-11e9-8bcb-526dc0a539dd +STEP: Creating secret with name s-test-opt-upd-03f91811-969b-11e9-8bcb-526dc0a539dd +STEP: Creating the pod +STEP: Deleting secret s-test-opt-del-03f917ca-969b-11e9-8bcb-526dc0a539dd +STEP: Updating secret s-test-opt-upd-03f91811-969b-11e9-8bcb-526dc0a539dd +STEP: Creating secret with name s-test-opt-create-03f9182e-969b-11e9-8bcb-526dc0a539dd +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:14:48.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-2198" for this suite. 
+Jun 24 16:15:10.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:15:10.957: INFO: namespace projected-2198 deletion completed in 22.118571553s + +• [SLOW TEST:96.765 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:15:10.957: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the deployment +STEP: Wait for the Deployment to create new ReplicaSet +STEP: delete the deployment +STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs +STEP: Gathering metrics +Jun 24 16:15:41.056: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: + [quantile=0.5] = 10 + [quantile=0.9] = 10 + [quantile=0.99] = 10 +For garbage_collector_attempt_to_delete_work_duration: + [quantile=0.5] = 213736 + [quantile=0.9] = 215327 + [quantile=0.99] = 215327 +For garbage_collector_attempt_to_orphan_queue_latency: + [quantile=0.5] = 18 + [quantile=0.9] = 18 + [quantile=0.99] = 18 +For garbage_collector_attempt_to_orphan_work_duration: + [quantile=0.5] = 2492 + [quantile=0.9] = 2492 + [quantile=0.99] = 2492 +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: + [quantile=0.5] = 6 + [quantile=0.9] = 8 + [quantile=0.99] = 28 +For garbage_collector_graph_changes_work_duration: + [quantile=0.5] = 16 + [quantile=0.9] = 32 + [quantile=0.99] = 67 +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: + [quantile=0.5] = 15 + [quantile=0.9] = 31 + [quantile=0.99] = 42 +For namespace_queue_latency_sum: + [] = 4282 +For namespace_queue_latency_count: + [] = 219 +For namespace_retries: + [] = 221 +For namespace_work_duration: + [quantile=0.5] = 163229 + [quantile=0.9] = 240526 + [quantile=0.99] = 286310 +For namespace_work_duration_sum: + [] = 33206364 +For namespace_work_duration_count: + [] = 219 +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] 
[sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:15:41.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-4563" for this suite. +Jun 24 16:15:47.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:15:47.163: INFO: namespace gc-4563 deletion completed in 6.10399564s + +• [SLOW TEST:36.206 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:15:47.165: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 24 16:15:47.212: INFO: Waiting up to 5m0s for pod "downwardapi-volume-533b7e61-969b-11e9-8bcb-526dc0a539dd" in namespace "projected-9338" to be "success or failure" +Jun 24 16:15:47.216: INFO: Pod "downwardapi-volume-533b7e61-969b-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.916759ms +Jun 24 16:15:49.220: INFO: Pod "downwardapi-volume-533b7e61-969b-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008116525s +STEP: Saw pod success +Jun 24 16:15:49.220: INFO: Pod "downwardapi-volume-533b7e61-969b-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:15:49.224: INFO: Trying to get logs from node minion pod downwardapi-volume-533b7e61-969b-11e9-8bcb-526dc0a539dd container client-container: +STEP: delete the pod +Jun 24 16:15:49.244: INFO: Waiting for pod downwardapi-volume-533b7e61-969b-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:15:49.247: INFO: Pod downwardapi-volume-533b7e61-969b-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:15:49.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9338" for this suite. +Jun 24 16:15:55.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:15:55.362: INFO: namespace projected-9338 deletion completed in 6.111499083s + +• [SLOW TEST:8.197 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide container's cpu request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-auth] ServiceAccounts + should allow opting out of API token automount [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:15:55.363: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow opting out of API token automount [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: getting the auto-created API token +Jun 24 16:15:55.914: INFO: created pod pod-service-account-defaultsa +Jun 24 16:15:55.914: INFO: pod pod-service-account-defaultsa service account token volume mount: true +Jun 24 16:15:55.925: INFO: created pod pod-service-account-mountsa +Jun 24 16:15:55.925: INFO: pod pod-service-account-mountsa service account token volume mount: true +Jun 24 16:15:55.939: INFO: created pod pod-service-account-nomountsa +Jun 24 16:15:55.939: INFO: pod pod-service-account-nomountsa service account token volume mount: false +Jun 24 16:15:55.948: INFO: created pod pod-service-account-defaultsa-mountspec +Jun 24 16:15:55.948: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true +Jun 24 16:15:55.956: INFO: created pod pod-service-account-mountsa-mountspec +Jun 24 16:15:55.956: INFO: pod pod-service-account-mountsa-mountspec service account token volume 
mount: true +Jun 24 16:15:55.967: INFO: created pod pod-service-account-nomountsa-mountspec +Jun 24 16:15:55.967: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true +Jun 24 16:15:55.976: INFO: created pod pod-service-account-defaultsa-nomountspec +Jun 24 16:15:55.976: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false +Jun 24 16:15:55.984: INFO: created pod pod-service-account-mountsa-nomountspec +Jun 24 16:15:55.984: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false +Jun 24 16:15:55.991: INFO: created pod pod-service-account-nomountsa-nomountspec +Jun 24 16:15:55.991: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false +[AfterEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:15:55.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-6781" for this suite. +Jun 24 16:16:18.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:16:18.097: INFO: namespace svcaccounts-6781 deletion completed in 22.099310819s + +• [SLOW TEST:22.734 seconds] +[sig-auth] ServiceAccounts +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 + should allow opting out of API token automount [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[k8s.io] Probing container + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:16:18.097: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 16:16:42.151: INFO: Container started at 2019-06-24 16:16:19 +0000 UTC, pod became ready at 2019-06-24 16:16:41 +0000 UTC +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:16:42.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-7940" for this suite. 
+Jun 24 16:17:04.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:17:04.262: INFO: namespace container-probe-7940 deletion completed in 22.1074384s + +• [SLOW TEST:46.165 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:17:04.266: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +Jun 24 16:17:04.298: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:17:07.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-6332" for this suite. 
+Jun 24 16:17:13.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:17:13.530: INFO: namespace init-container-6332 deletion completed in 6.098679077s + +• [SLOW TEST:9.265 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] Downward API volume + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:17:13.533: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 24 16:17:13.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86b63327-969b-11e9-8bcb-526dc0a539dd" in namespace "downward-api-4220" to be "success or failure" +Jun 24 16:17:13.584: INFO: Pod "downwardapi-volume-86b63327-969b-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.965738ms +Jun 24 16:17:15.589: INFO: Pod "downwardapi-volume-86b63327-969b-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013702351s +Jun 24 16:17:17.593: INFO: Pod "downwardapi-volume-86b63327-969b-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01788101s +STEP: Saw pod success +Jun 24 16:17:17.593: INFO: Pod "downwardapi-volume-86b63327-969b-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:17:17.599: INFO: Trying to get logs from node minion pod downwardapi-volume-86b63327-969b-11e9-8bcb-526dc0a539dd container client-container: +STEP: delete the pod +Jun 24 16:17:17.641: INFO: Waiting for pod downwardapi-volume-86b63327-969b-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:17:17.646: INFO: Pod downwardapi-volume-86b63327-969b-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:17:17.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4220" for this suite. 
+Jun 24 16:17:23.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:17:23.755: INFO: namespace downward-api-4220 deletion completed in 6.105455606s + +• [SLOW TEST:10.222 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] ReplicationController + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] ReplicationController + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:17:23.755: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename replication-controller +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating replication controller my-hostname-basic-8cce2f58-969b-11e9-8bcb-526dc0a539dd +Jun 24 16:17:23.799: INFO: Pod name my-hostname-basic-8cce2f58-969b-11e9-8bcb-526dc0a539dd: Found 0 pods out of 1 +Jun 24 16:17:28.804: INFO: Pod name my-hostname-basic-8cce2f58-969b-11e9-8bcb-526dc0a539dd: Found 1 pods out of 1 +Jun 24 16:17:28.804: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-8cce2f58-969b-11e9-8bcb-526dc0a539dd" are running +Jun 24 16:17:28.808: INFO: Pod "my-hostname-basic-8cce2f58-969b-11e9-8bcb-526dc0a539dd-x5286" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-24 16:17:23 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-24 16:17:26 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-24 16:17:26 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-24 16:17:23 +0000 UTC Reason: Message:}]) +Jun 24 16:17:28.808: INFO: Trying to dial the pod +Jun 24 16:17:33.825: INFO: Controller my-hostname-basic-8cce2f58-969b-11e9-8bcb-526dc0a539dd: Got expected result from replica 1 [my-hostname-basic-8cce2f58-969b-11e9-8bcb-526dc0a539dd-x5286]: "my-hostname-basic-8cce2f58-969b-11e9-8bcb-526dc0a539dd-x5286", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicationController + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:17:33.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replication-controller-681" for this suite. 
+Jun 24 16:17:39.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:17:39.922: INFO: namespace replication-controller-681 deletion completed in 6.093817068s + +• [SLOW TEST:16.167 seconds] +[sig-apps] ReplicationController +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Garbage collector + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Garbage collector + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:17:39.922: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename gc +STEP: Waiting for a default service account to be provisioned in namespace +[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the rc +STEP: delete the rc +STEP: wait for the rc to be deleted +STEP: Gathering metrics +Jun 24 16:17:46.018: INFO: For apiserver_request_total: +For apiserver_request_latencies_summary: +For apiserver_init_events_total: +For garbage_collector_attempt_to_delete_queue_latency: + [quantile=0.5] = 628 + [quantile=0.9] = 47039 + [quantile=0.99] = 55763 +For garbage_collector_attempt_to_delete_work_duration: + [quantile=0.5] = 41093 + [quantile=0.9] = 205299 + [quantile=0.99] = 206201 +For garbage_collector_attempt_to_orphan_queue_latency: + [quantile=0.5] = 18 + [quantile=0.9] = 18 + [quantile=0.99] = 18 +For garbage_collector_attempt_to_orphan_work_duration: + [quantile=0.5] = 2492 + [quantile=0.9] = 2492 + [quantile=0.99] = 2492 +For garbage_collector_dirty_processing_latency_microseconds: +For garbage_collector_event_processing_latency_microseconds: +For garbage_collector_graph_changes_queue_latency: + [quantile=0.5] = 6 + [quantile=0.9] = 8 + [quantile=0.99] = 37 +For garbage_collector_graph_changes_work_duration: + [quantile=0.5] = 17 + [quantile=0.9] = 32 + [quantile=0.99] = 65 +For garbage_collector_orphan_processing_latency_microseconds: +For namespace_queue_latency: + [quantile=0.5] = 15 + [quantile=0.9] = 29 + [quantile=0.99] = 42 +For namespace_queue_latency_sum: + [] = 4609 +For namespace_queue_latency_count: + [] = 235 +For namespace_retries: + [] = 238 +For namespace_work_duration: + [quantile=0.5] = 164087 + [quantile=0.9] = 259619 + [quantile=0.99] = 305926 +For namespace_work_duration_sum: + [] = 35443956 +For namespace_work_duration_count: + [] = 235 +For function_duration_seconds: +For errors_total: +For evicted_pods_total: + +[AfterEach] [sig-api-machinery] Garbage collector + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:17:46.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "gc-6296" for this suite. +Jun 24 16:17:52.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:17:52.123: INFO: namespace gc-6296 deletion completed in 6.099000161s + +• [SLOW TEST:12.201 seconds] +[sig-api-machinery] Garbage collector +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[k8s.io] Probing container + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:17:52.123: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-http in namespace container-probe-5811 +Jun 24 16:17:54.182: INFO: Started pod liveness-http in namespace container-probe-5811 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 24 16:17:54.185: INFO: Initial restart count of pod liveness-http is 0 +Jun 24 16:18:12.226: INFO: Restart count of pod container-probe-5811/liveness-http is now 1 (18.040782178s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:18:12.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-5811" for this suite. 
+Jun 24 16:18:18.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:18:18.345: INFO: namespace container-probe-5811 deletion completed in 6.101032897s + +• [SLOW TEST:26.222 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run default + should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:18:18.350: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl run default + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 +[It] should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 24 16:18:18.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8613' +Jun 24 16:18:19.043: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Jun 24 16:18:19.043: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" +STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created +[AfterEach] [k8s.io] Kubectl run default + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1324 +Jun 24 16:18:19.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete deployment e2e-test-nginx-deployment --namespace=kubectl-8613' +Jun 24 16:18:19.167: INFO: stderr: "" +Jun 24 16:18:19.167: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:18:19.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-8613" for this suite. 
+Jun 24 16:18:25.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:18:25.272: INFO: namespace kubectl-8613 deletion completed in 6.101412746s + +• [SLOW TEST:6.922 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run default + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create an rc or deployment from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:18:25.272: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +Jun 24 16:18:25.315: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:18:30.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-8082" for this suite. 
+Jun 24 16:18:52.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:18:52.618: INFO: namespace init-container-8082 deletion completed in 22.10202777s + +• [SLOW TEST:27.345 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should invoke init containers on a RestartAlways pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl expose + should create services for rc [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:18:52.618: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should create services for rc [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating Redis RC +Jun 24 16:18:52.661: INFO: namespace kubectl-1879 +Jun 24 16:18:52.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-1879' +Jun 24 16:18:52.941: INFO: stderr: "" +Jun 24 16:18:52.941: INFO: stdout: "replicationcontroller/redis-master created\n" +STEP: Waiting for Redis master to start. +Jun 24 16:18:53.945: INFO: Selector matched 1 pods for map[app:redis] +Jun 24 16:18:53.945: INFO: Found 0 / 1 +Jun 24 16:18:54.944: INFO: Selector matched 1 pods for map[app:redis] +Jun 24 16:18:54.944: INFO: Found 1 / 1 +Jun 24 16:18:54.944: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 +Jun 24 16:18:54.952: INFO: Selector matched 1 pods for map[app:redis] +Jun 24 16:18:54.952: INFO: ForEach: Found 1 pods from the filter. Now looping through them. +Jun 24 16:18:54.952: INFO: wait on redis-master startup in kubectl-1879 +Jun 24 16:18:54.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 logs redis-master-wsmdn redis-master --namespace=kubectl-1879' +Jun 24 16:18:55.073: INFO: stderr: "" +Jun 24 16:18:55.073: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 24 Jun 16:18:54.145 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Jun 16:18:54.145 # Server started, Redis version 3.2.12\n1:M 24 Jun 16:18:54.145 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Jun 16:18:54.145 * The server is now ready to accept connections on port 6379\n" +STEP: exposing RC +Jun 24 16:18:55.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1879' +Jun 24 16:18:55.223: INFO: stderr: "" +Jun 24 16:18:55.223: INFO: stdout: "service/rm2 exposed\n" +Jun 24 16:18:55.227: INFO: Service rm2 in namespace kubectl-1879 found. +STEP: exposing service +Jun 24 16:18:57.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1879' +Jun 24 16:18:57.344: INFO: stderr: "" +Jun 24 16:18:57.344: INFO: stdout: "service/rm3 exposed\n" +Jun 24 16:18:57.348: INFO: Service rm3 in namespace kubectl-1879 found. +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:18:59.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1879" for this suite. 
+Jun 24 16:19:21.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:19:21.460: INFO: namespace kubectl-1879 deletion completed in 22.103092718s + +• [SLOW TEST:28.842 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl expose + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create services for rc [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:19:21.461: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating the pod +Jun 24 16:19:26.038: INFO: Successfully updated pod "labelsupdated2f6d732-969b-11e9-8bcb-526dc0a539dd" +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:19:28.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-1159" for this suite. 
+Jun 24 16:19:50.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:19:50.182: INFO: namespace projected-1159 deletion completed in 22.113828642s + +• [SLOW TEST:28.721 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-storage] ConfigMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:19:50.184: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name cm-test-opt-del-e41598eb-969b-11e9-8bcb-526dc0a539dd +STEP: Creating configMap with name cm-test-opt-upd-e4159931-969b-11e9-8bcb-526dc0a539dd +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-e41598eb-969b-11e9-8bcb-526dc0a539dd +STEP: Updating configmap cm-test-opt-upd-e4159931-969b-11e9-8bcb-526dc0a539dd +STEP: Creating configMap with name cm-test-opt-create-e415995c-969b-11e9-8bcb-526dc0a539dd +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:21:14.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2084" for this suite. 
+Jun 24 16:21:36.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:21:36.992: INFO: namespace configmap-2084 deletion completed in 22.098328966s + +• [SLOW TEST:106.808 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:21:36.992: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0666 on node default medium +Jun 24 16:21:37.031: INFO: Waiting up to 5m0s for pod "pod-23be209c-969c-11e9-8bcb-526dc0a539dd" in namespace "emptydir-6037" to be "success or failure" +Jun 24 16:21:37.038: INFO: Pod "pod-23be209c-969c-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.741072ms +Jun 24 16:21:39.042: INFO: Pod "pod-23be209c-969c-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011056062s +STEP: Saw pod success +Jun 24 16:21:39.042: INFO: Pod "pod-23be209c-969c-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:21:39.045: INFO: Trying to get logs from node minion pod pod-23be209c-969c-11e9-8bcb-526dc0a539dd container test-container: +STEP: delete the pod +Jun 24 16:21:39.066: INFO: Waiting for pod pod-23be209c-969c-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:21:39.068: INFO: Pod pod-23be209c-969c-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:21:39.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6037" for this suite. 
+Jun 24 16:21:45.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:21:45.180: INFO: namespace emptydir-6037 deletion completed in 6.10938531s + +• [SLOW TEST:8.188 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:21:45.180: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name projected-secret-test-map-28a15e61-969c-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume secrets +Jun 24 16:21:45.236: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-28a1f701-969c-11e9-8bcb-526dc0a539dd" in namespace "projected-130" to be "success or failure" +Jun 24 16:21:45.241: INFO: Pod "pod-projected-secrets-28a1f701-969c-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.761995ms +Jun 24 16:21:47.246: INFO: Pod "pod-projected-secrets-28a1f701-969c-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00995235s +STEP: Saw pod success +Jun 24 16:21:47.246: INFO: Pod "pod-projected-secrets-28a1f701-969c-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:21:47.249: INFO: Trying to get logs from node minion pod pod-projected-secrets-28a1f701-969c-11e9-8bcb-526dc0a539dd container projected-secret-volume-test: +STEP: delete the pod +Jun 24 16:21:47.273: INFO: Waiting for pod pod-projected-secrets-28a1f701-969c-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:21:47.277: INFO: Pod pod-projected-secrets-28a1f701-969c-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:21:47.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-130" for this suite. 
+Jun 24 16:21:53.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:21:53.377: INFO: namespace projected-130 deletion completed in 6.096575535s + +• [SLOW TEST:8.197 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:21:53.378: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod pod-subpath-test-projected-vcfv +STEP: Creating a pod to test atomic-volume-subpath +Jun 24 16:21:53.429: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vcfv" in namespace "subpath-4714" to be "success or failure" +Jun 24 16:21:53.431: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.693831ms +Jun 24 16:21:55.435: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Running", Reason="", readiness=true. Elapsed: 2.006776161s +Jun 24 16:21:57.439: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Running", Reason="", readiness=true. Elapsed: 4.010553249s +Jun 24 16:21:59.443: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Running", Reason="", readiness=true. Elapsed: 6.014540742s +Jun 24 16:22:01.447: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Running", Reason="", readiness=true. Elapsed: 8.018623165s +Jun 24 16:22:03.452: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Running", Reason="", readiness=true. Elapsed: 10.023047172s +Jun 24 16:22:05.456: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Running", Reason="", readiness=true. Elapsed: 12.027318521s +Jun 24 16:22:07.460: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Running", Reason="", readiness=true. Elapsed: 14.031520483s +Jun 24 16:22:09.464: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Running", Reason="", readiness=true. Elapsed: 16.035831112s +Jun 24 16:22:11.468: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.039835791s +Jun 24 16:22:13.472: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Running", Reason="", readiness=true. Elapsed: 20.043873736s +Jun 24 16:22:15.477: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Running", Reason="", readiness=true. Elapsed: 22.048279605s +Jun 24 16:22:17.481: INFO: Pod "pod-subpath-test-projected-vcfv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.052315944s +STEP: Saw pod success +Jun 24 16:22:17.481: INFO: Pod "pod-subpath-test-projected-vcfv" satisfied condition "success or failure" +Jun 24 16:22:17.484: INFO: Trying to get logs from node minion pod pod-subpath-test-projected-vcfv container test-container-subpath-projected-vcfv: +STEP: delete the pod +Jun 24 16:22:17.513: INFO: Waiting for pod pod-subpath-test-projected-vcfv to disappear +Jun 24 16:22:17.516: INFO: Pod pod-subpath-test-projected-vcfv no longer exists +STEP: Deleting pod pod-subpath-test-projected-vcfv +Jun 24 16:22:17.516: INFO: Deleting pod "pod-subpath-test-projected-vcfv" in namespace "subpath-4714" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:22:17.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-4714" for this suite. +Jun 24 16:22:23.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:22:23.613: INFO: namespace subpath-4714 deletion completed in 6.091157664s + +• [SLOW TEST:30.236 seconds] +[sig-storage] Subpath +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with projected pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:22:23.614: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0644 on node default medium +Jun 24 16:22:23.667: INFO: Waiting up to 5m0s for pod "pod-3f8a241e-969c-11e9-8bcb-526dc0a539dd" in namespace "emptydir-1883" to be "success or failure" +Jun 24 16:22:23.676: INFO: Pod "pod-3f8a241e-969c-11e9-8bcb-526dc0a539dd": Phase="Pending", 
Reason="", readiness=false. Elapsed: 8.81369ms +Jun 24 16:22:25.679: INFO: Pod "pod-3f8a241e-969c-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012209928s +Jun 24 16:22:27.683: INFO: Pod "pod-3f8a241e-969c-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016051296s +STEP: Saw pod success +Jun 24 16:22:27.683: INFO: Pod "pod-3f8a241e-969c-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:22:27.687: INFO: Trying to get logs from node minion pod pod-3f8a241e-969c-11e9-8bcb-526dc0a539dd container test-container: +STEP: delete the pod +Jun 24 16:22:27.724: INFO: Waiting for pod pod-3f8a241e-969c-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:22:27.727: INFO: Pod pod-3f8a241e-969c-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:22:27.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-1883" for this suite. +Jun 24 16:22:33.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:22:33.831: INFO: namespace emptydir-1883 deletion completed in 6.100814704s + +• [SLOW TEST:10.217 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should retry creating failed daemon pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:22:33.832: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should retry creating failed daemon pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. 
+Jun 24 16:22:33.894: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:22:33.898: INFO: Number of nodes with available pods: 0 +Jun 24 16:22:33.898: INFO: Node minion is running more than one daemon pod +Jun 24 16:22:34.903: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:22:34.908: INFO: Number of nodes with available pods: 0 +Jun 24 16:22:34.908: INFO: Node minion is running more than one daemon pod +Jun 24 16:22:35.903: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:22:35.907: INFO: Number of nodes with available pods: 0 +Jun 24 16:22:35.907: INFO: Node minion is running more than one daemon pod +Jun 24 16:22:36.903: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:22:36.907: INFO: Number of nodes with available pods: 1 +Jun 24 16:22:36.907: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. +Jun 24 16:22:36.924: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:22:36.930: INFO: Number of nodes with available pods: 1 +Jun 24 16:22:36.930: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Wait for the failed daemon pod to be completely deleted. +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7067, will wait for the garbage collector to delete the pods +Jun 24 16:22:38.008: INFO: Deleting DaemonSet.extensions daemon-set took: 6.541326ms +Jun 24 16:22:38.308: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.358131ms +Jun 24 16:22:46.812: INFO: Number of nodes with available pods: 0 +Jun 24 16:22:46.812: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 24 16:22:46.815: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7067/daemonsets","resourceVersion":"10758"},"items":null} + +Jun 24 16:22:46.818: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7067/pods","resourceVersion":"10758"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:22:46.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-7067" for this suite. 
+Jun 24 16:22:52.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:22:52.929: INFO: namespace daemonsets-7067 deletion completed in 6.099996889s + +• [SLOW TEST:19.097 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should retry creating failed daemon pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl version + should check is all data is printed [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:22:52.930: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should check is all data is printed [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 16:22:52.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 version' +Jun 24 16:22:53.064: INFO: stderr: "" +Jun 24 16:22:53.065: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"14\", GitVersion:\"v1.14.3\", GitCommit:\"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0\", GitTreeState:\"clean\", BuildDate:\"2019-06-06T01:44:30Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"14\", GitVersion:\"v1.14.3\", GitCommit:\"5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0\", GitTreeState:\"clean\", BuildDate:\"2019-06-06T01:36:19Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:22:53.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-5492" for this suite. 
+Jun 24 16:22:59.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:22:59.190: INFO: namespace kubectl-5492 deletion completed in 6.118448783s + +• [SLOW TEST:6.259 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl version + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should check is all data is printed [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run job + should create a job from an image when restart is OnFailure [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:22:59.190: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl run job + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1510 +[It] should create a job from an image when restart is OnFailure [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 24 16:22:59.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1568' +Jun 24 16:22:59.354: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Jun 24 16:22:59.354: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" +STEP: verifying the job e2e-test-nginx-job was created +[AfterEach] [k8s.io] Kubectl run job + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1515 +Jun 24 16:22:59.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete jobs e2e-test-nginx-job --namespace=kubectl-1568' +Jun 24 16:22:59.455: INFO: stderr: "" +Jun 24 16:22:59.455: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:22:59.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-1568" for this suite. +Jun 24 16:23:21.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:23:21.571: INFO: namespace kubectl-1568 deletion completed in 22.107682037s + +• [SLOW TEST:22.381 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run job + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create a job from an image when restart is OnFailure [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Namespaces [Serial] + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:23:21.582: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename namespaces +STEP: Waiting for a default service account to be provisioned in namespace +[It] should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a test namespace +STEP: Waiting for a default service account to be provisioned in namespace +STEP: Creating a service in the namespace +STEP: Deleting the namespace +STEP: Waiting for the namespace to be removed. 
+STEP: Recreating the namespace +STEP: Verifying there is no service in the namespace +[AfterEach] [sig-api-machinery] Namespaces [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:23:27.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "namespaces-2313" for this suite. +Jun 24 16:23:33.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:23:33.823: INFO: namespace namespaces-2313 deletion completed in 6.095178922s +STEP: Destroying namespace "nsdeletetest-7558" for this suite. +Jun 24 16:23:33.825: INFO: Namespace nsdeletetest-7558 was already deleted +STEP: Destroying namespace "nsdeletetest-9541" for this suite. +Jun 24 16:23:39.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:23:39.909: INFO: namespace nsdeletetest-9541 deletion completed in 6.084154098s + +• [SLOW TEST:18.327 seconds] +[sig-api-machinery] Namespaces [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should ensure that all services are removed when a namespace is deleted [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Probing container + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:23:39.910: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-exec in namespace container-probe-6 +Jun 24 16:23:41.957: INFO: Started pod liveness-exec in namespace container-probe-6 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 24 16:23:41.960: INFO: Initial restart count of pod liveness-exec is 0 +Jun 24 16:24:32.064: INFO: Restart count of pod container-probe-6/liveness-exec is now 1 (50.103348867s elapsed) +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:24:32.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying 
namespace "container-probe-6" for this suite. +Jun 24 16:24:38.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:24:38.194: INFO: namespace container-probe-6 deletion completed in 6.107504629s + +• [SLOW TEST:58.285 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:24:38.202: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name cm-test-opt-del-8fc2a413-969c-11e9-8bcb-526dc0a539dd +STEP: Creating configMap with name cm-test-opt-upd-8fc2a4eb-969c-11e9-8bcb-526dc0a539dd +STEP: Creating the pod +STEP: Deleting configmap cm-test-opt-del-8fc2a413-969c-11e9-8bcb-526dc0a539dd +STEP: Updating configmap cm-test-opt-upd-8fc2a4eb-969c-11e9-8bcb-526dc0a539dd +STEP: Creating configMap with name cm-test-opt-create-8fc2a50c-969c-11e9-8bcb-526dc0a539dd +STEP: waiting to observe update in volume +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:25:50.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9015" for this suite. 
+Jun 24 16:26:12.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:26:12.934: INFO: namespace projected-9015 deletion completed in 22.105141804s + +• [SLOW TEST:94.732 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + optional updates should be reflected in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:26:12.934: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 24 16:26:12.975: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8378e08-969c-11e9-8bcb-526dc0a539dd" in namespace "projected-9176" to be "success or failure" +Jun 24 16:26:12.979: INFO: Pod "downwardapi-volume-c8378e08-969c-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154901ms +Jun 24 16:26:14.984: INFO: Pod "downwardapi-volume-c8378e08-969c-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008437277s +STEP: Saw pod success +Jun 24 16:26:14.984: INFO: Pod "downwardapi-volume-c8378e08-969c-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:26:14.988: INFO: Trying to get logs from node minion pod downwardapi-volume-c8378e08-969c-11e9-8bcb-526dc0a539dd container client-container: +STEP: delete the pod +Jun 24 16:26:15.027: INFO: Waiting for pod downwardapi-volume-c8378e08-969c-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:26:15.030: INFO: Pod downwardapi-volume-c8378e08-969c-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:26:15.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9176" for this suite. 
+Jun 24 16:26:21.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:26:21.141: INFO: namespace projected-9176 deletion completed in 6.106487039s + +• [SLOW TEST:8.207 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run deployment + should create a deployment from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:26:21.143: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl run deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1455 +[It] should create a deployment from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 24 16:26:21.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/v1beta1 --namespace=kubectl-440' +Jun 24 16:26:21.304: INFO: stderr: "kubectl run --generator=deployment/v1beta1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Jun 24 16:26:21.304: INFO: stdout: "deployment.extensions/e2e-test-nginx-deployment created\n" +STEP: verifying the deployment e2e-test-nginx-deployment was created +STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created +[AfterEach] [k8s.io] Kubectl run deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 +Jun 24 16:26:23.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete deployment e2e-test-nginx-deployment --namespace=kubectl-440' +Jun 24 16:26:23.435: INFO: stderr: "" +Jun 24 16:26:23.435: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:26:23.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-440" for this suite. +Jun 24 16:26:45.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:26:45.531: INFO: namespace kubectl-440 deletion completed in 22.091399666s + +• [SLOW TEST:24.389 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create a deployment from an image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should get a host IP [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:26:45.532: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should get a host IP [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating pod +Jun 24 16:26:47.581: INFO: Pod pod-hostip-dba4eedd-969c-11e9-8bcb-526dc0a539dd has hostIP: 10.1.0.12 +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:26:47.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-326" for this suite. 
+Jun 24 16:27:09.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:27:09.687: INFO: namespace pods-326 deletion completed in 22.102225994s + +• [SLOW TEST:24.156 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should get a host IP [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:27:09.692: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-map-ea0c69af-969c-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume configMaps +Jun 24 16:27:09.735: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ea0ce8ff-969c-11e9-8bcb-526dc0a539dd" in namespace "projected-9728" to be "success or failure" +Jun 24 16:27:09.744: INFO: Pod "pod-projected-configmaps-ea0ce8ff-969c-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.662598ms +Jun 24 16:27:11.748: INFO: Pod "pod-projected-configmaps-ea0ce8ff-969c-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012518992s +Jun 24 16:27:13.752: INFO: Pod "pod-projected-configmaps-ea0ce8ff-969c-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016563024s +STEP: Saw pod success +Jun 24 16:27:13.752: INFO: Pod "pod-projected-configmaps-ea0ce8ff-969c-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:27:13.755: INFO: Trying to get logs from node minion pod pod-projected-configmaps-ea0ce8ff-969c-11e9-8bcb-526dc0a539dd container projected-configmap-volume-test: +STEP: delete the pod +Jun 24 16:27:13.779: INFO: Waiting for pod pod-projected-configmaps-ea0ce8ff-969c-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:27:13.781: INFO: Pod pod-projected-configmaps-ea0ce8ff-969c-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:27:13.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9728" for this suite. 
+Jun 24 16:27:19.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:27:19.879: INFO: namespace projected-9728 deletion completed in 6.094879475s + +• [SLOW TEST:10.187 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should run and stop simple daemon [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:27:19.879: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should run and stop simple daemon [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating simple DaemonSet "daemon-set" +STEP: Check that daemon pods launch on every node of the cluster. +Jun 24 16:27:19.942: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:19.947: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:19.947: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:20.952: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:20.956: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:20.956: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:21.952: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:21.957: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:21.957: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:22.951: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:22.955: INFO: Number of nodes with available pods: 1 +Jun 24 16:27:22.955: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Stop a daemon pod, check that the daemon pod is revived. 
+Jun 24 16:27:22.971: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:22.973: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:22.973: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:23.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:23.982: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:23.982: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:24.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:24.982: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:24.983: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:25.979: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:25.982: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:25.982: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:26.986: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:26.990: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:26.990: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:27.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:27.982: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:27.982: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:28.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:28.981: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:28.981: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:29.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:29.981: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:29.981: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:30.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:30.982: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:30.982: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:31.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:31.982: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:31.982: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:32.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:32.982: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:32.982: INFO: Node minion is running more 
than one daemon pod +Jun 24 16:27:33.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:33.982: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:33.982: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:34.979: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:34.982: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:34.982: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:35.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:35.982: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:35.982: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:36.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:36.982: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:36.982: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:37.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:37.983: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:37.983: INFO: Node minion is running more than one daemon pod +Jun 24 16:27:38.978: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:27:38.982: INFO: Number of nodes with available pods: 1 +Jun 24 16:27:38.982: INFO: Number of running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8011, will wait for the garbage collector to delete the pods +Jun 24 16:27:39.047: INFO: Deleting DaemonSet.extensions daemon-set took: 8.68959ms +Jun 24 16:27:39.347: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.445072ms +Jun 24 16:27:46.858: INFO: Number of nodes with available pods: 0 +Jun 24 16:27:46.858: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 24 16:27:46.861: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8011/daemonsets","resourceVersion":"11527"},"items":null} + +Jun 24 16:27:46.865: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8011/pods","resourceVersion":"11527"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:27:46.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-8011" for this suite. 
+Jun 24 16:27:52.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:27:52.974: INFO: namespace daemonsets-8011 deletion completed in 6.099412421s + +• [SLOW TEST:33.095 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should run and stop simple daemon [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:27:52.976: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 16:27:53.013: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) +Jun 24 16:27:53.024: INFO: Pod name sample-pod: Found 0 pods out of 1 +Jun 24 16:27:58.028: INFO: Pod name sample-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Jun 24 16:27:58.029: INFO: Creating deployment "test-rolling-update-deployment" +Jun 24 16:27:58.035: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has +Jun 24 16:27:58.041: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created +Jun 24 16:28:00.048: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected +Jun 24 16:28:00.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696990478, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696990478, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696990478, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696990478, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67599b4d9\" is progressing."}}, 
CollisionCount:(*int32)(nil)} +Jun 24 16:28:02.056: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 24 16:28:02.067: INFO: Deployment "test-rolling-update-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-6995,SelfLink:/apis/apps/v1/namespaces/deployment-6995/deployments/test-rolling-update-deployment,UID:06d71de3-969d-11e9-b70d-fa163ef83c94,ResourceVersion:11611,Generation:1,CreationTimestamp:2019-06-24 16:27:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-06-24 16:27:58 +0000 UTC 2019-06-24 16:27:58 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-06-24 16:28:00 +0000 UTC 2019-06-24 16:27:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-67599b4d9" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} + +Jun 24 16:28:02.070: INFO: New ReplicaSet "test-rolling-update-deployment-67599b4d9" of Deployment "test-rolling-update-deployment": 
+&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-67599b4d9,GenerateName:,Namespace:deployment-6995,SelfLink:/apis/apps/v1/namespaces/deployment-6995/replicasets/test-rolling-update-deployment-67599b4d9,UID:06d95fbc-969d-11e9-b70d-fa163ef83c94,ResourceVersion:11601,Generation:1,CreationTimestamp:2019-06-24 16:27:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 67599b4d9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 06d71de3-969d-11e9-b70d-fa163ef83c94 0xc0023e7f10 0xc0023e7f11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67599b4d9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 67599b4d9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} +Jun 24 16:28:02.070: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": +Jun 24 16:28:02.070: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-6995,SelfLink:/apis/apps/v1/namespaces/deployment-6995/replicasets/test-rolling-update-controller,UID:03d9c33e-969d-11e9-b70d-fa163ef83c94,ResourceVersion:11610,Generation:2,CreationTimestamp:2019-06-24 16:27:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 06d71de3-969d-11e9-b70d-fa163ef83c94 0xc0023e7e47 
0xc0023e7e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 24 16:28:02.074: INFO: Pod "test-rolling-update-deployment-67599b4d9-s8pq5" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-67599b4d9-s8pq5,GenerateName:test-rolling-update-deployment-67599b4d9-,Namespace:deployment-6995,SelfLink:/api/v1/namespaces/deployment-6995/pods/test-rolling-update-deployment-67599b4d9-s8pq5,UID:06d9f570-969d-11e9-b70d-fa163ef83c94,ResourceVersion:11600,Generation:0,CreationTimestamp:2019-06-24 16:27:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 67599b4d9,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-67599b4d9 06d95fbc-969d-11e9-b70d-fa163ef83c94 0xc001d55530 0xc001d55531}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t9s2s {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t9s2s,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-t9s2s true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001d55770} {node.kubernetes.io/unreachable Exists NoExecute 0xc001d557a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:27:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:28:00 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:28:00 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:27:58 +0000 UTC }],Message:,Reason:,HostIP:10.1.0.12,PodIP:10.251.128.6,StartTime:2019-06-24 16:27:58 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-06-24 16:27:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://12b53f6b3a14d20287a776e029988673ba3f9e2b8264289b1b52a68212c5ae39}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:28:02.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-6995" for this suite. 
+Jun 24 16:28:08.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:28:08.175: INFO: namespace deployment-6995 deletion completed in 6.097879656s + +• [SLOW TEST:15.200 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + RollingUpdateDeployment should delete old pods and create new ones [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-storage] Secrets + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:28:08.177: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-map-0ce7e4f1-969d-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume secrets +Jun 24 16:28:08.218: INFO: Waiting up to 5m0s for pod "pod-secrets-0ce896ac-969d-11e9-8bcb-526dc0a539dd" in namespace "secrets-2402" to be "success or failure" +Jun 24 16:28:08.221: INFO: Pod "pod-secrets-0ce896ac-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.069447ms +Jun 24 16:28:10.225: INFO: Pod "pod-secrets-0ce896ac-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007160226s +Jun 24 16:28:12.229: INFO: Pod "pod-secrets-0ce896ac-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011102081s +STEP: Saw pod success +Jun 24 16:28:12.229: INFO: Pod "pod-secrets-0ce896ac-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:28:12.233: INFO: Trying to get logs from node minion pod pod-secrets-0ce896ac-969d-11e9-8bcb-526dc0a539dd container secret-volume-test: +STEP: delete the pod +Jun 24 16:28:12.261: INFO: Waiting for pod pod-secrets-0ce896ac-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:28:12.266: INFO: Pod pod-secrets-0ce896ac-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:28:12.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-2402" for this suite. 
+Jun 24 16:28:18.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:28:18.373: INFO: namespace secrets-2402 deletion completed in 6.103259476s + +• [SLOW TEST:10.196 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be consumable from pods in volume with mappings [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve multiport endpoints from pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:28:18.373: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:86 +[It] should serve multiport endpoints from pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating service multi-endpoint-test in namespace services-5750 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5750 to expose endpoints map[] +Jun 24 16:28:18.426: INFO: Get endpoints failed (3.57465ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found +Jun 24 16:28:19.430: INFO: successfully validated that service multi-endpoint-test in namespace services-5750 exposes endpoints map[] (1.007140318s elapsed) +STEP: Creating pod pod1 in namespace services-5750 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5750 to expose endpoints map[pod1:[100]] +Jun 24 16:28:22.469: INFO: successfully validated that service multi-endpoint-test in namespace services-5750 exposes endpoints map[pod1:[100]] (3.030198836s elapsed) +STEP: Creating pod pod2 in namespace services-5750 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5750 to expose endpoints map[pod1:[100] pod2:[101]] +Jun 24 16:28:24.511: INFO: successfully validated that service multi-endpoint-test in namespace services-5750 exposes endpoints map[pod1:[100] pod2:[101]] (2.034977377s elapsed) +STEP: Deleting pod pod1 in namespace services-5750 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5750 to expose endpoints map[pod2:[101]] +Jun 24 16:28:24.534: INFO: successfully validated that service multi-endpoint-test in namespace services-5750 exposes endpoints map[pod2:[101]] (14.727633ms elapsed) +STEP: Deleting pod pod2 in namespace services-5750 +STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5750 to expose endpoints map[] +Jun 24 16:28:25.545: INFO: successfully validated that service 
multi-endpoint-test in namespace services-5750 exposes endpoints map[] (1.005643982s elapsed) +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:28:25.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-5750" for this suite. +Jun 24 16:28:47.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:28:47.676: INFO: namespace services-5750 deletion completed in 22.098083427s +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 + +• [SLOW TEST:29.303 seconds] +[sig-network] Services +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should serve multiport endpoints from pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected secret + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:28:47.677: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating projection with secret that has name projected-secret-test-2474b969-969d-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume secrets +Jun 24 16:28:47.728: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-24756fb1-969d-11e9-8bcb-526dc0a539dd" in namespace "projected-9394" to be "success or failure" +Jun 24 16:28:47.730: INFO: Pod "pod-projected-secrets-24756fb1-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254737ms +Jun 24 16:28:49.734: INFO: Pod "pod-projected-secrets-24756fb1-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006203406s +STEP: Saw pod success +Jun 24 16:28:49.734: INFO: Pod "pod-projected-secrets-24756fb1-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:28:49.737: INFO: Trying to get logs from node minion pod pod-projected-secrets-24756fb1-969d-11e9-8bcb-526dc0a539dd container projected-secret-volume-test: +STEP: delete the pod +Jun 24 16:28:49.760: INFO: Waiting for pod pod-projected-secrets-24756fb1-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:28:49.765: INFO: Pod pod-projected-secrets-24756fb1-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:28:49.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9394" for this suite. +Jun 24 16:28:55.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:28:55.873: INFO: namespace projected-9394 deletion completed in 6.105029639s + +• [SLOW TEST:8.197 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:28:55.874: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test override command +Jun 24 16:28:55.925: INFO: Waiting up to 5m0s for pod "client-containers-2957ef97-969d-11e9-8bcb-526dc0a539dd" in namespace "containers-8487" to be "success or failure" +Jun 24 16:28:55.932: INFO: Pod "client-containers-2957ef97-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.585572ms +Jun 24 16:28:57.936: INFO: Pod "client-containers-2957ef97-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.01027915s +STEP: Saw pod success +Jun 24 16:28:57.936: INFO: Pod "client-containers-2957ef97-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:28:57.941: INFO: Trying to get logs from node minion pod client-containers-2957ef97-969d-11e9-8bcb-526dc0a539dd container test-container: +STEP: delete the pod +Jun 24 16:28:57.964: INFO: Waiting for pod client-containers-2957ef97-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:28:57.966: INFO: Pod client-containers-2957ef97-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:28:57.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-8487" for this suite. +Jun 24 16:29:03.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:29:04.070: INFO: namespace containers-8487 deletion completed in 6.100634312s + +• [SLOW TEST:8.195 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:29:04.070: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating the pod +Jun 24 16:29:06.652: INFO: Successfully updated pod "labelsupdate2e3b0771-969d-11e9-8bcb-526dc0a539dd" +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:29:08.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-6239" for this suite. 
+Jun 24 16:29:30.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:29:30.793: INFO: namespace downward-api-6239 deletion completed in 22.109019875s + +• [SLOW TEST:26.723 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should update labels on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should provide secure master service [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:29:30.793: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:86 +[It] should provide secure master service [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:29:30.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-9169" for this suite. 
+Jun 24 16:29:36.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:29:36.941: INFO: namespace services-9169 deletion completed in 6.104507132s +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 + +• [SLOW TEST:6.148 seconds] +[sig-network] Services +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide secure master service [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:29:36.942: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-41d160d0-969d-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume configMaps +Jun 24 16:29:36.985: INFO: Waiting up to 5m0s for pod "pod-configmaps-41d1bd5d-969d-11e9-8bcb-526dc0a539dd" in namespace "configmap-5793" to be "success or failure" +Jun 24 16:29:36.992: INFO: Pod "pod-configmaps-41d1bd5d-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.842962ms +Jun 24 16:29:38.996: INFO: Pod "pod-configmaps-41d1bd5d-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011210032s +STEP: Saw pod success +Jun 24 16:29:38.996: INFO: Pod "pod-configmaps-41d1bd5d-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:29:39.000: INFO: Trying to get logs from node minion pod pod-configmaps-41d1bd5d-969d-11e9-8bcb-526dc0a539dd container configmap-volume-test: +STEP: delete the pod +Jun 24 16:29:39.028: INFO: Waiting for pod pod-configmaps-41d1bd5d-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:29:39.031: INFO: Pod pod-configmaps-41d1bd5d-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:29:39.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-5793" for this suite. 
+Jun 24 16:29:45.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:29:45.146: INFO: namespace configmap-5793 deletion completed in 6.111584447s + +• [SLOW TEST:8.204 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected combined + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected combined + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:29:45.147: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-projected-all-test-volume-46b52af8-969d-11e9-8bcb-526dc0a539dd +STEP: Creating secret with name secret-projected-all-test-volume-46b52ad9-969d-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test Check all projections for projected volume plugin +Jun 24 16:29:45.201: INFO: Waiting up to 5m0s for pod "projected-volume-46b52a7c-969d-11e9-8bcb-526dc0a539dd" in namespace "projected-9562" to be "success or failure" +Jun 24 16:29:45.212: INFO: Pod "projected-volume-46b52a7c-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.128848ms +Jun 24 16:29:47.216: INFO: Pod "projected-volume-46b52a7c-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014396754s +STEP: Saw pod success +Jun 24 16:29:47.216: INFO: Pod "projected-volume-46b52a7c-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:29:47.220: INFO: Trying to get logs from node minion pod projected-volume-46b52a7c-969d-11e9-8bcb-526dc0a539dd container projected-all-volume-test: +STEP: delete the pod +Jun 24 16:29:47.246: INFO: Waiting for pod projected-volume-46b52a7c-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:29:47.249: INFO: Pod projected-volume-46b52a7c-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected combined + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:29:47.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9562" for this suite. 
+Jun 24 16:29:53.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:29:53.350: INFO: namespace projected-9562 deletion completed in 6.098131343s + +• [SLOW TEST:8.203 seconds] +[sig-storage] Projected combined +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 + should project all components that make up the projection API [Projection][NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Daemon set [Serial] + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:29:53.355: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename daemonsets +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102 +[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 16:29:53.411: INFO: Creating simple daemon set daemon-set +STEP: Check that daemon pods launch on every node of the cluster. +Jun 24 16:29:53.418: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:29:53.427: INFO: Number of nodes with available pods: 0 +Jun 24 16:29:53.427: INFO: Node minion is running more than one daemon pod +Jun 24 16:29:54.431: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:29:54.434: INFO: Number of nodes with available pods: 0 +Jun 24 16:29:54.434: INFO: Node minion is running more than one daemon pod +Jun 24 16:29:55.431: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:29:55.435: INFO: Number of nodes with available pods: 1 +Jun 24 16:29:55.435: INFO: Number of running nodes: 1, number of available pods: 1 +STEP: Update daemon pods image. +STEP: Check that daemon pods images are updated. +Jun 24 16:29:55.458: INFO: Wrong image for pod: daemon-set-4vc8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 24 16:29:55.468: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:29:56.472: INFO: Wrong image for pod: daemon-set-4vc8s. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 24 16:29:56.477: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:29:57.472: INFO: Wrong image for pod: daemon-set-4vc8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 24 16:29:57.476: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:29:58.472: INFO: Wrong image for pod: daemon-set-4vc8s. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. +Jun 24 16:29:58.472: INFO: Pod daemon-set-4vc8s is not available +Jun 24 16:29:58.476: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:29:59.472: INFO: Pod daemon-set-6p2wc is not available +Jun 24 16:29:59.478: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +STEP: Check that daemon pods are still running on every node of the cluster. +Jun 24 16:29:59.482: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:29:59.486: INFO: Number of nodes with available pods: 0 +Jun 24 16:29:59.486: INFO: Node minion is running more than one daemon pod +Jun 24 16:30:00.490: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:30:00.494: INFO: Number of nodes with available pods: 0 +Jun 24 16:30:00.494: INFO: Node minion is running more than one daemon pod +Jun 24 16:30:01.492: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node +Jun 24 16:30:01.495: INFO: Number of nodes with available pods: 1 +Jun 24 16:30:01.495: INFO: Number of running nodes: 1, number of available pods: 1 +[AfterEach] [sig-apps] Daemon set [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68 +STEP: Deleting DaemonSet "daemon-set" +STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8312, will wait for the garbage collector to delete the pods +Jun 24 16:30:01.571: INFO: Deleting DaemonSet.extensions daemon-set took: 7.391006ms +Jun 24 16:30:01.871: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.392311ms +Jun 24 16:30:05.575: INFO: Number of nodes with available pods: 0 +Jun 24 16:30:05.575: INFO: Number of running nodes: 0, number of available pods: 0 +Jun 24 16:30:05.578: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8312/daemonsets","resourceVersion":"12089"},"items":null} + +Jun 24 16:30:05.581: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8312/pods","resourceVersion":"12089"},"items":null} + +[AfterEach] [sig-apps] Daemon set [Serial] + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:30:05.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "daemonsets-8312" for this suite. +Jun 24 16:30:11.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:30:11.693: INFO: namespace daemonsets-8312 deletion completed in 6.091929589s + +• [SLOW TEST:18.339 seconds] +[sig-apps] Daemon set [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should update pod when spec was updated and update strategy is RollingUpdate [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:30:11.694: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0777 on node default medium +Jun 24 16:30:11.750: INFO: Waiting up to 5m0s for pod "pod-56889c21-969d-11e9-8bcb-526dc0a539dd" in namespace "emptydir-9880" to be "success or failure" +Jun 24 16:30:11.753: INFO: Pod "pod-56889c21-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.808394ms +Jun 24 16:30:13.757: INFO: Pod "pod-56889c21-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007653131s +STEP: Saw pod success +Jun 24 16:30:13.757: INFO: Pod "pod-56889c21-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:30:13.766: INFO: Trying to get logs from node minion pod pod-56889c21-969d-11e9-8bcb-526dc0a539dd container test-container: +STEP: delete the pod +Jun 24 16:30:13.787: INFO: Waiting for pod pod-56889c21-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:30:13.791: INFO: Pod pod-56889c21-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:30:13.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9880" for this suite. 
+Jun 24 16:30:19.806: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:30:19.890: INFO: namespace emptydir-9880 deletion completed in 6.09541278s + +• [SLOW TEST:8.196 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[sig-storage] Projected downwardAPI + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:30:19.890: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 24 16:30:19.927: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b69e985-969d-11e9-8bcb-526dc0a539dd" in namespace "projected-6258" to be "success or failure" +Jun 24 16:30:19.945: INFO: Pod "downwardapi-volume-5b69e985-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.239751ms +Jun 24 16:30:21.949: INFO: Pod "downwardapi-volume-5b69e985-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021678739s +STEP: Saw pod success +Jun 24 16:30:21.949: INFO: Pod "downwardapi-volume-5b69e985-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:30:21.954: INFO: Trying to get logs from node minion pod downwardapi-volume-5b69e985-969d-11e9-8bcb-526dc0a539dd container client-container: +STEP: delete the pod +Jun 24 16:30:21.980: INFO: Waiting for pod downwardapi-volume-5b69e985-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:30:21.986: INFO: Pod downwardapi-volume-5b69e985-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:30:21.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-6258" for this suite. 
+Jun 24 16:30:28.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:30:28.104: INFO: namespace projected-6258 deletion completed in 6.114964713s + +• [SLOW TEST:8.214 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SS +------------------------------ +[sig-apps] ReplicaSet + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] ReplicaSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:30:28.105: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename replicaset +STEP: Waiting for a default service account to be provisioned in namespace +[It] should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 16:30:28.146: INFO: Creating ReplicaSet my-hostname-basic-6050fef5-969d-11e9-8bcb-526dc0a539dd +Jun 24 16:30:28.154: INFO: Pod name my-hostname-basic-6050fef5-969d-11e9-8bcb-526dc0a539dd: Found 0 pods out of 1 +Jun 24 16:30:33.159: INFO: Pod name my-hostname-basic-6050fef5-969d-11e9-8bcb-526dc0a539dd: Found 1 pods out of 1 +Jun 24 16:30:33.159: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-6050fef5-969d-11e9-8bcb-526dc0a539dd" is running +Jun 24 16:30:33.163: INFO: Pod "my-hostname-basic-6050fef5-969d-11e9-8bcb-526dc0a539dd-6zmnk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-24 16:30:28 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-24 16:30:30 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-24 16:30:30 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-06-24 16:30:28 +0000 UTC Reason: Message:}]) +Jun 24 16:30:33.163: INFO: Trying to dial the pod +Jun 24 16:30:38.180: INFO: Controller my-hostname-basic-6050fef5-969d-11e9-8bcb-526dc0a539dd: Got expected result from replica 1 [my-hostname-basic-6050fef5-969d-11e9-8bcb-526dc0a539dd-6zmnk]: "my-hostname-basic-6050fef5-969d-11e9-8bcb-526dc0a539dd-6zmnk", 1 of 1 required successes so far +[AfterEach] [sig-apps] ReplicaSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:30:38.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "replicaset-3561" for this suite. 
+Jun 24 16:30:44.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:30:44.294: INFO: namespace replicaset-3561 deletion completed in 6.106937413s + +• [SLOW TEST:16.190 seconds] +[sig-apps] ReplicaSet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + should serve a basic image on each replica with a public image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] Downward API volume + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:30:44.295: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 24 16:30:44.332: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69f5b219-969d-11e9-8bcb-526dc0a539dd" in namespace "downward-api-1443" to be "success or failure" +Jun 24 16:30:44.336: INFO: Pod "downwardapi-volume-69f5b219-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.965195ms +Jun 24 16:30:46.341: INFO: Pod "downwardapi-volume-69f5b219-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008281932s +Jun 24 16:30:48.345: INFO: Pod "downwardapi-volume-69f5b219-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01256802s +STEP: Saw pod success +Jun 24 16:30:48.345: INFO: Pod "downwardapi-volume-69f5b219-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:30:48.349: INFO: Trying to get logs from node minion pod downwardapi-volume-69f5b219-969d-11e9-8bcb-526dc0a539dd container client-container: +STEP: delete the pod +Jun 24 16:30:48.373: INFO: Waiting for pod downwardapi-volume-69f5b219-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:30:48.378: INFO: Pod downwardapi-volume-69f5b219-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:30:48.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-1443" for this suite. 
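+
+The downward API volume case above is the complement of the earlier projected one: here the container *does* set a CPU limit, and the test expects the mounted file to report that limit rather than node allocatable. A sketch (names, image, and the limit value are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: downwardapi-volume-demo      # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: client-container
+    image: busybox:1.29
+    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
+    resources:
+      limits:
+        cpu: "1"                     # with a limit set, the file reports it
+    volumeMounts:
+    - name: podinfo
+      mountPath: /etc/podinfo
+  volumes:
+  - name: podinfo
+    downwardAPI:
+      items:
+      - path: cpu_limit
+        resourceFieldRef:
+          containerName: client-container
+          resource: limits.cpu
+```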
+Jun 24 16:30:54.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:30:54.498: INFO: namespace downward-api-1443 deletion completed in 6.116509154s + +• [SLOW TEST:10.203 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's cpu limit [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Subpath Atomic writer volumes + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:30:54.498: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename subpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 +STEP: Setting up data +[It] should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod pod-subpath-test-downwardapi-ld88 +STEP: Creating a pod to test atomic-volume-subpath +Jun 24 16:30:54.553: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ld88" in namespace "subpath-7091" to be "success or failure" +Jun 24 16:30:54.556: INFO: Pod "pod-subpath-test-downwardapi-ld88": Phase="Pending", Reason="", readiness=false. Elapsed: 3.540142ms +Jun 24 16:30:56.560: INFO: Pod "pod-subpath-test-downwardapi-ld88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007378343s +Jun 24 16:30:58.565: INFO: Pod "pod-subpath-test-downwardapi-ld88": Phase="Running", Reason="", readiness=true. Elapsed: 4.012050298s +Jun 24 16:31:00.569: INFO: Pod "pod-subpath-test-downwardapi-ld88": Phase="Running", Reason="", readiness=true. Elapsed: 6.016199613s +Jun 24 16:31:02.573: INFO: Pod "pod-subpath-test-downwardapi-ld88": Phase="Running", Reason="", readiness=true. Elapsed: 8.020189326s +Jun 24 16:31:04.577: INFO: Pod "pod-subpath-test-downwardapi-ld88": Phase="Running", Reason="", readiness=true. Elapsed: 10.024205894s +Jun 24 16:31:06.581: INFO: Pod "pod-subpath-test-downwardapi-ld88": Phase="Running", Reason="", readiness=true. Elapsed: 12.027997124s +Jun 24 16:31:08.585: INFO: Pod "pod-subpath-test-downwardapi-ld88": Phase="Running", Reason="", readiness=true. Elapsed: 14.031753928s +Jun 24 16:31:10.590: INFO: Pod "pod-subpath-test-downwardapi-ld88": Phase="Running", Reason="", readiness=true. Elapsed: 16.037510534s +Jun 24 16:31:12.595: INFO: Pod "pod-subpath-test-downwardapi-ld88": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.041736645s +Jun 24 16:31:14.599: INFO: Pod "pod-subpath-test-downwardapi-ld88": Phase="Running", Reason="", readiness=true. Elapsed: 20.045867195s +Jun 24 16:31:16.603: INFO: Pod "pod-subpath-test-downwardapi-ld88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.050011993s +STEP: Saw pod success +Jun 24 16:31:16.603: INFO: Pod "pod-subpath-test-downwardapi-ld88" satisfied condition "success or failure" +Jun 24 16:31:16.607: INFO: Trying to get logs from node minion pod pod-subpath-test-downwardapi-ld88 container test-container-subpath-downwardapi-ld88: +STEP: delete the pod +Jun 24 16:31:16.627: INFO: Waiting for pod pod-subpath-test-downwardapi-ld88 to disappear +Jun 24 16:31:16.629: INFO: Pod pod-subpath-test-downwardapi-ld88 no longer exists +STEP: Deleting pod pod-subpath-test-downwardapi-ld88 +Jun 24 16:31:16.629: INFO: Deleting pod "pod-subpath-test-downwardapi-ld88" in namespace "subpath-7091" +[AfterEach] [sig-storage] Subpath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:31:16.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "subpath-7091" for this suite. +Jun 24 16:31:22.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:31:22.742: INFO: namespace subpath-7091 deletion completed in 6.105527884s + +• [SLOW TEST:28.244 seconds] +[sig-storage] Subpath +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + Atomic writer volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 + should support subpaths with downward pod [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] Projected downwardAPI + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:31:22.742: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 24 16:31:22.795: INFO: Waiting up to 5m0s for pod "downwardapi-volume-80e2b32e-969d-11e9-8bcb-526dc0a539dd" in namespace "projected-176" to be "success or failure" +Jun 24 16:31:22.804: INFO: Pod 
"downwardapi-volume-80e2b32e-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.400997ms +Jun 24 16:31:24.808: INFO: Pod "downwardapi-volume-80e2b32e-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013373729s +STEP: Saw pod success +Jun 24 16:31:24.808: INFO: Pod "downwardapi-volume-80e2b32e-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:31:24.812: INFO: Trying to get logs from node minion pod downwardapi-volume-80e2b32e-969d-11e9-8bcb-526dc0a539dd container client-container: +STEP: delete the pod +Jun 24 16:31:24.836: INFO: Waiting for pod downwardapi-volume-80e2b32e-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:31:24.839: INFO: Pod downwardapi-volume-80e2b32e-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:31:24.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-176" for this suite. +Jun 24 16:31:30.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:31:30.932: INFO: namespace projected-176 deletion completed in 6.089771995s + +• [SLOW TEST:8.189 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[k8s.io] Pods + should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:31:30.932: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Jun 24 16:31:33.509: INFO: Successfully updated pod "pod-update-85c478da-969d-11e9-8bcb-526dc0a539dd" +STEP: verifying the updated pod is in kubernetes +Jun 24 16:31:33.517: INFO: Pod update OK +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:31:33.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1198" 
for this suite. +Jun 24 16:31:49.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:31:49.623: INFO: namespace pods-1198 deletion completed in 16.101027795s + +• [SLOW TEST:18.691 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:31:49.623: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-90e6245f-969d-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume configMaps +Jun 24 16:31:49.662: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-90e6a839-969d-11e9-8bcb-526dc0a539dd" in namespace "projected-9193" to be "success or failure" +Jun 24 16:31:49.669: INFO: Pod "pod-projected-configmaps-90e6a839-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.521233ms +Jun 24 16:31:51.673: INFO: Pod "pod-projected-configmaps-90e6a839-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010490835s +STEP: Saw pod success +Jun 24 16:31:51.673: INFO: Pod "pod-projected-configmaps-90e6a839-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:31:51.676: INFO: Trying to get logs from node minion pod pod-projected-configmaps-90e6a839-969d-11e9-8bcb-526dc0a539dd container projected-configmap-volume-test: +STEP: delete the pod +Jun 24 16:31:51.703: INFO: Waiting for pod pod-projected-configmaps-90e6a839-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:31:51.705: INFO: Pod pod-projected-configmaps-90e6a839-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:31:51.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9193" for this suite. 
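+
+The projected ConfigMap case above mounts the same ConfigMap into two volumes of one pod and reads it through both mounts. A minimal sketch, assuming a ConfigMap named `demo-config` with a key `data-1` already exists in the namespace (both names are invented for the sketch):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: projected-configmap-demo     # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: projected-configmap-volume-test
+    image: busybox:1.29
+    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
+    volumeMounts:
+    - name: cm-one
+      mountPath: /etc/cm-one
+    - name: cm-two
+      mountPath: /etc/cm-two
+  volumes:
+  - name: cm-one
+    projected:
+      sources:
+      - configMap:
+          name: demo-config          # assumed pre-existing ConfigMap
+  - name: cm-two
+    projected:
+      sources:
+      - configMap:
+          name: demo-config          # same ConfigMap, second mount point
+```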
+Jun 24 16:31:57.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:31:57.800: INFO: namespace projected-9193 deletion completed in 6.09124445s + +• [SLOW TEST:8.177 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Variable Expansion + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:31:57.800: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test env composition +Jun 24 16:31:57.847: INFO: Waiting up to 5m0s for pod "var-expansion-95c702fe-969d-11e9-8bcb-526dc0a539dd" in namespace "var-expansion-4725" to be "success or failure" +Jun 24 16:31:57.856: INFO: Pod "var-expansion-95c702fe-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.541343ms +Jun 24 16:31:59.860: INFO: Pod "var-expansion-95c702fe-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013526597s +STEP: Saw pod success +Jun 24 16:31:59.860: INFO: Pod "var-expansion-95c702fe-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:31:59.863: INFO: Trying to get logs from node minion pod var-expansion-95c702fe-969d-11e9-8bcb-526dc0a539dd container dapi-container: +STEP: delete the pod +Jun 24 16:31:59.885: INFO: Waiting for pod var-expansion-95c702fe-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:31:59.888: INFO: Pod var-expansion-95c702fe-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:31:59.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-4725" for this suite. 
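+
+The variable-expansion case above defines one environment variable in terms of another using the `$(VAR)` syntax, which the kubelet expands when the container starts. A sketch (variable names and values are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: var-expansion-demo           # illustrative name
+spec:
+  restartPolicy: Never
+  containers:
+  - name: dapi-container
+    image: busybox:1.29
+    command: ["sh", "-c", "env"]
+    env:
+    - name: FOO
+      value: "foo-value"
+    - name: COMPOSED
+      value: "prefix-$(FOO)-suffix"  # expands to prefix-foo-value-suffix
+```
+
+Note that a variable can only reference variables defined earlier in the same `env` list; an unresolvable `$(VAR)` is left as literal text.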
+Jun 24 16:32:05.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:32:05.989: INFO: namespace var-expansion-4725 deletion completed in 6.09748296s + +• [SLOW TEST:8.189 seconds] +[k8s.io] Variable Expansion +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should allow composing env vars into new env vars [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update + should support rolling-update to same image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:32:05.990: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl rolling-update + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 +[It] should support rolling-update to same image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 24 16:32:06.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4103' +Jun 24 16:32:06.720: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" +Jun 24 16:32:06.720: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" +STEP: verifying the rc e2e-test-nginx-rc was created +STEP: rolling-update to same image controller +Jun 24 16:32:06.735: INFO: scanned /root for discovery docs: +Jun 24 16:32:06.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4103' +Jun 24 16:32:22.566: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" +Jun 24 16:32:22.566: INFO: stdout: "Created e2e-test-nginx-rc-aeef45e2b1f72c9058a8dcf26ec5bfac\nScaling up e2e-test-nginx-rc-aeef45e2b1f72c9058a8dcf26ec5bfac from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-aeef45e2b1f72c9058a8dcf26ec5bfac up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-aeef45e2b1f72c9058a8dcf26ec5bfac to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" +Jun 24 16:32:22.566: INFO: stdout: "Created e2e-test-nginx-rc-aeef45e2b1f72c9058a8dcf26ec5bfac\nScaling up e2e-test-nginx-rc-aeef45e2b1f72c9058a8dcf26ec5bfac from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-aeef45e2b1f72c9058a8dcf26ec5bfac up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-aeef45e2b1f72c9058a8dcf26ec5bfac to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" +STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. +Jun 24 16:32:22.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-4103' +Jun 24 16:32:22.697: INFO: stderr: "" +Jun 24 16:32:22.697: INFO: stdout: "e2e-test-nginx-rc-aeef45e2b1f72c9058a8dcf26ec5bfac-9v6x6 " +Jun 24 16:32:22.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods e2e-test-nginx-rc-aeef45e2b1f72c9058a8dcf26ec5bfac-9v6x6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4103' +Jun 24 16:32:22.789: INFO: stderr: "" +Jun 24 16:32:22.789: INFO: stdout: "true" +Jun 24 16:32:22.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods e2e-test-nginx-rc-aeef45e2b1f72c9058a8dcf26ec5bfac-9v6x6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4103' +Jun 24 16:32:22.884: INFO: stderr: "" +Jun 24 16:32:22.884: INFO: stdout: "docker.io/library/nginx:1.14-alpine" +Jun 24 16:32:22.884: INFO: e2e-test-nginx-rc-aeef45e2b1f72c9058a8dcf26ec5bfac-9v6x6 is verified up and running +[AfterEach] [k8s.io] Kubectl rolling-update + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 +Jun 24 16:32:22.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete rc e2e-test-nginx-rc --namespace=kubectl-4103' +Jun 24 16:32:22.986: INFO: stderr: "" +Jun 24 16:32:22.986: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:32:22.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4103" for this suite. 
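+
+In the rolling-update case above, `kubectl run --generator=run/v1` creates a ReplicationController, which `kubectl rolling-update` then replaces with an identically imaged one. The declarative equivalent of the controller it starts from would look roughly like the sketch below (only the name, the `run` label key, and the image appear in the log; the rest is an assumption):
+
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+  name: e2e-test-nginx-rc
+spec:
+  replicas: 1
+  selector:
+    run: e2e-test-nginx-rc
+  template:
+    metadata:
+      labels:
+        run: e2e-test-nginx-rc
+    spec:
+      containers:
+      - name: e2e-test-nginx-rc
+        image: docker.io/library/nginx:1.14-alpine
+```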
+Jun 24 16:32:29.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:32:29.101: INFO: namespace kubectl-4103 deletion completed in 6.105726091s + +• [SLOW TEST:23.111 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl rolling-update + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support rolling-update to same image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Proxy server + should support proxy with --port 0 [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:32:29.102: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should support proxy with --port 0 [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: starting the proxy server +Jun 24 16:32:29.142: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-766262415 proxy -p 0 --disable-filter' +STEP: curling proxy /api/ output +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:32:29.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-736" for this suite. 
+Jun 24 16:32:35.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:32:35.333: INFO: namespace kubectl-736 deletion completed in 6.09933232s + +• [SLOW TEST:6.232 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Proxy server + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support proxy with --port 0 [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Services + should serve a basic endpoint from pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:32:35.337: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename services +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:86 +[It] should serve a basic endpoint from pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating service endpoint-test2 in namespace services-8680 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8680 to expose endpoints map[] +Jun 24 16:32:35.389: INFO: successfully validated that service endpoint-test2 in namespace services-8680 exposes endpoints map[] (4.934356ms elapsed) +STEP: Creating pod pod1 in namespace services-8680 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8680 to expose endpoints map[pod1:[80]] +Jun 24 16:32:37.428: INFO: successfully validated that service endpoint-test2 in namespace services-8680 exposes endpoints map[pod1:[80]] (2.033544328s elapsed) +STEP: Creating pod pod2 in namespace services-8680 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8680 to expose endpoints map[pod1:[80] pod2:[80]] +Jun 24 16:32:39.476: INFO: successfully validated that service endpoint-test2 in namespace services-8680 exposes endpoints map[pod1:[80] pod2:[80]] (2.041484862s elapsed) +STEP: Deleting pod pod1 in namespace services-8680 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8680 to expose endpoints map[pod2:[80]] +Jun 24 16:32:40.506: INFO: successfully validated that service endpoint-test2 in namespace services-8680 exposes endpoints map[pod2:[80]] (1.020626176s elapsed) +STEP: Deleting pod pod2 in namespace services-8680 +STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8680 to expose endpoints map[] +Jun 24 16:32:40.524: INFO: successfully validated that service endpoint-test2 in namespace services-8680 exposes endpoints 
map[] (10.711003ms elapsed) +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:32:40.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "services-8680" for this suite. +Jun 24 16:32:46.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:32:46.658: INFO: namespace services-8680 deletion completed in 6.108941568s +[AfterEach] [sig-network] Services + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91 + +• [SLOW TEST:11.322 seconds] +[sig-network] Services +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should serve a basic endpoint from pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] Projected downwardAPI + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:32:46.659: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 +[It] should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 24 16:32:46.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b2e5f2a8-969d-11e9-8bcb-526dc0a539dd" in namespace "projected-5165" to be "success or failure" +Jun 24 16:32:46.717: INFO: Pod "downwardapi-volume-b2e5f2a8-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140552ms +Jun 24 16:32:48.721: INFO: Pod "downwardapi-volume-b2e5f2a8-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008204815s +STEP: Saw pod success +Jun 24 16:32:48.721: INFO: Pod "downwardapi-volume-b2e5f2a8-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:32:48.725: INFO: Trying to get logs from node minion pod downwardapi-volume-b2e5f2a8-969d-11e9-8bcb-526dc0a539dd container client-container: +STEP: delete the pod +Jun 24 16:32:48.749: INFO: Waiting for pod downwardapi-volume-b2e5f2a8-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:32:48.751: INFO: Pod downwardapi-volume-b2e5f2a8-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected downwardAPI + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:32:48.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5165" for this suite. +Jun 24 16:32:54.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:32:54.848: INFO: namespace projected-5165 deletion completed in 6.093272616s + +• [SLOW TEST:8.190 seconds] +[sig-storage] Projected downwardAPI +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 + should provide podname only [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:32:54.848: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-map-b7c7d05b-969d-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume configMaps +Jun 24 16:32:54.895: INFO: Waiting up to 5m0s for pod "pod-configmaps-b7c853b4-969d-11e9-8bcb-526dc0a539dd" in namespace "configmap-2236" to be "success or failure" +Jun 24 16:32:54.898: INFO: Pod "pod-configmaps-b7c853b4-969d-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.860296ms +Jun 24 16:32:56.902: INFO: Pod "pod-configmaps-b7c853b4-969d-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.00683572s +STEP: Saw pod success +Jun 24 16:32:56.903: INFO: Pod "pod-configmaps-b7c853b4-969d-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:32:56.906: INFO: Trying to get logs from node minion pod pod-configmaps-b7c853b4-969d-11e9-8bcb-526dc0a539dd container configmap-volume-test: +STEP: delete the pod +Jun 24 16:32:56.927: INFO: Waiting for pod pod-configmaps-b7c853b4-969d-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:32:56.936: INFO: Pod pod-configmaps-b7c853b4-969d-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:32:56.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-2236" for this suite. +Jun 24 16:33:02.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:33:03.040: INFO: namespace configmap-2236 deletion completed in 6.10193783s + +• [SLOW TEST:8.192 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job + should create a job from an image, then delete the job [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:33:03.052: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should create a job from an image, then delete the job [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: executing a command with run --rm and attach with stdin +Jun 24 16:33:03.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 --namespace=kubectl-7435 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' +Jun 24 16:33:05.278: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n" +Jun 24 16:33:05.278: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" +STEP: verifying the job e2e-test-rm-busybox-job was deleted +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:33:07.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-7435" for this suite. +Jun 24 16:33:13.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:33:13.389: INFO: namespace kubectl-7435 deletion completed in 6.099603915s + +• [SLOW TEST:10.337 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl run --rm job + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should create a job from an image, then delete the job [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[k8s.io] Probing container + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:33:13.389: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename container-probe +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 +[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating pod liveness-exec in namespace container-probe-3166 +Jun 24 16:33:15.438: INFO: Started pod liveness-exec in namespace container-probe-3166 +STEP: checking the pod's current state and verifying that restartCount is present +Jun 24 16:33:15.442: INFO: Initial restart count of pod liveness-exec is 0 +STEP: deleting the pod +[AfterEach] [k8s.io] Probing container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:37:15.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-probe-3166" for this suite. 
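+
+The container-probing case above runs for roughly four minutes to confirm that a pod whose exec liveness probe keeps succeeding is never restarted. A sketch of such a pod (probe timings and image are assumptions; the log records only the pod name):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: liveness-exec
+spec:
+  containers:
+  - name: liveness
+    image: busybox:1.29
+    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
+    livenessProbe:
+      exec:
+        command: ["cat", "/tmp/health"]   # file always exists, so the probe
+      initialDelaySeconds: 5              # always succeeds and restartCount
+      periodSeconds: 5                    # stays at 0
+```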
+Jun 24 16:37:21.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:37:22.068: INFO: namespace container-probe-3166 deletion completed in 6.102282723s + +• [SLOW TEST:248.678 seconds] +[k8s.io] Probing container +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition + creating/deleting custom resource definition objects works [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:37:22.076: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename custom-resource-definition +STEP: Waiting for a default service account to be provisioned in namespace +[It] creating/deleting custom resource definition objects works [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 16:37:22.119: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +[AfterEach] [sig-api-machinery] CustomResourceDefinition resources + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:37:23.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "custom-resource-definition-7449" for this suite. 
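+
+The CustomResourceDefinition case above simply registers and removes a definition. On Kubernetes v1.14 that means the `apiextensions.k8s.io/v1beta1` API; a minimal illustrative definition might look like this (the group and names are invented for the sketch):
+
+```yaml
+apiVersion: apiextensions.k8s.io/v1beta1  # v1beta1 was current for Kubernetes v1.14
+kind: CustomResourceDefinition
+metadata:
+  name: testcrds.example.com              # must be <plural>.<group>
+spec:
+  group: example.com
+  version: v1
+  scope: Namespaced
+  names:
+    plural: testcrds
+    singular: testcrd
+    kind: TestCrd
+```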
+Jun 24 16:37:29.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:37:29.325: INFO: namespace custom-resource-definition-7449 deletion completed in 6.089689312s + +• [SLOW TEST:7.250 seconds] +[sig-api-machinery] CustomResourceDefinition resources +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + Simple CustomResourceDefinition + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 + creating/deleting custom resource definition objects works [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:37:29.326: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:37:31.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-8044" for this suite. 
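+
+The hostAliases case above asks the kubelet to append extra entries to the pod's `/etc/hosts`. A sketch (the IP and hostnames are illustrative):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: hostaliases-demo               # illustrative name
+spec:
+  restartPolicy: Never
+  hostAliases:
+  - ip: "127.0.0.1"
+    hostnames:
+    - "foo.local"
+    - "bar.local"
+  containers:
+  - name: busybox
+    image: busybox:1.29
+    command: ["cat", "/etc/hosts"]     # the appended entries should appear here
+```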
+Jun 24 16:38:09.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:38:09.488: INFO: namespace kubelet-test-8044 deletion completed in 38.089544559s + +• [SLOW TEST:40.163 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a busybox Pod with hostAliases + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 + should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] HostPath + should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] HostPath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:38:09.489: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename hostpath +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] HostPath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 +[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test hostPath mode +Jun 24 16:38:09.533: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2841" to be "success or failure" +Jun 24 16:38:09.538: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.023312ms +Jun 24 16:38:11.542: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009083s +Jun 24 16:38:13.546: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013243814s +STEP: Saw pod success +Jun 24 16:38:13.546: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" +Jun 24 16:38:13.551: INFO: Trying to get logs from node minion pod pod-host-path-test container test-container-1: +STEP: delete the pod +Jun 24 16:38:13.572: INFO: Waiting for pod pod-host-path-test to disappear +Jun 24 16:38:13.577: INFO: Pod pod-host-path-test no longer exists +[AfterEach] [sig-storage] HostPath + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:38:13.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "hostpath-2841" for this suite. 
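+
+The hostPath case above mounts a directory from the node and checks the volume's mode bits from inside the container. A sketch (the host path and check command are assumptions; the pod and container names match the log):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-host-path-test
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container-1
+    image: busybox:1.29
+    command: ["sh", "-c", "ls -ld /test-volume"]   # inspect the volume's mode
+    volumeMounts:
+    - name: test-volume
+      mountPath: /test-volume
+  volumes:
+  - name: test-volume
+    hostPath:
+      path: /tmp/hostpath-demo         # illustrative directory on the node
+      type: DirectoryOrCreate
+```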
+Jun 24 16:38:19.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:38:19.693: INFO: namespace hostpath-2841 deletion completed in 6.103584855s + +• [SLOW TEST:10.205 seconds] +[sig-storage] HostPath +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 + should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide DNS for services [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] DNS + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:38:19.693: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide DNS for services [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a test headless service +STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8092.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8092.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8092.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8092.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8092.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8092.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8092.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8092.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8092.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8092.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8092.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8092.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8092.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 203.198.241.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.241.198.203_udp@PTR;check="$$(dig +tcp +noall +answer +search 203.198.241.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.241.198.203_tcp@PTR;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8092.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8092.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8092.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8092.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8092.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8092.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8092.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8092.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8092.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8092.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8092.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8092.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8092.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 203.198.241.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.241.198.203_udp@PTR;check="$$(dig +tcp +noall +answer +search 203.198.241.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.241.198.203_tcp@PTR;sleep 1; done + +STEP: creating a pod to probe DNS +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Jun 24 16:38:23.791: INFO: Unable to read wheezy_udp@dns-test-service.dns-8092.svc.cluster.local from pod dns-8092/dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd: the server could not find the requested resource (get pods dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd) +Jun 24 16:38:23.796: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8092.svc.cluster.local from pod dns-8092/dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd: the server could not find the requested resource (get pods dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd) +Jun 24 16:38:23.801: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8092.svc.cluster.local from pod dns-8092/dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd: the server could not find the requested resource (get pods dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd) +Jun 24 16:38:23.806: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8092.svc.cluster.local from pod dns-8092/dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd: the server could not find the requested resource (get pods dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd) +Jun 24 16:38:23.842: INFO: Unable to read jessie_udp@dns-test-service.dns-8092.svc.cluster.local from pod dns-8092/dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd: the server could not find the requested resource (get pods dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd) +Jun 24 16:38:23.848: INFO: Unable to read jessie_tcp@dns-test-service.dns-8092.svc.cluster.local from pod dns-8092/dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd: the server could not find the requested resource (get pods dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd) +Jun 24 16:38:23.854: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8092.svc.cluster.local from pod dns-8092/dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd: the server could not find the requested resource (get pods dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd) +Jun 24 16:38:23.858: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8092.svc.cluster.local from pod dns-8092/dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd: the server could not find the requested resource (get pods dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd) +Jun 24 16:38:23.889: INFO: Lookups using dns-8092/dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd failed for: [wheezy_udp@dns-test-service.dns-8092.svc.cluster.local wheezy_tcp@dns-test-service.dns-8092.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8092.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8092.svc.cluster.local jessie_udp@dns-test-service.dns-8092.svc.cluster.local jessie_tcp@dns-test-service.dns-8092.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8092.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8092.svc.cluster.local] + +Jun 24 16:38:29.008: INFO: DNS probes using dns-8092/dns-test-796a8baf-969e-11e9-8bcb-526dc0a539dd succeeded + +STEP: deleting the pod +STEP: deleting the test service +STEP: deleting the test headless service +[AfterEach] [sig-network] DNS + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:38:29.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace 
"dns-8092" for this suite. +Jun 24 16:38:35.097: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:38:35.184: INFO: namespace dns-8092 deletion completed in 6.096737947s + +• [SLOW TEST:15.491 seconds] +[sig-network] DNS +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide DNS for services [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[k8s.io] Pods + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:38:35.185: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: updating the pod +Jun 24 16:38:37.753: INFO: Successfully updated pod "pod-update-activedeadlineseconds-82a298c5-969e-11e9-8bcb-526dc0a539dd" +Jun 24 16:38:37.753: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-82a298c5-969e-11e9-8bcb-526dc0a539dd" in namespace "pods-626" to be "terminated due to deadline exceeded" +Jun 24 16:38:37.760: INFO: Pod "pod-update-activedeadlineseconds-82a298c5-969e-11e9-8bcb-526dc0a539dd": Phase="Running", Reason="", readiness=true. Elapsed: 6.845753ms +Jun 24 16:38:39.764: INFO: Pod "pod-update-activedeadlineseconds-82a298c5-969e-11e9-8bcb-526dc0a539dd": Phase="Running", Reason="", readiness=true. Elapsed: 2.010758118s +Jun 24 16:38:41.769: INFO: Pod "pod-update-activedeadlineseconds-82a298c5-969e-11e9-8bcb-526dc0a539dd": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.0152476s +Jun 24 16:38:41.769: INFO: Pod "pod-update-activedeadlineseconds-82a298c5-969e-11e9-8bcb-526dc0a539dd" satisfied condition "terminated due to deadline exceeded" +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:38:41.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-626" for this suite. 
+Jun 24 16:38:47.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:38:47.875: INFO: namespace pods-626 deletion completed in 6.101681947s + +• [SLOW TEST:12.690 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Update Demo + should do a rolling update of a replication controller [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:38:47.877: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Update Demo + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:265 +[It] should do a rolling update of a replication controller [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the initial replication controller +Jun 24 16:38:47.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-3627' +Jun 24 16:38:48.193: INFO: stderr: "" +Jun 24 16:38:48.193: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jun 24 16:38:48.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3627' +Jun 24 16:38:48.317: INFO: stderr: "" +Jun 24 16:38:48.317: INFO: stdout: "update-demo-nautilus-bps5n update-demo-nautilus-ldqfv " +Jun 24 16:38:48.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-bps5n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3627' +Jun 24 16:38:48.406: INFO: stderr: "" +Jun 24 16:38:48.406: INFO: stdout: "" +Jun 24 16:38:48.406: INFO: update-demo-nautilus-bps5n is created but not running +Jun 24 16:38:53.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3627' +Jun 24 16:38:53.508: INFO: stderr: "" +Jun 24 16:38:53.508: INFO: stdout: "update-demo-nautilus-bps5n update-demo-nautilus-ldqfv " +Jun 24 16:38:53.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-bps5n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3627' +Jun 24 16:38:53.598: INFO: stderr: "" +Jun 24 16:38:53.598: INFO: stdout: "true" +Jun 24 16:38:53.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-bps5n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3627' +Jun 24 16:38:53.691: INFO: stderr: "" +Jun 24 16:38:53.691: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 24 16:38:53.691: INFO: validating pod update-demo-nautilus-bps5n +Jun 24 16:38:53.699: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 24 16:38:53.699: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . +Jun 24 16:38:53.699: INFO: update-demo-nautilus-bps5n is verified up and running +Jun 24 16:38:53.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-ldqfv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3627' +Jun 24 16:38:53.792: INFO: stderr: "" +Jun 24 16:38:53.792: INFO: stdout: "true" +Jun 24 16:38:53.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-nautilus-ldqfv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3627' +Jun 24 16:38:53.884: INFO: stderr: "" +Jun 24 16:38:53.884: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" +Jun 24 16:38:53.884: INFO: validating pod update-demo-nautilus-ldqfv +Jun 24 16:38:53.894: INFO: got data: { + "image": "nautilus.jpg" +} + +Jun 24 16:38:53.894: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
+Jun 24 16:38:53.894: INFO: update-demo-nautilus-ldqfv is verified up and running +STEP: rolling-update to new replication controller +Jun 24 16:38:53.896: INFO: scanned /root for discovery docs: +Jun 24 16:38:53.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3627' +Jun 24 16:39:16.434: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" +Jun 24 16:39:16.434: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" +STEP: waiting for all containers in name=update-demo pods to come up. +Jun 24 16:39:16.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3627' +Jun 24 16:39:16.544: INFO: stderr: "" +Jun 24 16:39:16.544: INFO: stdout: "update-demo-kitten-q4q89 update-demo-kitten-wmqv9 " +Jun 24 16:39:16.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-kitten-q4q89 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3627' +Jun 24 16:39:16.670: INFO: stderr: "" +Jun 24 16:39:16.670: INFO: stdout: "true" +Jun 24 16:39:16.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-kitten-q4q89 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3627' +Jun 24 16:39:16.756: INFO: stderr: "" +Jun 24 16:39:16.756: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" +Jun 24 16:39:16.756: INFO: validating pod update-demo-kitten-q4q89 +Jun 24 16:39:16.779: INFO: got data: { + "image": "kitten.jpg" +} + +Jun 24 16:39:16.779: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . +Jun 24 16:39:16.779: INFO: update-demo-kitten-q4q89 is verified up and running +Jun 24 16:39:16.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-kitten-wmqv9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3627' +Jun 24 16:39:16.875: INFO: stderr: "" +Jun 24 16:39:16.875: INFO: stdout: "true" +Jun 24 16:39:16.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods update-demo-kitten-wmqv9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3627' +Jun 24 16:39:16.963: INFO: stderr: "" +Jun 24 16:39:16.964: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" +Jun 24 16:39:16.964: INFO: validating pod update-demo-kitten-wmqv9 +Jun 24 16:39:16.973: INFO: got data: { + "image": "kitten.jpg" +} + +Jun 24 16:39:16.973: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . +Jun 24 16:39:16.973: INFO: update-demo-kitten-wmqv9 is verified up and running +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:39:16.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-3627" for this suite. +Jun 24 16:39:38.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:39:39.072: INFO: namespace kubectl-3627 deletion completed in 22.095525269s + +• [SLOW TEST:51.195 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Update Demo + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should do a rolling update of a replication controller [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[k8s.io] Kubelet when scheduling a read only busybox container + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:39:39.073: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:39:41.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-652" for this suite. 
+Jun 24 16:40:31.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:40:31.270: INFO: namespace kubelet-test-652 deletion completed in 50.113675238s + +• [SLOW TEST:52.198 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a read only busybox container + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 + should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-api-machinery] Watchers + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:40:31.271: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a watch on configmaps with label A +STEP: creating a watch on configmaps with label B +STEP: creating a watch on configmaps with label A or B +STEP: creating a configmap with label A and ensuring the correct watchers observe the notification +Jun 24 16:40:31.309: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8046,SelfLink:/api/v1/namespaces/watch-8046/configmaps/e2e-watch-test-configmap-a,UID:c7d41ef8-969e-11e9-b70d-fa163ef83c94,ResourceVersion:13887,Generation:0,CreationTimestamp:2019-06-24 16:40:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 24 16:40:31.309: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8046,SelfLink:/api/v1/namespaces/watch-8046/configmaps/e2e-watch-test-configmap-a,UID:c7d41ef8-969e-11e9-b70d-fa163ef83c94,ResourceVersion:13887,Generation:0,CreationTimestamp:2019-06-24 16:40:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +STEP: modifying configmap A and ensuring the correct watchers 
observe the notification +Jun 24 16:40:41.316: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8046,SelfLink:/api/v1/namespaces/watch-8046/configmaps/e2e-watch-test-configmap-a,UID:c7d41ef8-969e-11e9-b70d-fa163ef83c94,ResourceVersion:13903,Generation:0,CreationTimestamp:2019-06-24 16:40:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +Jun 24 16:40:41.316: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8046,SelfLink:/api/v1/namespaces/watch-8046/configmaps/e2e-watch-test-configmap-a,UID:c7d41ef8-969e-11e9-b70d-fa163ef83c94,ResourceVersion:13903,Generation:0,CreationTimestamp:2019-06-24 16:40:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +STEP: modifying configmap A again and ensuring the correct watchers observe the notification +Jun 24 16:40:51.326: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8046,SelfLink:/api/v1/namespaces/watch-8046/configmaps/e2e-watch-test-configmap-a,UID:c7d41ef8-969e-11e9-b70d-fa163ef83c94,ResourceVersion:13919,Generation:0,CreationTimestamp:2019-06-24 16:40:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 24 16:40:51.326: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8046,SelfLink:/api/v1/namespaces/watch-8046/configmaps/e2e-watch-test-configmap-a,UID:c7d41ef8-969e-11e9-b70d-fa163ef83c94,ResourceVersion:13919,Generation:0,CreationTimestamp:2019-06-24 16:40:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +STEP: deleting configmap A and ensuring the correct watchers observe the notification +Jun 24 16:41:01.334: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8046,SelfLink:/api/v1/namespaces/watch-8046/configmaps/e2e-watch-test-configmap-a,UID:c7d41ef8-969e-11e9-b70d-fa163ef83c94,ResourceVersion:13935,Generation:0,CreationTimestamp:2019-06-24 16:40:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 24 16:41:01.334: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8046,SelfLink:/api/v1/namespaces/watch-8046/configmaps/e2e-watch-test-configmap-a,UID:c7d41ef8-969e-11e9-b70d-fa163ef83c94,ResourceVersion:13935,Generation:0,CreationTimestamp:2019-06-24 16:40:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +STEP: creating a configmap with label B and ensuring the correct watchers observe the notification +Jun 24 16:41:11.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8046,SelfLink:/api/v1/namespaces/watch-8046/configmaps/e2e-watch-test-configmap-b,UID:dfaff19f-969e-11e9-b70d-fa163ef83c94,ResourceVersion:13952,Generation:0,CreationTimestamp:2019-06-24 16:41:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 24 16:41:11.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8046,SelfLink:/api/v1/namespaces/watch-8046/configmaps/e2e-watch-test-configmap-b,UID:dfaff19f-969e-11e9-b70d-fa163ef83c94,ResourceVersion:13952,Generation:0,CreationTimestamp:2019-06-24 16:41:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +STEP: deleting configmap B and ensuring the correct watchers observe the notification +Jun 24 16:41:21.348: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8046,SelfLink:/api/v1/namespaces/watch-8046/configmaps/e2e-watch-test-configmap-b,UID:dfaff19f-969e-11e9-b70d-fa163ef83c94,ResourceVersion:13968,Generation:0,CreationTimestamp:2019-06-24 16:41:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 24 16:41:21.348: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8046,SelfLink:/api/v1/namespaces/watch-8046/configmaps/e2e-watch-test-configmap-b,UID:dfaff19f-969e-11e9-b70d-fa163ef83c94,ResourceVersion:13968,Generation:0,CreationTimestamp:2019-06-24 16:41:11 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:41:31.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-8046" for this suite. +Jun 24 16:41:37.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:41:37.460: INFO: namespace watch-8046 deletion completed in 6.10754625s + +• [SLOW TEST:66.190 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should observe add, update, and delete watch notifications on configmaps [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir wrapper volumes + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:41:37.462: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename emptydir-wrapper +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating 50 configmaps +STEP: Creating RC which spawns configmap-volume pods +Jun 24 16:41:37.748: INFO: Pod name wrapped-volume-race-ef6bd31e-969e-11e9-8bcb-526dc0a539dd: Found 0 pods out of 5 +Jun 24 16:41:42.758: INFO: Pod name wrapped-volume-race-ef6bd31e-969e-11e9-8bcb-526dc0a539dd: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-ef6bd31e-969e-11e9-8bcb-526dc0a539dd in namespace emptydir-wrapper-5116, will wait for the garbage collector to delete the pods +Jun 24 16:41:52.859: INFO: Deleting ReplicationController wrapped-volume-race-ef6bd31e-969e-11e9-8bcb-526dc0a539dd took: 11.535545ms +Jun 24 16:41:53.159: INFO: Terminating ReplicationController wrapped-volume-race-ef6bd31e-969e-11e9-8bcb-526dc0a539dd pods took: 300.417016ms +STEP: Creating RC which spawns configmap-volume pods +Jun 24 16:42:36.875: INFO: Pod name wrapped-volume-race-12a9e1c6-969f-11e9-8bcb-526dc0a539dd: Found 0 pods out of 5 +Jun 24 16:42:41.883: INFO: Pod name wrapped-volume-race-12a9e1c6-969f-11e9-8bcb-526dc0a539dd: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController 
wrapped-volume-race-12a9e1c6-969f-11e9-8bcb-526dc0a539dd in namespace emptydir-wrapper-5116, will wait for the garbage collector to delete the pods +Jun 24 16:42:51.984: INFO: Deleting ReplicationController wrapped-volume-race-12a9e1c6-969f-11e9-8bcb-526dc0a539dd took: 10.408039ms +Jun 24 16:42:52.285: INFO: Terminating ReplicationController wrapped-volume-race-12a9e1c6-969f-11e9-8bcb-526dc0a539dd pods took: 300.367776ms +STEP: Creating RC which spawns configmap-volume pods +Jun 24 16:43:37.206: INFO: Pod name wrapped-volume-race-369ed405-969f-11e9-8bcb-526dc0a539dd: Found 0 pods out of 5 +Jun 24 16:43:42.214: INFO: Pod name wrapped-volume-race-369ed405-969f-11e9-8bcb-526dc0a539dd: Found 5 pods out of 5 +STEP: Ensuring each pod is running +STEP: deleting ReplicationController wrapped-volume-race-369ed405-969f-11e9-8bcb-526dc0a539dd in namespace emptydir-wrapper-5116, will wait for the garbage collector to delete the pods +Jun 24 16:43:54.349: INFO: Deleting ReplicationController wrapped-volume-race-369ed405-969f-11e9-8bcb-526dc0a539dd took: 6.828991ms +Jun 24 16:43:54.649: INFO: Terminating ReplicationController wrapped-volume-race-369ed405-969f-11e9-8bcb-526dc0a539dd pods took: 300.548953ms +STEP: Cleaning up the configMaps +[AfterEach] [sig-storage] EmptyDir wrapper volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:44:37.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-wrapper-5116" for this suite. +Jun 24 16:44:45.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:44:45.282: INFO: namespace emptydir-wrapper-5116 deletion completed in 8.099177449s + +• [SLOW TEST:187.821 seconds] +[sig-storage] EmptyDir wrapper volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 + should not cause race condition when used for configmaps [Serial] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:44:45.282: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Jun 24 16:44:45.338: INFO: Waiting up to 5m0s for pod "pod-5f3c7a94-969f-11e9-8bcb-526dc0a539dd" in namespace "emptydir-6168" to be "success or failure" +Jun 24 16:44:45.342: INFO: Pod 
"pod-5f3c7a94-969f-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.857682ms +Jun 24 16:44:47.346: INFO: Pod "pod-5f3c7a94-969f-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008037494s +Jun 24 16:44:49.350: INFO: Pod "pod-5f3c7a94-969f-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012094437s +STEP: Saw pod success +Jun 24 16:44:49.350: INFO: Pod "pod-5f3c7a94-969f-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:44:49.354: INFO: Trying to get logs from node minion pod pod-5f3c7a94-969f-11e9-8bcb-526dc0a539dd container test-container: +STEP: delete the pod +Jun 24 16:44:49.382: INFO: Waiting for pod pod-5f3c7a94-969f-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:44:49.384: INFO: Pod pod-5f3c7a94-969f-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:44:49.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-6168" for this suite. +Jun 24 16:44:55.399: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:44:55.479: INFO: namespace emptydir-6168 deletion completed in 6.091357366s + +• [SLOW TEST:10.197 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-api-machinery] Aggregator + Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:44:55.481: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename aggregator +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-api-machinery] Aggregator + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:69 +[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Registering the sample API server. 
+Jun 24 16:44:56.403: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set +Jun 24 16:44:58.469: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991496, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991496, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991496, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991496, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-65db6755fc\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 24 16:45:00.473: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991496, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991496, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991496, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991496, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-65db6755fc\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 24 16:45:02.474: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991496, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991496, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991496, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991496, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-65db6755fc\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 24 16:45:05.715: INFO: Waited 1.238086822s for the sample-apiserver to be ready to handle requests. 
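+Readiness of the aggregated API can also be confirmed directly. The upstream sample API server registers an APIService for the wardle.k8s.io group; a sketch of the manual check (the APIService name is taken from that sample and may differ in other setups):
+
+```sh
+# The APIService should report Available=True once its backing deployment
+# and service are up, and its resources should appear in discovery.
+kubectl get apiservice v1alpha1.wardle.k8s.io \
+  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'
+kubectl api-resources --api-group=wardle.k8s.io
+```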
+[AfterEach] [sig-api-machinery] Aggregator + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:60 +[AfterEach] [sig-api-machinery] Aggregator + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:45:06.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "aggregator-6534" for this suite. +Jun 24 16:45:12.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:45:12.395: INFO: namespace aggregator-6534 deletion completed in 6.250367961s + +• [SLOW TEST:16.914 seconds] +[sig-api-machinery] Aggregator +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[k8s.io] Pods + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:45:12.395: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 16:45:12.434: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: creating the pod +STEP: submitting the pod to kubernetes +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:45:14.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-1549" for this suite. 
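+The websocket test above drives the same pod `exec` subresource that `kubectl exec` uses (kubectl negotiates SPDY rather than websockets, but hits the identical endpoint). A hand-run equivalent (names are illustrative):
+
+```sh
+kubectl run ws-exec-demo --image=busybox:1.29 --restart=Never -- sleep 3600
+kubectl wait --for=condition=Ready pod/ws-exec-demo --timeout=60s
+kubectl exec ws-exec-demo -- echo remote-command-ok
+kubectl delete pod ws-exec-demo
+```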
+Jun 24 16:46:00.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:46:00.710: INFO: namespace pods-1549 deletion completed in 46.099792157s + +• [SLOW TEST:48.316 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should support remote command execution over websockets [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Docker Containers + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:46:00.715: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename containers +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test override arguments +Jun 24 16:46:00.754: INFO: Waiting up to 5m0s for pod "client-containers-8c30afe1-969f-11e9-8bcb-526dc0a539dd" in namespace "containers-2971" to be "success or failure" +Jun 24 16:46:00.760: INFO: Pod "client-containers-8c30afe1-969f-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.588536ms +Jun 24 16:46:02.764: INFO: Pod "client-containers-8c30afe1-969f-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010043588s +STEP: Saw pod success +Jun 24 16:46:02.764: INFO: Pod "client-containers-8c30afe1-969f-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:46:02.768: INFO: Trying to get logs from node minion pod client-containers-8c30afe1-969f-11e9-8bcb-526dc0a539dd container test-container: +STEP: delete the pod +Jun 24 16:46:02.797: INFO: Waiting for pod client-containers-8c30afe1-969f-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:46:02.800: INFO: Pod client-containers-8c30afe1-969f-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [k8s.io] Docker Containers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:46:02.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "containers-2971" for this suite. 
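+The argument override verified above maps the Kubernetes `command` and `args` fields onto the image's Docker ENTRYPOINT and CMD. A minimal sketch (pod name and values are illustrative):
+
+```sh
+kubectl apply -f - <<'EOF'
+apiVersion: v1
+kind: Pod
+metadata:
+  name: args-override-demo
+spec:
+  restartPolicy: Never
+  containers:
+  - name: test-container
+    image: busybox:1.29
+    command: ["echo"]              # overrides the image ENTRYPOINT
+    args: ["overridden", "args"]   # overrides the image CMD
+EOF
+# Once the pod has completed, expect "overridden args":
+kubectl logs args-override-demo
+```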
+Jun 24 16:46:08.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:46:08.899: INFO: namespace containers-2971 deletion completed in 6.095408702s + +• [SLOW TEST:8.185 seconds] +[k8s.io] Docker Containers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should support rollover [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:46:08.906: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] deployment should support rollover [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 16:46:08.953: INFO: Pod name rollover-pod: Found 0 pods out of 1 +Jun 24 16:46:13.958: INFO: Pod name rollover-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Jun 24 16:46:13.958: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready +Jun 24 16:46:15.962: INFO: Creating deployment "test-rollover-deployment" +Jun 24 16:46:15.971: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations +Jun 24 16:46:17.979: INFO: Check revision of new replica set for deployment "test-rollover-deployment" +Jun 24 16:46:17.985: INFO: Ensure that both replica sets have 1 created replica +Jun 24 16:46:17.991: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update +Jun 24 16:46:17.998: INFO: Updating deployment test-rollover-deployment +Jun 24 16:46:17.998: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller +Jun 24 16:46:20.004: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 +Jun 24 16:46:20.017: INFO: Make sure deployment "test-rollover-deployment" is complete +Jun 24 16:46:20.030: INFO: all replica sets need to contain the pod-template-hash label +Jun 24 16:46:20.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991576, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991576, loc:(*time.Location)(0x8a1a0e0)}}, 
Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991580, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991575, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 24 16:46:22.038: INFO: all replica sets need to contain the pod-template-hash label +Jun 24 16:46:22.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991576, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991576, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991580, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991575, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 24 16:46:24.038: INFO: all replica sets need to contain the pod-template-hash label +Jun 24 16:46:24.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991576, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991576, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991580, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991575, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 24 16:46:26.038: INFO: all replica sets need to contain the pod-template-hash label +Jun 24 16:46:26.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991576, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991576, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991580, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991575, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 24 16:46:28.038: INFO: all replica sets need to contain the pod-template-hash label +Jun 24 16:46:28.039: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991576, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991576, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991580, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991575, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 24 16:46:30.045: INFO: +Jun 24 16:46:30.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991576, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991576, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991590, loc:(*time.Location)(0x8a1a0e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696991575, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-766b4d6c9d\" is progressing."}}, CollisionCount:(*int32)(nil)} +Jun 24 16:46:32.038: INFO: +Jun 24 16:46:32.038: INFO: Ensure that both old replica sets have no replicas +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 24 16:46:32.048: INFO: Deployment "test-rollover-deployment": +&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-2643,SelfLink:/apis/apps/v1/namespaces/deployment-2643/deployments/test-rollover-deployment,UID:95426a8b-969f-11e9-b70d-fa163ef83c94,ResourceVersion:15511,Generation:2,CreationTimestamp:2019-06-24 16:46:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2019-06-24 16:46:16 +0000 UTC 2019-06-24 16:46:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2019-06-24 16:46:30 +0000 UTC 2019-06-24 16:46:15 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-766b4d6c9d" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} + +Jun 24 16:46:32.051: INFO: New ReplicaSet "test-rollover-deployment-766b4d6c9d" of Deployment "test-rollover-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-766b4d6c9d,GenerateName:,Namespace:deployment-2643,SelfLink:/apis/apps/v1/namespaces/deployment-2643/replicasets/test-rollover-deployment-766b4d6c9d,UID:9678fd03-969f-11e9-b70d-fa163ef83c94,ResourceVersion:15501,Generation:2,CreationTimestamp:2019-06-24 16:46:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 766b4d6c9d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 95426a8b-969f-11e9-b70d-fa163ef83c94 0xc002abc437 0xc002abc438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 766b4d6c9d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 766b4d6c9d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil 
nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} +Jun 24 16:46:32.051: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": +Jun 24 16:46:32.051: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-2643,SelfLink:/apis/apps/v1/namespaces/deployment-2643/replicasets/test-rollover-controller,UID:9113ea7b-969f-11e9-b70d-fa163ef83c94,ResourceVersion:15510,Generation:2,CreationTimestamp:2019-06-24 16:46:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 95426a8b-969f-11e9-b70d-fa163ef83c94 0xc002abc287 0xc002abc288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 24 16:46:32.052: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-6455657675,GenerateName:,Namespace:deployment-2643,SelfLink:/apis/apps/v1/namespaces/deployment-2643/replicasets/test-rollover-deployment-6455657675,UID:9544748e-969f-11e9-b70d-fa163ef83c94,ResourceVersion:15472,Generation:2,CreationTimestamp:2019-06-24 16:46:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6455657675,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 95426a8b-969f-11e9-b70d-fa163ef83c94 0xc002abc357 0xc002abc358}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 6455657675,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 6455657675,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 24 16:46:32.055: INFO: Pod "test-rollover-deployment-766b4d6c9d-ls8bd" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-766b4d6c9d-ls8bd,GenerateName:test-rollover-deployment-766b4d6c9d-,Namespace:deployment-2643,SelfLink:/api/v1/namespaces/deployment-2643/pods/test-rollover-deployment-766b4d6c9d-ls8bd,UID:967dfb28-969f-11e9-b70d-fa163ef83c94,ResourceVersion:15483,Generation:0,CreationTimestamp:2019-06-24 16:46:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 766b4d6c9d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-766b4d6c9d 9678fd03-969f-11e9-b70d-fa163ef83c94 0xc002abcf87 0xc002abcf88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wvzpn {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-wvzpn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-wvzpn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002abd020} {node.kubernetes.io/unreachable Exists NoExecute 0xc002abd040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:46:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:46:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:46:19 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:46:18 +0000 UTC }],Message:,Reason:,HostIP:10.1.0.12,PodIP:10.251.128.6,StartTime:2019-06-24 16:46:18 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2019-06-24 16:46:19 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://ec7a6b28f0f1467f86999f35d77575800154f7844d61cb6a6237e049bc61aa9f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:46:32.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-2643" for this suite. 
+Jun 24 16:46:38.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:46:38.156: INFO: namespace deployment-2643 deletion completed in 6.097800657s + +• [SLOW TEST:29.250 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + deployment should support rollover [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSS +------------------------------ +[sig-network] DNS + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] DNS + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:46:38.157: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename dns +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2711.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2711.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2711.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done + +STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2711.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2711.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2711.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done + +STEP: creating a pod to probe /etc/hosts +STEP: submitting the pod to kubernetes +STEP: retrieving the pod +STEP: looking for the results for each expected name from probers +Jun 24 16:46:42.261: INFO: DNS probes using dns-2711/dns-test-a2829778-969f-11e9-8bcb-526dc0a539dd succeeded + +STEP: deleting the pod +[AfterEach] [sig-network] DNS + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:46:42.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "dns-2711" for this suite. +Jun 24 16:46:48.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:46:48.390: INFO: namespace dns-2711 deletion completed in 6.10015397s + +• [SLOW TEST:10.233 seconds] +[sig-network] DNS +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:46:48.391: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-a89cf2ce-969f-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume configMaps +Jun 24 16:46:48.443: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a89d6897-969f-11e9-8bcb-526dc0a539dd" in namespace "projected-5724" to be "success or failure" +Jun 24 16:46:48.449: INFO: Pod "pod-projected-configmaps-a89d6897-969f-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257253ms +Jun 24 16:46:50.453: INFO: Pod "pod-projected-configmaps-a89d6897-969f-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010100625s +Jun 24 16:46:52.457: INFO: Pod "pod-projected-configmaps-a89d6897-969f-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013982253s +STEP: Saw pod success +Jun 24 16:46:52.457: INFO: Pod "pod-projected-configmaps-a89d6897-969f-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:46:52.461: INFO: Trying to get logs from node minion pod pod-projected-configmaps-a89d6897-969f-11e9-8bcb-526dc0a539dd container projected-configmap-volume-test: +STEP: delete the pod +Jun 24 16:46:52.489: INFO: Waiting for pod pod-projected-configmaps-a89d6897-969f-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:46:52.492: INFO: Pod pod-projected-configmaps-a89d6897-969f-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:46:52.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5724" for this suite. +Jun 24 16:46:58.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:46:58.599: INFO: namespace projected-5724 deletion completed in 6.102770476s + +• [SLOW TEST:10.208 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-apps] Deployment + deployment should delete old replica sets [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:46:58.599: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename deployment +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:65 +[It] deployment should delete old replica sets [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 16:46:58.638: INFO: Pod name cleanup-pod: Found 0 pods out of 1 +Jun 24 16:47:03.642: INFO: Pod name cleanup-pod: Found 1 pods out of 1 +STEP: ensuring each pod is running +Jun 24 16:47:03.642: INFO: Creating deployment test-cleanup-deployment +STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:59 +Jun 24 16:47:03.666: INFO: Deployment "test-cleanup-deployment": 
+&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-9269,SelfLink:/apis/apps/v1/namespaces/deployment-9269/deployments/test-cleanup-deployment,UID:b1ae9ae3-969f-11e9-b70d-fa163ef83c94,ResourceVersion:15682,Generation:1,CreationTimestamp:2019-06-24 16:47:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} + +Jun 24 16:47:03.677: INFO: New ReplicaSet "test-cleanup-deployment-55cbfbc8f5" of Deployment "test-cleanup-deployment": +&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55cbfbc8f5,GenerateName:,Namespace:deployment-9269,SelfLink:/apis/apps/v1/namespaces/deployment-9269/replicasets/test-cleanup-deployment-55cbfbc8f5,UID:b1b10ed7-969f-11e9-b70d-fa163ef83c94,ResourceVersion:15684,Generation:1,CreationTimestamp:2019-06-24 16:47:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55cbfbc8f5,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b1ae9ae3-969f-11e9-b70d-fa163ef83c94 0xc002088cc7 0xc002088cc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,pod-template-hash: 55cbfbc8f5,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55cbfbc8f5,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} +Jun 24 16:47:03.677: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": +Jun 24 16:47:03.677: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-9269,SelfLink:/apis/apps/v1/namespaces/deployment-9269/replicasets/test-cleanup-controller,UID:aeb151c1-969f-11e9-b70d-fa163ef83c94,ResourceVersion:15683,Generation:1,CreationTimestamp:2019-06-24 16:46:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b1ae9ae3-969f-11e9-b70d-fa163ef83c94 0xc002088bf7 0xc002088bf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} +Jun 24 16:47:03.684: INFO: Pod "test-cleanup-controller-4z7m7" is available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-4z7m7,GenerateName:test-cleanup-controller-,Namespace:deployment-9269,SelfLink:/api/v1/namespaces/deployment-9269/pods/test-cleanup-controller-4z7m7,UID:aeb22343-969f-11e9-b70d-fa163ef83c94,ResourceVersion:15675,Generation:0,CreationTimestamp:2019-06-24 16:46:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller aeb151c1-969f-11e9-b70d-fa163ef83c94 0xc0020895a7 0xc0020895a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xszl9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xszl9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-xszl9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002089620} {node.kubernetes.io/unreachable Exists NoExecute 0xc002089640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:46:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:46:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:46:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:46:58 +0000 UTC }],Message:,Reason:,HostIP:10.1.0.12,PodIP:10.251.128.5,StartTime:2019-06-24 16:46:58 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2019-06-24 16:46:59 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine 
docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://b66f865f5777a98201b1c82a8b5d515fe98f5f9ddc58f8b33edef0613b4a24ab}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +Jun 24 16:47:03.685: INFO: Pod "test-cleanup-deployment-55cbfbc8f5-dsrvq" is not available: +&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55cbfbc8f5-dsrvq,GenerateName:test-cleanup-deployment-55cbfbc8f5-,Namespace:deployment-9269,SelfLink:/api/v1/namespaces/deployment-9269/pods/test-cleanup-deployment-55cbfbc8f5-dsrvq,UID:b1b1c341-969f-11e9-b70d-fa163ef83c94,ResourceVersion:15687,Generation:0,CreationTimestamp:2019-06-24 16:47:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55cbfbc8f5,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55cbfbc8f5 b1b10ed7-969f-11e9-b70d-fa163ef83c94 0xc002089717 0xc002089718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-xszl9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-xszl9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-xszl9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002089790} {node.kubernetes.io/unreachable Exists NoExecute 0xc0020897b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:47:03 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} +[AfterEach] [sig-apps] Deployment + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:47:03.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "deployment-9269" for this suite. 
+Jun 24 16:47:09.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:47:09.808: INFO: namespace deployment-9269 deletion completed in 6.10978671s + +• [SLOW TEST:11.209 seconds] +[sig-apps] Deployment +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + deployment should delete old replica sets [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] ConfigMap + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:47:09.809: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name configmap-test-volume-map-b55fd317-969f-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume configMaps +Jun 24 16:47:09.855: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5607d0e-969f-11e9-8bcb-526dc0a539dd" in namespace "configmap-3157" to be "success or failure" +Jun 24 16:47:09.861: INFO: Pod "pod-configmaps-b5607d0e-969f-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.191573ms +Jun 24 16:47:11.865: INFO: Pod "pod-configmaps-b5607d0e-969f-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010339074s +STEP: Saw pod success +Jun 24 16:47:11.866: INFO: Pod "pod-configmaps-b5607d0e-969f-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:47:11.869: INFO: Trying to get logs from node minion pod pod-configmaps-b5607d0e-969f-11e9-8bcb-526dc0a539dd container configmap-volume-test: +STEP: delete the pod +Jun 24 16:47:11.896: INFO: Waiting for pod pod-configmaps-b5607d0e-969f-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:47:11.901: INFO: Pod pod-configmaps-b5607d0e-969f-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:47:11.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-3157" for this suite. 
+Jun 24 16:47:17.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:47:17.999: INFO: namespace configmap-3157 deletion completed in 6.093096056s + +• [SLOW TEST:8.190 seconds] +[sig-storage] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 + should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-storage] EmptyDir volumes + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:47:17.999: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir volume type on node default medium +Jun 24 16:47:18.047: INFO: Waiting up to 5m0s for pod "pod-ba41b91c-969f-11e9-8bcb-526dc0a539dd" in namespace "emptydir-9717" to be "success or failure" +Jun 24 16:47:18.050: INFO: Pod "pod-ba41b91c-969f-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.967082ms +Jun 24 16:47:20.054: INFO: Pod "pod-ba41b91c-969f-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007148225s +STEP: Saw pod success +Jun 24 16:47:20.054: INFO: Pod "pod-ba41b91c-969f-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:47:20.058: INFO: Trying to get logs from node minion pod pod-ba41b91c-969f-11e9-8bcb-526dc0a539dd container test-container: +STEP: delete the pod +Jun 24 16:47:20.084: INFO: Waiting for pod pod-ba41b91c-969f-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:47:20.087: INFO: Pod pod-ba41b91c-969f-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:47:20.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-9717" for this suite. 
+Jun 24 16:47:26.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:47:26.193: INFO: namespace emptydir-9717 deletion completed in 6.102721616s + +• [SLOW TEST:8.194 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Downward API volume + should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:47:26.194: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating the pod +Jun 24 16:47:28.777: INFO: Successfully updated pod "annotationupdatebf23b23d-969f-11e9-8bcb-526dc0a539dd" +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:47:30.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-336" for this suite. 
+Jun 24 16:47:52.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:47:52.915: INFO: namespace downward-api-336 deletion completed in 22.103059528s + +• [SLOW TEST:26.721 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should update annotations on modification [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] Downward API volume + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:47:52.918: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 24 16:47:53.021: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf1a7b96-969f-11e9-8bcb-526dc0a539dd" in namespace "downward-api-5656" to be "success or failure" +Jun 24 16:47:53.024: INFO: Pod "downwardapi-volume-cf1a7b96-969f-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245519ms +Jun 24 16:47:55.028: INFO: Pod "downwardapi-volume-cf1a7b96-969f-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006635576s +STEP: Saw pod success +Jun 24 16:47:55.028: INFO: Pod "downwardapi-volume-cf1a7b96-969f-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:47:55.031: INFO: Trying to get logs from node minion pod downwardapi-volume-cf1a7b96-969f-11e9-8bcb-526dc0a539dd container client-container: +STEP: delete the pod +Jun 24 16:47:55.054: INFO: Waiting for pod downwardapi-volume-cf1a7b96-969f-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:47:55.059: INFO: Pod downwardapi-volume-cf1a7b96-969f-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:47:55.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-5656" for this suite. 
+Jun 24 16:48:01.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:48:01.158: INFO: namespace downward-api-5656 deletion completed in 6.095886248s + +• [SLOW TEST:8.241 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide container's memory request [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[k8s.io] Variable Expansion + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:48:01.158: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename var-expansion +STEP: Waiting for a default service account to be provisioned in namespace +[It] should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test substitution in container's args +Jun 24 16:48:01.200: INFO: Waiting up to 5m0s for pod "var-expansion-d3fb3e09-969f-11e9-8bcb-526dc0a539dd" in namespace "var-expansion-5955" to be "success or failure" +Jun 24 16:48:01.204: INFO: Pod "var-expansion-d3fb3e09-969f-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.923785ms +Jun 24 16:48:03.208: INFO: Pod "var-expansion-d3fb3e09-969f-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007971645s +STEP: Saw pod success +Jun 24 16:48:03.208: INFO: Pod "var-expansion-d3fb3e09-969f-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:48:03.212: INFO: Trying to get logs from node minion pod var-expansion-d3fb3e09-969f-11e9-8bcb-526dc0a539dd container dapi-container: +STEP: delete the pod +Jun 24 16:48:03.239: INFO: Waiting for pod var-expansion-d3fb3e09-969f-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:48:03.242: INFO: Pod var-expansion-d3fb3e09-969f-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [k8s.io] Variable Expansion + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:48:03.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "var-expansion-5955" for this suite. 
+Jun 24 16:48:09.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:48:09.341: INFO: namespace var-expansion-5955 deletion completed in 6.095346646s + +• [SLOW TEST:8.183 seconds] +[k8s.io] Variable Expansion +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should allow substituting values in a container's args [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +[sig-storage] Projected secret + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:48:09.341: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name projected-secret-test-d8e3fe14-969f-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume secrets +Jun 24 16:48:09.442: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d8e4bd86-969f-11e9-8bcb-526dc0a539dd" in namespace "projected-9445" to be "success or failure" +Jun 24 16:48:09.451: INFO: Pod "pod-projected-secrets-d8e4bd86-969f-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032099ms +Jun 24 16:48:11.455: INFO: Pod "pod-projected-secrets-d8e4bd86-969f-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011641599s +STEP: Saw pod success +Jun 24 16:48:11.455: INFO: Pod "pod-projected-secrets-d8e4bd86-969f-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:48:11.459: INFO: Trying to get logs from node minion pod pod-projected-secrets-d8e4bd86-969f-11e9-8bcb-526dc0a539dd container secret-volume-test: +STEP: delete the pod +Jun 24 16:48:11.483: INFO: Waiting for pod pod-projected-secrets-d8e4bd86-969f-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:48:11.486: INFO: Pod pod-projected-secrets-d8e4bd86-969f-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected secret + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:48:11.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-9445" for this suite. 
+Jun 24 16:48:17.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:48:17.595: INFO: namespace projected-9445 deletion completed in 6.105821047s + +• [SLOW TEST:8.254 seconds] +[sig-storage] Projected secret +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 + should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[sig-node] Downward API + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:48:17.595: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward api env vars +Jun 24 16:48:17.638: INFO: Waiting up to 5m0s for pod "downward-api-ddc79946-969f-11e9-8bcb-526dc0a539dd" in namespace "downward-api-7702" to be "success or failure" +Jun 24 16:48:17.641: INFO: Pod "downward-api-ddc79946-969f-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.109369ms +Jun 24 16:48:19.645: INFO: Pod "downward-api-ddc79946-969f-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006985499s +STEP: Saw pod success +Jun 24 16:48:19.645: INFO: Pod "downward-api-ddc79946-969f-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:48:19.648: INFO: Trying to get logs from node minion pod downward-api-ddc79946-969f-11e9-8bcb-526dc0a539dd container dapi-container: +STEP: delete the pod +Jun 24 16:48:19.674: INFO: Waiting for pod downward-api-ddc79946-969f-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:48:19.677: INFO: Pod downward-api-ddc79946-969f-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:48:19.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-7702" for this suite. 
+Jun 24 16:48:25.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:48:25.774: INFO: namespace downward-api-7702 deletion completed in 6.093483115s + +• [SLOW TEST:8.179 seconds] +[sig-node] Downward API +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod + should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:48:25.776: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 +[It] should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:48:29.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-2378" for this suite. 
+Jun 24 16:48:35.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:48:35.936: INFO: namespace kubelet-test-2378 deletion completed in 6.101521337s + +• [SLOW TEST:10.160 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 + should have an terminated reason [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:48:35.937: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. +[It] should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the pod with lifecycle hook +STEP: delete the pod with lifecycle hook +Jun 24 16:48:42.024: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 24 16:48:42.036: INFO: Pod pod-with-prestop-http-hook still exists +Jun 24 16:48:44.037: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 24 16:48:44.041: INFO: Pod pod-with-prestop-http-hook still exists +Jun 24 16:48:46.037: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 24 16:48:46.041: INFO: Pod pod-with-prestop-http-hook still exists +Jun 24 16:48:48.037: INFO: Waiting for pod pod-with-prestop-http-hook to disappear +Jun 24 16:48:48.040: INFO: Pod pod-with-prestop-http-hook no longer exists +STEP: check prestop hook +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:48:48.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-4042" for this suite. 
+Jun 24 16:49:10.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:49:10.155: INFO: namespace container-lifecycle-hook-4042 deletion completed in 22.097188797s + +• [SLOW TEST:34.219 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute prestop http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl replace + should update a single-container pod's image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:49:10.156: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[BeforeEach] [k8s.io] Kubectl replace + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1619 +[It] should update a single-container pod's image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: running the image docker.io/library/nginx:1.14-alpine +Jun 24 16:49:10.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-4922' +Jun 24 16:49:10.851: INFO: stderr: "" +Jun 24 16:49:10.851: INFO: stdout: "pod/e2e-test-nginx-pod created\n" +STEP: verifying the pod e2e-test-nginx-pod is running +STEP: verifying the pod e2e-test-nginx-pod was created +Jun 24 16:49:15.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pod e2e-test-nginx-pod --namespace=kubectl-4922 -o json' +Jun 24 16:49:16.000: INFO: stderr: "" +Jun 24 16:49:16.000: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2019-06-24T16:49:10Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-4922\",\n \"resourceVersion\": \"16168\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4922/pods/e2e-test-nginx-pod\",\n \"uid\": \"fd7de833-969f-11e9-b70d-fa163ef83c94\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": 
\"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-4lqxn\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"minion\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-4lqxn\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-4lqxn\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-24T16:49:10Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-24T16:49:12Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-24T16:49:12Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2019-06-24T16:49:10Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://4801a4228b9cc403b7811c0d2ecac995933c834b142f757e8461a26ab8ef9d6a\",\n \"image\": \"nginx:1.14-alpine\",\n \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2019-06-24T16:49:12Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.1.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.251.128.5\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2019-06-24T16:49:10Z\"\n }\n}\n" +STEP: replace the image in the pod +Jun 24 16:49:16.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 replace -f - --namespace=kubectl-4922' +Jun 24 16:49:16.345: INFO: stderr: "" +Jun 24 16:49:16.345: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" +STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 +[AfterEach] [k8s.io] Kubectl replace + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1624 +Jun 24 16:49:16.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete pods e2e-test-nginx-pod --namespace=kubectl-4922' +Jun 24 16:49:18.985: INFO: stderr: "" +Jun 24 16:49:18.985: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:49:18.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4922" for this suite. 
+Jun 24 16:49:24.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:49:25.095: INFO: namespace kubectl-4922 deletion completed in 6.107170761s + +• [SLOW TEST:14.940 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl replace + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should update a single-container pod's image [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] Secrets + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:49:25.096: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename secrets +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating secret with name secret-test-060307ad-96a0-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume secrets +Jun 24 16:49:25.182: INFO: Waiting up to 5m0s for pod "pod-secrets-0609ed1e-96a0-11e9-8bcb-526dc0a539dd" in namespace "secrets-7320" to be "success or failure" +Jun 24 16:49:25.187: INFO: Pod "pod-secrets-0609ed1e-96a0-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.94748ms +Jun 24 16:49:27.191: INFO: Pod "pod-secrets-0609ed1e-96a0-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009027068s +STEP: Saw pod success +Jun 24 16:49:27.191: INFO: Pod "pod-secrets-0609ed1e-96a0-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:49:27.195: INFO: Trying to get logs from node minion pod pod-secrets-0609ed1e-96a0-11e9-8bcb-526dc0a539dd container secret-volume-test: +STEP: delete the pod +Jun 24 16:49:27.224: INFO: Waiting for pod pod-secrets-0609ed1e-96a0-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:49:27.227: INFO: Pod pod-secrets-0609ed1e-96a0-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Secrets + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:49:27.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "secrets-7320" for this suite. 
+Jun 24 16:49:33.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:49:33.329: INFO: namespace secrets-7320 deletion completed in 6.09940097s +STEP: Destroying namespace "secret-namespace-4944" for this suite. +Jun 24 16:49:39.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:49:39.438: INFO: namespace secret-namespace-4944 deletion completed in 6.108803634s + +• [SLOW TEST:14.342 seconds] +[sig-storage] Secrets +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 + should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-node] ConfigMap + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:49:39.438: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename configmap +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap configmap-9307/configmap-test-0e91a39c-96a0-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume configMaps +Jun 24 16:49:39.495: INFO: Waiting up to 5m0s for pod "pod-configmaps-0e92213c-96a0-11e9-8bcb-526dc0a539dd" in namespace "configmap-9307" to be "success or failure" +Jun 24 16:49:39.502: INFO: Pod "pod-configmaps-0e92213c-96a0-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.168952ms +Jun 24 16:49:41.506: INFO: Pod "pod-configmaps-0e92213c-96a0-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011026273s +STEP: Saw pod success +Jun 24 16:49:41.506: INFO: Pod "pod-configmaps-0e92213c-96a0-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:49:41.509: INFO: Trying to get logs from node minion pod pod-configmaps-0e92213c-96a0-11e9-8bcb-526dc0a539dd container env-test: +STEP: delete the pod +Jun 24 16:49:41.532: INFO: Waiting for pod pod-configmaps-0e92213c-96a0-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:49:41.549: INFO: Pod pod-configmaps-0e92213c-96a0-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-node] ConfigMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:49:41.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "configmap-9307" for this suite. 
+Jun 24 16:49:47.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:49:47.670: INFO: namespace configmap-9307 deletion completed in 6.11777199s + +• [SLOW TEST:8.232 seconds] +[sig-node] ConfigMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32 + should be consumable via the environment [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[sig-node] Downward API + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:49:47.671: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[It] should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward api env vars +Jun 24 16:49:47.712: INFO: Waiting up to 5m0s for pod "downward-api-1377df06-96a0-11e9-8bcb-526dc0a539dd" in namespace "downward-api-4458" to be "success or failure" +Jun 24 16:49:47.723: INFO: Pod "downward-api-1377df06-96a0-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.213664ms +Jun 24 16:49:49.728: INFO: Pod "downward-api-1377df06-96a0-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015110543s +STEP: Saw pod success +Jun 24 16:49:49.728: INFO: Pod "downward-api-1377df06-96a0-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:49:49.732: INFO: Trying to get logs from node minion pod downward-api-1377df06-96a0-11e9-8bcb-526dc0a539dd container dapi-container: +STEP: delete the pod +Jun 24 16:49:49.764: INFO: Waiting for pod downward-api-1377df06-96a0-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:49:49.769: INFO: Pod downward-api-1377df06-96a0-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-node] Downward API + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:49:49.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-4458" for this suite. 
+Jun 24 16:49:55.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:49:55.875: INFO: namespace downward-api-4458 deletion completed in 6.097193167s + +• [SLOW TEST:8.204 seconds] +[sig-node] Downward API +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:38 + should provide host IP as an env var [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSS +------------------------------ +[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class + should be submitted and removed [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] [sig-node] Pods Extended + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:49:55.875: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods Set QOS Class + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:177 +[It] should be submitted and removed [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying QOS class is set on the pod +[AfterEach] [k8s.io] [sig-node] Pods Extended + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:49:55.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-2554" for this suite. 
+Jun 24 16:50:17.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:50:18.027: INFO: namespace pods-2554 deletion completed in 22.090899484s + +• [SLOW TEST:22.152 seconds] +[k8s.io] [sig-node] Pods Extended +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + [k8s.io] Pods Set QOS Class + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be submitted and removed [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[sig-storage] Projected configMap + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:50:18.027: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename projected +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating configMap with name projected-configmap-test-volume-259115ee-96a0-11e9-8bcb-526dc0a539dd +STEP: Creating a pod to test consume configMaps +Jun 24 16:50:18.081: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2591ae44-96a0-11e9-8bcb-526dc0a539dd" in namespace "projected-5577" to be "success or failure" +Jun 24 16:50:18.105: INFO: Pod "pod-projected-configmaps-2591ae44-96a0-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 24.163273ms +Jun 24 16:50:20.109: INFO: Pod "pod-projected-configmaps-2591ae44-96a0-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02816552s +STEP: Saw pod success +Jun 24 16:50:20.109: INFO: Pod "pod-projected-configmaps-2591ae44-96a0-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:50:20.114: INFO: Trying to get logs from node minion pod pod-projected-configmaps-2591ae44-96a0-11e9-8bcb-526dc0a539dd container projected-configmap-volume-test: +STEP: delete the pod +Jun 24 16:50:20.137: INFO: Waiting for pod pod-projected-configmaps-2591ae44-96a0-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:50:20.141: INFO: Pod pod-projected-configmaps-2591ae44-96a0-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Projected configMap + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:50:20.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "projected-5577" for this suite. 
+Jun 24 16:50:26.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:50:26.246: INFO: namespace projected-5577 deletion completed in 6.098277603s + +• [SLOW TEST:8.219 seconds] +[sig-storage] Projected configMap +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 + should be consumable from pods in volume [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] [sig-node] Events + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] [sig-node] Events + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:50:26.247: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename events +STEP: Waiting for a default service account to be provisioned in namespace +[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +STEP: submitting the pod to kubernetes +STEP: verifying the pod is in kubernetes +STEP: retrieving the pod +Jun 24 16:50:28.312: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-2a75d385-96a0-11e9-8bcb-526dc0a539dd,GenerateName:,Namespace:events-5770,SelfLink:/api/v1/namespaces/events-5770/pods/send-events-2a75d385-96a0-11e9-8bcb-526dc0a539dd,UID:2a7679b6-96a0-11e9-b70d-fa163ef83c94,ResourceVersion:16448,Generation:0,CreationTimestamp:2019-06-24 16:50:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 280954326,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-w7t6n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-w7t6n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-w7t6n true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:minion,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002088d60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002088d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:50:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:50:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:50:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-06-24 16:50:26 +0000 UTC }],Message:,Reason:,HostIP:10.1.0.12,PodIP:10.251.128.5,StartTime:2019-06-24 16:50:26 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2019-06-24 16:50:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://21bf12b41730328a4260c1d56659dea29459e3cc7152a79fb3e57c459fdc8175}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} + +STEP: checking for scheduler event about the pod +Jun 24 16:50:30.317: INFO: Saw scheduler event for our pod. +STEP: checking for kubelet event about the pod +Jun 24 16:50:32.321: INFO: Saw kubelet event for our pod. +STEP: deleting the pod +[AfterEach] [k8s.io] [sig-node] Events + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:50:32.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "events-5770" for this suite. 
+Jun 24 16:51:10.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:51:10.442: INFO: namespace events-5770 deletion completed in 38.105572506s + +• [SLOW TEST:44.196 seconds] +[k8s.io] [sig-node] Events +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-storage] EmptyDir volumes + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:51:10.443: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename emptydir +STEP: Waiting for a default service account to be provisioned in namespace +[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test emptydir 0777 on tmpfs +Jun 24 16:51:10.489: INFO: Waiting up to 5m0s for pod "pod-44ce2fbc-96a0-11e9-8bcb-526dc0a539dd" in namespace "emptydir-2490" to be "success or failure" +Jun 24 16:51:10.493: INFO: Pod "pod-44ce2fbc-96a0-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.654103ms +Jun 24 16:51:12.496: INFO: Pod "pod-44ce2fbc-96a0-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007562429s +STEP: Saw pod success +Jun 24 16:51:12.497: INFO: Pod "pod-44ce2fbc-96a0-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:51:12.500: INFO: Trying to get logs from node minion pod pod-44ce2fbc-96a0-11e9-8bcb-526dc0a539dd container test-container: +STEP: delete the pod +Jun 24 16:51:12.526: INFO: Waiting for pod pod-44ce2fbc-96a0-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:51:12.530: INFO: Pod pod-44ce2fbc-96a0-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] EmptyDir volumes + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:51:12.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "emptydir-2490" for this suite. 
+Jun 24 16:51:18.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:51:18.633: INFO: namespace emptydir-2490 deletion completed in 6.091569828s + +• [SLOW TEST:8.190 seconds] +[sig-storage] EmptyDir volumes +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 + should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSS +------------------------------ +[sig-api-machinery] Watchers + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:51:18.644: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename watch +STEP: Waiting for a default service account to be provisioned in namespace +[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating a watch on configmaps with a certain label +STEP: creating a new configmap +STEP: modifying the configmap once +STEP: changing the label value of the configmap +STEP: Expecting to observe a delete notification for the watched object +Jun 24 16:51:18.702: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1447,SelfLink:/api/v1/namespaces/watch-1447/configmaps/e2e-watch-test-label-changed,UID:49b24783-96a0-11e9-b70d-fa163ef83c94,ResourceVersion:16568,Generation:0,CreationTimestamp:2019-06-24 16:51:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} +Jun 24 16:51:18.702: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1447,SelfLink:/api/v1/namespaces/watch-1447/configmaps/e2e-watch-test-label-changed,UID:49b24783-96a0-11e9-b70d-fa163ef83c94,ResourceVersion:16569,Generation:0,CreationTimestamp:2019-06-24 16:51:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +Jun 24 16:51:18.702: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1447,SelfLink:/api/v1/namespaces/watch-1447/configmaps/e2e-watch-test-label-changed,UID:49b24783-96a0-11e9-b70d-fa163ef83c94,ResourceVersion:16570,Generation:0,CreationTimestamp:2019-06-24 16:51:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} +STEP: modifying the configmap a second time +STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements +STEP: changing the label value of the configmap back +STEP: modifying the configmap a third time +STEP: deleting the configmap +STEP: Expecting to observe an add notification for the watched object when the label value was restored +Jun 24 16:51:28.736: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1447,SelfLink:/api/v1/namespaces/watch-1447/configmaps/e2e-watch-test-label-changed,UID:49b24783-96a0-11e9-b70d-fa163ef83c94,ResourceVersion:16587,Generation:0,CreationTimestamp:2019-06-24 16:51:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} +Jun 24 16:51:28.736: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1447,SelfLink:/api/v1/namespaces/watch-1447/configmaps/e2e-watch-test-label-changed,UID:49b24783-96a0-11e9-b70d-fa163ef83c94,ResourceVersion:16588,Generation:0,CreationTimestamp:2019-06-24 16:51:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} +Jun 24 16:51:28.736: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1447,SelfLink:/api/v1/namespaces/watch-1447/configmaps/e2e-watch-test-label-changed,UID:49b24783-96a0-11e9-b70d-fa163ef83c94,ResourceVersion:16589,Generation:0,CreationTimestamp:2019-06-24 16:51:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} +[AfterEach] [sig-api-machinery] Watchers + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:51:28.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "watch-1447" for this suite. 
+Jun 24 16:51:34.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:51:34.838: INFO: namespace watch-1447 deletion completed in 6.09732638s + +• [SLOW TEST:16.195 seconds] +[sig-api-machinery] Watchers +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22 + should observe an object deletion if it stops meeting the requirements of the selector [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[sig-scheduling] SchedulerPredicates [Serial] + validates that NodeSelector is respected if matching [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:51:34.842: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename sched-pred +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 +Jun 24 16:51:34.883: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready +Jun 24 16:51:34.890: INFO: Waiting for terminating namespaces to be deleted... 
+Jun 24 16:51:34.895: INFO: +Logging pods the kubelet thinks is on node minion before test +Jun 24 16:51:34.904: INFO: nodelocaldns-vmsgk from kube-system started at 2019-06-24 15:30:09 +0000 UTC (1 container statuses recorded) +Jun 24 16:51:34.904: INFO: Container node-cache ready: true, restart count 0 +Jun 24 16:51:34.904: INFO: nginx-proxy-minion from kube-system started at (0 container statuses recorded) +Jun 24 16:51:34.904: INFO: coredns-97c4b444f-9954l from kube-system started at 2019-06-24 15:30:06 +0000 UTC (1 container statuses recorded) +Jun 24 16:51:34.904: INFO: Container coredns ready: true, restart count 0 +Jun 24 16:51:34.904: INFO: sonobuoy from heptio-sonobuoy started at 2019-06-24 15:31:39 +0000 UTC (1 container statuses recorded) +Jun 24 16:51:34.904: INFO: Container kube-sonobuoy ready: true, restart count 0 +Jun 24 16:51:34.904: INFO: weave-net-p4t4q from kube-system started at 2019-06-24 15:29:30 +0000 UTC (2 container statuses recorded) +Jun 24 16:51:34.904: INFO: Container weave ready: true, restart count 0 +Jun 24 16:51:34.904: INFO: Container weave-npc ready: true, restart count 0 +Jun 24 16:51:34.904: INFO: kubernetes-dashboard-6c7466966c-v95zd from kube-system started at 2019-06-24 15:30:10 +0000 UTC (1 container statuses recorded) +Jun 24 16:51:34.904: INFO: Container kubernetes-dashboard ready: true, restart count 0 +Jun 24 16:51:34.904: INFO: weave-scope-app-5bcb7f46b9-pv6gl from weave started at 2019-06-24 15:30:48 +0000 UTC (1 container statuses recorded) +Jun 24 16:51:34.904: INFO: Container app ready: true, restart count 0 +Jun 24 16:51:34.904: INFO: weave-scope-agent-mmtsr from weave started at 2019-06-24 15:30:48 +0000 UTC (1 container statuses recorded) +Jun 24 16:51:34.904: INFO: Container agent ready: true, restart count 0 +Jun 24 16:51:34.904: INFO: sonobuoy-systemd-logs-daemon-set-7e1461ca4731443f-8ql79 from heptio-sonobuoy started at 2019-06-24 15:31:43 +0000 UTC (2 container statuses recorded) +Jun 24 16:51:34.904: INFO: Container sonobuoy-systemd-logs-config ready: true, restart count 1 +Jun 24 16:51:34.904: INFO: Container sonobuoy-worker ready: true, restart count 1 +Jun 24 16:51:34.904: INFO: kube-proxy-d8w54 from kube-system started at 2019-06-24 15:29:46 +0000 UTC (1 container statuses recorded) +Jun 24 16:51:34.904: INFO: Container kube-proxy ready: true, restart count 0 +[It] validates that NodeSelector is respected if matching [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Trying to launch a pod without a label to get a node which can launch it. +STEP: Explicitly delete pod here to free the resource it takes. +STEP: Trying to apply a random label on the found node. +STEP: verifying the node has the label kubernetes.io/e2e-549262e3-96a0-11e9-8bcb-526dc0a539dd 42 +STEP: Trying to relaunch the pod, now with labels. +STEP: removing the label kubernetes.io/e2e-549262e3-96a0-11e9-8bcb-526dc0a539dd off the node minion +STEP: verifying the node doesn't have the label kubernetes.io/e2e-549262e3-96a0-11e9-8bcb-526dc0a539dd +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:51:38.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "sched-pred-7924" for this suite. 
+Jun 24 16:52:07.009: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:52:07.103: INFO: namespace sched-pred-7924 deletion completed in 28.105378008s +[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70 + +• [SLOW TEST:32.262 seconds] +[sig-scheduling] SchedulerPredicates [Serial] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22 + validates that NodeSelector is respected if matching [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod + should be possible to delete [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:52:07.105: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[BeforeEach] when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 +[It] should be possible to delete [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:52:07.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-1734" for this suite. 
+Jun 24 16:52:13.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:52:13.271: INFO: namespace kubelet-test-1734 deletion completed in 6.099178016s + +• [SLOW TEST:6.166 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a busybox command that always fails in a pod + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 + should be possible to delete [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSSSSSSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:52:13.272: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. 
+[It] should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the pod with lifecycle hook +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Jun 24 16:54:55.375: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:54:55.382: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:54:57.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:54:57.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:54:59.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:54:59.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:01.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:01.385: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:03.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:03.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:05.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:05.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:07.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:07.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:09.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:09.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:11.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:11.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:13.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:13.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:15.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:15.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:17.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:17.385: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:19.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:19.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:21.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:21.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:23.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:23.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:25.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:25.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:27.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:27.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:29.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:29.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:31.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:31.390: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:33.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:33.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 
24 16:55:35.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:35.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:37.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:37.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:39.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:39.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:41.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:41.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:43.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:43.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:45.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:45.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:47.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:47.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:49.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:49.385: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:51.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:51.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:53.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:53.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:55.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:55.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:57.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:57.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:55:59.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:55:59.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:01.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:01.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:03.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:03.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:05.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:05.385: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:07.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:07.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:09.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:09.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:11.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:11.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:13.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:13.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:15.383: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:15.389: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:17.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:17.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:19.382: INFO: Waiting for 
pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:19.385: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:21.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:21.385: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:23.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:23.386: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:25.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:25.391: INFO: Pod pod-with-poststart-exec-hook still exists +Jun 24 16:56:27.382: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear +Jun 24 16:56:27.386: INFO: Pod pod-with-poststart-exec-hook no longer exists +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:56:27.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-4101" for this suite. +Jun 24 16:56:49.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:56:49.498: INFO: namespace container-lifecycle-hook-4101 deletion completed in 22.107969393s + +• [SLOW TEST:276.226 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute poststart exec hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-cli] Kubectl client [k8s.io] Kubectl api-versions + should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:56:49.498: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubectl +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213 +[It] should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: validating api versions +Jun 24 16:56:49.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 api-versions' +Jun 24 16:56:49.657: INFO: stderr: "" +Jun 24 16:56:49.657: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" +[AfterEach] [sig-cli] Kubectl client + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:56:49.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubectl-4463" for this suite. +Jun 24 16:56:55.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:56:55.759: INFO: namespace kubectl-4463 deletion completed in 6.097441043s + +• [SLOW TEST:6.260 seconds] +[sig-cli] Kubectl client +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 + [k8s.io] Kubectl api-versions + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should check if v1 is in available api versions [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSS +------------------------------ +[k8s.io] Kubelet when scheduling a busybox command in a pod + should print the output to logs [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:56:55.759: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename kubelet-test +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 +[It] should print the output to logs [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[AfterEach] [k8s.io] Kubelet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:56:57.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "kubelet-test-4017" for this suite. 
+Jun 24 16:57:35.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:57:35.923: INFO: namespace kubelet-test-4017 deletion completed in 38.102212095s + +• [SLOW TEST:40.164 seconds] +[k8s.io] Kubelet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when scheduling a busybox command in a pod + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 + should print the output to logs [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSS +------------------------------ +[sig-network] Service endpoints latency + should not be very high [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-network] Service endpoints latency + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:57:35.923: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename svc-latency +STEP: Waiting for a default service account to be provisioned in namespace +[It] should not be very high [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating replication controller svc-latency-rc in namespace svc-latency-2752 +I0624 16:57:35.959550 20 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2752, replica count: 1 +I0624 16:57:37.010361 20 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +I0624 16:57:38.010771 20 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady +Jun 24 16:57:38.125: INFO: Created: latency-svc-mk2d6 +Jun 24 16:57:38.134: INFO: Got endpoints: latency-svc-mk2d6 [23.808241ms] +Jun 24 16:57:38.151: INFO: Created: latency-svc-2dwjw +Jun 24 16:57:38.171: INFO: Got endpoints: latency-svc-2dwjw [36.167015ms] +Jun 24 16:57:38.172: INFO: Created: latency-svc-pgdfx +Jun 24 16:57:38.175: INFO: Got endpoints: latency-svc-pgdfx [40.71996ms] +Jun 24 16:57:38.184: INFO: Created: latency-svc-2bkpq +Jun 24 16:57:38.186: INFO: Got endpoints: latency-svc-2bkpq [51.229851ms] +Jun 24 16:57:38.196: INFO: Created: latency-svc-vkmsk +Jun 24 16:57:38.205: INFO: Got endpoints: latency-svc-vkmsk [69.609021ms] +Jun 24 16:57:38.212: INFO: Created: latency-svc-9vqxk +Jun 24 16:57:38.215: INFO: Got endpoints: latency-svc-9vqxk [79.68785ms] +Jun 24 16:57:38.222: INFO: Created: latency-svc-52wlg +Jun 24 16:57:38.226: INFO: Got endpoints: latency-svc-52wlg [90.479407ms] +Jun 24 16:57:38.235: INFO: Created: latency-svc-n8b47 +Jun 24 16:57:38.249: INFO: Created: latency-svc-47xkd +Jun 24 16:57:38.249: INFO: Got endpoints: latency-svc-n8b47 [114.143197ms] +Jun 24 16:57:38.258: INFO: Got endpoints: latency-svc-47xkd [122.878474ms] +Jun 24 
16:57:38.267: INFO: Created: latency-svc-6mnhn +Jun 24 16:57:38.287: INFO: Got endpoints: latency-svc-6mnhn [151.789464ms] +Jun 24 16:57:38.291: INFO: Created: latency-svc-qmrc5 +Jun 24 16:57:38.294: INFO: Got endpoints: latency-svc-qmrc5 [158.102837ms] +Jun 24 16:57:38.303: INFO: Created: latency-svc-rnwsm +Jun 24 16:57:38.307: INFO: Got endpoints: latency-svc-rnwsm [171.683707ms] +Jun 24 16:57:38.315: INFO: Created: latency-svc-xgrkm +Jun 24 16:57:38.319: INFO: Got endpoints: latency-svc-xgrkm [183.018394ms] +Jun 24 16:57:38.328: INFO: Created: latency-svc-s6mkw +Jun 24 16:57:38.336: INFO: Got endpoints: latency-svc-s6mkw [200.913751ms] +Jun 24 16:57:38.336: INFO: Created: latency-svc-fq6l2 +Jun 24 16:57:38.345: INFO: Created: latency-svc-bddg9 +Jun 24 16:57:38.346: INFO: Got endpoints: latency-svc-fq6l2 [210.38973ms] +Jun 24 16:57:38.356: INFO: Got endpoints: latency-svc-bddg9 [220.316373ms] +Jun 24 16:57:38.357: INFO: Created: latency-svc-6mtc4 +Jun 24 16:57:38.363: INFO: Got endpoints: latency-svc-6mtc4 [27.037222ms] +Jun 24 16:57:38.372: INFO: Created: latency-svc-8jfvm +Jun 24 16:57:38.376: INFO: Got endpoints: latency-svc-8jfvm [204.888362ms] +Jun 24 16:57:38.383: INFO: Created: latency-svc-4r7tb +Jun 24 16:57:38.405: INFO: Created: latency-svc-9wcmk +Jun 24 16:57:38.405: INFO: Got endpoints: latency-svc-4r7tb [230.101095ms] +Jun 24 16:57:38.411: INFO: Got endpoints: latency-svc-9wcmk [224.556333ms] +Jun 24 16:57:38.420: INFO: Created: latency-svc-8szrq +Jun 24 16:57:38.420: INFO: Got endpoints: latency-svc-8szrq [215.035527ms] +Jun 24 16:57:38.431: INFO: Created: latency-svc-qtfb2 +Jun 24 16:57:38.438: INFO: Got endpoints: latency-svc-qtfb2 [223.483128ms] +Jun 24 16:57:38.438: INFO: Created: latency-svc-zkrng +Jun 24 16:57:38.449: INFO: Created: latency-svc-svhgb +Jun 24 16:57:38.449: INFO: Got endpoints: latency-svc-zkrng [223.014835ms] +Jun 24 16:57:38.455: INFO: Got endpoints: latency-svc-svhgb [205.287399ms] +Jun 24 16:57:38.465: INFO: Created: latency-svc-vlppg +Jun 24 16:57:38.469: INFO: Got endpoints: latency-svc-vlppg [210.973223ms] +Jun 24 16:57:38.476: INFO: Created: latency-svc-cx746 +Jun 24 16:57:38.486: INFO: Got endpoints: latency-svc-cx746 [198.803246ms] +Jun 24 16:57:38.487: INFO: Created: latency-svc-89bc6 +Jun 24 16:57:38.494: INFO: Got endpoints: latency-svc-89bc6 [200.524609ms] +Jun 24 16:57:38.496: INFO: Created: latency-svc-7bfpd +Jun 24 16:57:38.517: INFO: Got endpoints: latency-svc-7bfpd [210.044688ms] +Jun 24 16:57:38.519: INFO: Created: latency-svc-2q8m6 +Jun 24 16:57:38.523: INFO: Got endpoints: latency-svc-2q8m6 [204.062494ms] +Jun 24 16:57:38.533: INFO: Created: latency-svc-468r7 +Jun 24 16:57:38.537: INFO: Got endpoints: latency-svc-468r7 [190.861551ms] +Jun 24 16:57:38.546: INFO: Created: latency-svc-blcd6 +Jun 24 16:57:38.549: INFO: Got endpoints: latency-svc-blcd6 [192.890102ms] +Jun 24 16:57:38.557: INFO: Created: latency-svc-xdnqz +Jun 24 16:57:38.566: INFO: Got endpoints: latency-svc-xdnqz [203.052252ms] +Jun 24 16:57:38.567: INFO: Created: latency-svc-g9jsb +Jun 24 16:57:38.577: INFO: Created: latency-svc-954zj +Jun 24 16:57:38.577: INFO: Got endpoints: latency-svc-g9jsb [201.57824ms] +Jun 24 16:57:38.584: INFO: Got endpoints: latency-svc-954zj [178.021778ms] +Jun 24 16:57:38.592: INFO: Created: latency-svc-bb42w +Jun 24 16:57:38.603: INFO: Got endpoints: latency-svc-bb42w [191.995643ms] +Jun 24 16:57:38.604: INFO: Created: latency-svc-xlc79 +Jun 24 16:57:38.613: INFO: Got endpoints: latency-svc-xlc79 [193.55988ms] +Jun 24 16:57:38.613: INFO: 
Created: latency-svc-pbq77 +Jun 24 16:57:38.616: INFO: Got endpoints: latency-svc-pbq77 [177.159957ms] +Jun 24 16:57:38.630: INFO: Created: latency-svc-drlw2 +Jun 24 16:57:38.638: INFO: Got endpoints: latency-svc-drlw2 [188.737593ms] +Jun 24 16:57:38.639: INFO: Created: latency-svc-z8srx +Jun 24 16:57:38.650: INFO: Got endpoints: latency-svc-z8srx [195.118472ms] +Jun 24 16:57:38.663: INFO: Created: latency-svc-6q9vl +Jun 24 16:57:38.666: INFO: Got endpoints: latency-svc-6q9vl [197.30379ms] +Jun 24 16:57:38.673: INFO: Created: latency-svc-ch6p6 +Jun 24 16:57:38.683: INFO: Created: latency-svc-bvmms +Jun 24 16:57:38.683: INFO: Got endpoints: latency-svc-ch6p6 [196.998771ms] +Jun 24 16:57:38.692: INFO: Created: latency-svc-pmpqn +Jun 24 16:57:38.702: INFO: Created: latency-svc-8c8zx +Jun 24 16:57:38.716: INFO: Created: latency-svc-hvgpb +Jun 24 16:57:38.729: INFO: Created: latency-svc-2wkqz +Jun 24 16:57:38.750: INFO: Created: latency-svc-fx2jx +Jun 24 16:57:38.751: INFO: Got endpoints: latency-svc-bvmms [256.554382ms] +Jun 24 16:57:38.763: INFO: Created: latency-svc-lc2s5 +Jun 24 16:57:38.776: INFO: Created: latency-svc-trx7t +Jun 24 16:57:38.780: INFO: Got endpoints: latency-svc-pmpqn [262.504549ms] +Jun 24 16:57:38.789: INFO: Created: latency-svc-kttz9 +Jun 24 16:57:38.800: INFO: Created: latency-svc-gf6hh +Jun 24 16:57:38.811: INFO: Created: latency-svc-rm5v4 +Jun 24 16:57:38.822: INFO: Created: latency-svc-9tzv6 +Jun 24 16:57:38.835: INFO: Got endpoints: latency-svc-8c8zx [312.723174ms] +Jun 24 16:57:38.836: INFO: Created: latency-svc-pst4q +Jun 24 16:57:38.847: INFO: Created: latency-svc-jbvn2 +Jun 24 16:57:38.862: INFO: Created: latency-svc-mfg69 +Jun 24 16:57:38.880: INFO: Created: latency-svc-j9q9c +Jun 24 16:57:38.880: INFO: Got endpoints: latency-svc-hvgpb [343.062503ms] +Jun 24 16:57:38.894: INFO: Created: latency-svc-g7szk +Jun 24 16:57:38.909: INFO: Created: latency-svc-2k222 +Jun 24 16:57:38.918: INFO: Created: latency-svc-7krkx +Jun 24 16:57:38.929: INFO: Got endpoints: latency-svc-2wkqz [379.660147ms] +Jun 24 16:57:38.949: INFO: Created: latency-svc-r6fcs +Jun 24 16:57:38.980: INFO: Got endpoints: latency-svc-fx2jx [413.184654ms] +Jun 24 16:57:38.997: INFO: Created: latency-svc-g9l5h +Jun 24 16:57:39.029: INFO: Got endpoints: latency-svc-lc2s5 [451.172512ms] +Jun 24 16:57:39.042: INFO: Created: latency-svc-pgcll +Jun 24 16:57:39.080: INFO: Got endpoints: latency-svc-trx7t [496.41331ms] +Jun 24 16:57:39.094: INFO: Created: latency-svc-vj7b9 +Jun 24 16:57:39.129: INFO: Got endpoints: latency-svc-kttz9 [526.212342ms] +Jun 24 16:57:39.150: INFO: Created: latency-svc-l9pzz +Jun 24 16:57:39.179: INFO: Got endpoints: latency-svc-gf6hh [565.867351ms] +Jun 24 16:57:39.201: INFO: Created: latency-svc-rvb72 +Jun 24 16:57:39.229: INFO: Got endpoints: latency-svc-rm5v4 [613.046004ms] +Jun 24 16:57:39.242: INFO: Created: latency-svc-cr5xf +Jun 24 16:57:39.279: INFO: Got endpoints: latency-svc-9tzv6 [641.286082ms] +Jun 24 16:57:39.302: INFO: Created: latency-svc-d7zsp +Jun 24 16:57:39.329: INFO: Got endpoints: latency-svc-pst4q [679.038766ms] +Jun 24 16:57:39.348: INFO: Created: latency-svc-h2rjb +Jun 24 16:57:39.380: INFO: Got endpoints: latency-svc-jbvn2 [714.039867ms] +Jun 24 16:57:39.394: INFO: Created: latency-svc-scmkt +Jun 24 16:57:39.429: INFO: Got endpoints: latency-svc-mfg69 [746.069894ms] +Jun 24 16:57:39.443: INFO: Created: latency-svc-xmhsx +Jun 24 16:57:39.479: INFO: Got endpoints: latency-svc-j9q9c [727.420503ms] +Jun 24 16:57:39.495: INFO: Created: latency-svc-nx9zv +Jun 24 
16:57:39.528: INFO: Got endpoints: latency-svc-g7szk [748.725054ms] +Jun 24 16:57:39.544: INFO: Created: latency-svc-chpgs +Jun 24 16:57:39.581: INFO: Got endpoints: latency-svc-2k222 [745.816206ms] +Jun 24 16:57:39.595: INFO: Created: latency-svc-bl4b7 +Jun 24 16:57:39.637: INFO: Got endpoints: latency-svc-7krkx [756.778837ms] +Jun 24 16:57:39.650: INFO: Created: latency-svc-42xwg +Jun 24 16:57:39.679: INFO: Got endpoints: latency-svc-r6fcs [750.140357ms] +Jun 24 16:57:39.696: INFO: Created: latency-svc-nhxvb +Jun 24 16:57:39.729: INFO: Got endpoints: latency-svc-g9l5h [749.015552ms] +Jun 24 16:57:39.750: INFO: Created: latency-svc-gmh8s +Jun 24 16:57:39.781: INFO: Got endpoints: latency-svc-pgcll [751.883534ms] +Jun 24 16:57:39.797: INFO: Created: latency-svc-tgxgq +Jun 24 16:57:39.829: INFO: Got endpoints: latency-svc-vj7b9 [749.226226ms] +Jun 24 16:57:39.843: INFO: Created: latency-svc-8vss6 +Jun 24 16:57:39.879: INFO: Got endpoints: latency-svc-l9pzz [750.270623ms] +Jun 24 16:57:39.894: INFO: Created: latency-svc-q7ps6 +Jun 24 16:57:39.930: INFO: Got endpoints: latency-svc-rvb72 [750.253538ms] +Jun 24 16:57:39.943: INFO: Created: latency-svc-shgql +Jun 24 16:57:39.980: INFO: Got endpoints: latency-svc-cr5xf [750.850297ms] +Jun 24 16:57:39.993: INFO: Created: latency-svc-q8thr +Jun 24 16:57:40.029: INFO: Got endpoints: latency-svc-d7zsp [750.222736ms] +Jun 24 16:57:40.043: INFO: Created: latency-svc-n88pk +Jun 24 16:57:40.093: INFO: Got endpoints: latency-svc-h2rjb [763.91131ms] +Jun 24 16:57:40.109: INFO: Created: latency-svc-s6tb9 +Jun 24 16:57:40.133: INFO: Got endpoints: latency-svc-scmkt [752.970215ms] +Jun 24 16:57:40.146: INFO: Created: latency-svc-qbbpj +Jun 24 16:57:40.180: INFO: Got endpoints: latency-svc-xmhsx [750.873297ms] +Jun 24 16:57:40.207: INFO: Created: latency-svc-fp2lp +Jun 24 16:57:40.229: INFO: Got endpoints: latency-svc-nx9zv [749.8353ms] +Jun 24 16:57:40.242: INFO: Created: latency-svc-swv6q +Jun 24 16:57:40.280: INFO: Got endpoints: latency-svc-chpgs [751.551937ms] +Jun 24 16:57:40.293: INFO: Created: latency-svc-4n24v +Jun 24 16:57:40.330: INFO: Got endpoints: latency-svc-bl4b7 [749.116889ms] +Jun 24 16:57:40.343: INFO: Created: latency-svc-sghdp +Jun 24 16:57:40.379: INFO: Got endpoints: latency-svc-42xwg [742.49679ms] +Jun 24 16:57:40.393: INFO: Created: latency-svc-sp5j9 +Jun 24 16:57:40.432: INFO: Got endpoints: latency-svc-nhxvb [752.930575ms] +Jun 24 16:57:40.453: INFO: Created: latency-svc-bl8k9 +Jun 24 16:57:40.480: INFO: Got endpoints: latency-svc-gmh8s [750.676315ms] +Jun 24 16:57:40.491: INFO: Created: latency-svc-9f99n +Jun 24 16:57:40.529: INFO: Got endpoints: latency-svc-tgxgq [748.314298ms] +Jun 24 16:57:40.553: INFO: Created: latency-svc-v72dg +Jun 24 16:57:40.580: INFO: Got endpoints: latency-svc-8vss6 [750.751627ms] +Jun 24 16:57:40.595: INFO: Created: latency-svc-s76xs +Jun 24 16:57:40.630: INFO: Got endpoints: latency-svc-q7ps6 [750.029382ms] +Jun 24 16:57:40.642: INFO: Created: latency-svc-77cx7 +Jun 24 16:57:40.679: INFO: Got endpoints: latency-svc-shgql [749.229188ms] +Jun 24 16:57:40.695: INFO: Created: latency-svc-hrzvz +Jun 24 16:57:40.729: INFO: Got endpoints: latency-svc-q8thr [749.722621ms] +Jun 24 16:57:40.744: INFO: Created: latency-svc-tvqjg +Jun 24 16:57:40.779: INFO: Got endpoints: latency-svc-n88pk [749.348425ms] +Jun 24 16:57:40.796: INFO: Created: latency-svc-wshlj +Jun 24 16:57:40.829: INFO: Got endpoints: latency-svc-s6tb9 [736.281292ms] +Jun 24 16:57:40.842: INFO: Created: latency-svc-gwj68 +Jun 24 16:57:40.879: INFO: 
Got endpoints: latency-svc-qbbpj [745.620729ms] +Jun 24 16:57:40.893: INFO: Created: latency-svc-8kc26 +Jun 24 16:57:40.929: INFO: Got endpoints: latency-svc-fp2lp [749.010046ms] +Jun 24 16:57:40.945: INFO: Created: latency-svc-cr45l +Jun 24 16:57:40.987: INFO: Got endpoints: latency-svc-swv6q [757.582475ms] +Jun 24 16:57:41.000: INFO: Created: latency-svc-rt4lm +Jun 24 16:57:41.029: INFO: Got endpoints: latency-svc-4n24v [748.942502ms] +Jun 24 16:57:41.041: INFO: Created: latency-svc-kvsp5 +Jun 24 16:57:41.080: INFO: Got endpoints: latency-svc-sghdp [749.120621ms] +Jun 24 16:57:41.108: INFO: Created: latency-svc-4jfd2 +Jun 24 16:57:41.131: INFO: Got endpoints: latency-svc-sp5j9 [751.806126ms] +Jun 24 16:57:41.144: INFO: Created: latency-svc-jx6c2 +Jun 24 16:57:41.179: INFO: Got endpoints: latency-svc-bl8k9 [747.019481ms] +Jun 24 16:57:41.195: INFO: Created: latency-svc-kdptp +Jun 24 16:57:41.229: INFO: Got endpoints: latency-svc-9f99n [749.186715ms] +Jun 24 16:57:41.245: INFO: Created: latency-svc-94zwf +Jun 24 16:57:41.280: INFO: Got endpoints: latency-svc-v72dg [750.671725ms] +Jun 24 16:57:41.292: INFO: Created: latency-svc-zzrth +Jun 24 16:57:41.329: INFO: Got endpoints: latency-svc-s76xs [749.149287ms] +Jun 24 16:57:41.343: INFO: Created: latency-svc-52f8z +Jun 24 16:57:41.379: INFO: Got endpoints: latency-svc-77cx7 [748.932841ms] +Jun 24 16:57:41.396: INFO: Created: latency-svc-285cr +Jun 24 16:57:41.430: INFO: Got endpoints: latency-svc-hrzvz [750.632043ms] +Jun 24 16:57:41.443: INFO: Created: latency-svc-ngwpx +Jun 24 16:57:41.480: INFO: Got endpoints: latency-svc-tvqjg [750.105102ms] +Jun 24 16:57:41.492: INFO: Created: latency-svc-dw9w5 +Jun 24 16:57:41.529: INFO: Got endpoints: latency-svc-wshlj [750.065484ms] +Jun 24 16:57:41.549: INFO: Created: latency-svc-dzjf5 +Jun 24 16:57:41.579: INFO: Got endpoints: latency-svc-gwj68 [750.091557ms] +Jun 24 16:57:41.593: INFO: Created: latency-svc-nlcr9 +Jun 24 16:57:41.629: INFO: Got endpoints: latency-svc-8kc26 [749.5169ms] +Jun 24 16:57:41.656: INFO: Created: latency-svc-k58nd +Jun 24 16:57:41.679: INFO: Got endpoints: latency-svc-cr45l [750.099475ms] +Jun 24 16:57:41.694: INFO: Created: latency-svc-ftlcw +Jun 24 16:57:41.729: INFO: Got endpoints: latency-svc-rt4lm [742.149023ms] +Jun 24 16:57:41.743: INFO: Created: latency-svc-2m2g2 +Jun 24 16:57:41.779: INFO: Got endpoints: latency-svc-kvsp5 [749.676379ms] +Jun 24 16:57:41.792: INFO: Created: latency-svc-s4clv +Jun 24 16:57:41.828: INFO: Got endpoints: latency-svc-4jfd2 [748.694032ms] +Jun 24 16:57:41.841: INFO: Created: latency-svc-hzxxg +Jun 24 16:57:41.880: INFO: Got endpoints: latency-svc-jx6c2 [748.131734ms] +Jun 24 16:57:41.892: INFO: Created: latency-svc-xzhbc +Jun 24 16:57:41.929: INFO: Got endpoints: latency-svc-kdptp [749.768798ms] +Jun 24 16:57:41.943: INFO: Created: latency-svc-qk6mk +Jun 24 16:57:41.986: INFO: Got endpoints: latency-svc-94zwf [757.305359ms] +Jun 24 16:57:41.998: INFO: Created: latency-svc-4n86w +Jun 24 16:57:42.029: INFO: Got endpoints: latency-svc-zzrth [749.708045ms] +Jun 24 16:57:42.043: INFO: Created: latency-svc-jtjsc +Jun 24 16:57:42.081: INFO: Got endpoints: latency-svc-52f8z [751.126508ms] +Jun 24 16:57:42.103: INFO: Created: latency-svc-vffth +Jun 24 16:57:42.130: INFO: Got endpoints: latency-svc-285cr [751.12371ms] +Jun 24 16:57:42.142: INFO: Created: latency-svc-vtmbw +Jun 24 16:57:42.179: INFO: Got endpoints: latency-svc-ngwpx [749.771161ms] +Jun 24 16:57:42.193: INFO: Created: latency-svc-fnbk2 +Jun 24 16:57:42.230: INFO: Got endpoints: 
latency-svc-dw9w5 [750.103575ms] +Jun 24 16:57:42.243: INFO: Created: latency-svc-9c6fc +Jun 24 16:57:42.279: INFO: Got endpoints: latency-svc-dzjf5 [750.412896ms] +Jun 24 16:57:42.292: INFO: Created: latency-svc-6bd5c +Jun 24 16:57:42.329: INFO: Got endpoints: latency-svc-nlcr9 [749.839669ms] +Jun 24 16:57:42.342: INFO: Created: latency-svc-vz7lt +Jun 24 16:57:42.380: INFO: Got endpoints: latency-svc-k58nd [750.426042ms] +Jun 24 16:57:42.392: INFO: Created: latency-svc-zrd49 +Jun 24 16:57:42.437: INFO: Got endpoints: latency-svc-ftlcw [758.067853ms] +Jun 24 16:57:42.455: INFO: Created: latency-svc-s8vq2 +Jun 24 16:57:42.479: INFO: Got endpoints: latency-svc-2m2g2 [750.200179ms] +Jun 24 16:57:42.492: INFO: Created: latency-svc-ktqnq +Jun 24 16:57:42.529: INFO: Got endpoints: latency-svc-s4clv [750.409649ms] +Jun 24 16:57:42.550: INFO: Created: latency-svc-pm9br +Jun 24 16:57:42.579: INFO: Got endpoints: latency-svc-hzxxg [750.883649ms] +Jun 24 16:57:42.596: INFO: Created: latency-svc-gzgbw +Jun 24 16:57:42.629: INFO: Got endpoints: latency-svc-xzhbc [749.459691ms] +Jun 24 16:57:42.642: INFO: Created: latency-svc-j95bv +Jun 24 16:57:42.679: INFO: Got endpoints: latency-svc-qk6mk [750.352013ms] +Jun 24 16:57:42.693: INFO: Created: latency-svc-mzwn2 +Jun 24 16:57:42.729: INFO: Got endpoints: latency-svc-4n86w [743.098154ms] +Jun 24 16:57:42.746: INFO: Created: latency-svc-bsvcf +Jun 24 16:57:42.779: INFO: Got endpoints: latency-svc-jtjsc [749.229857ms] +Jun 24 16:57:42.794: INFO: Created: latency-svc-cpnf6 +Jun 24 16:57:42.829: INFO: Got endpoints: latency-svc-vffth [748.466563ms] +Jun 24 16:57:42.846: INFO: Created: latency-svc-h6c6l +Jun 24 16:57:42.879: INFO: Got endpoints: latency-svc-vtmbw [748.720645ms] +Jun 24 16:57:42.896: INFO: Created: latency-svc-ffw8t +Jun 24 16:57:42.930: INFO: Got endpoints: latency-svc-fnbk2 [750.541098ms] +Jun 24 16:57:42.944: INFO: Created: latency-svc-x4kmr +Jun 24 16:57:42.989: INFO: Got endpoints: latency-svc-9c6fc [758.777454ms] +Jun 24 16:57:43.003: INFO: Created: latency-svc-8d8rr +Jun 24 16:57:43.031: INFO: Got endpoints: latency-svc-6bd5c [751.439771ms] +Jun 24 16:57:43.044: INFO: Created: latency-svc-cnh4r +Jun 24 16:57:43.079: INFO: Got endpoints: latency-svc-vz7lt [749.903831ms] +Jun 24 16:57:43.100: INFO: Created: latency-svc-tphk8 +Jun 24 16:57:43.129: INFO: Got endpoints: latency-svc-zrd49 [749.904335ms] +Jun 24 16:57:43.143: INFO: Created: latency-svc-7bvpl +Jun 24 16:57:43.179: INFO: Got endpoints: latency-svc-s8vq2 [741.951487ms] +Jun 24 16:57:43.193: INFO: Created: latency-svc-j74qw +Jun 24 16:57:43.228: INFO: Got endpoints: latency-svc-ktqnq [749.074002ms] +Jun 24 16:57:43.245: INFO: Created: latency-svc-2lk4l +Jun 24 16:57:43.279: INFO: Got endpoints: latency-svc-pm9br [749.726883ms] +Jun 24 16:57:43.298: INFO: Created: latency-svc-cs89s +Jun 24 16:57:43.331: INFO: Got endpoints: latency-svc-gzgbw [751.228794ms] +Jun 24 16:57:43.346: INFO: Created: latency-svc-dzlj2 +Jun 24 16:57:43.379: INFO: Got endpoints: latency-svc-j95bv [749.678604ms] +Jun 24 16:57:43.397: INFO: Created: latency-svc-r5lww +Jun 24 16:57:43.429: INFO: Got endpoints: latency-svc-mzwn2 [749.252642ms] +Jun 24 16:57:43.450: INFO: Created: latency-svc-qkqmj +Jun 24 16:57:43.479: INFO: Got endpoints: latency-svc-bsvcf [750.016908ms] +Jun 24 16:57:43.493: INFO: Created: latency-svc-2xv55 +Jun 24 16:57:43.528: INFO: Got endpoints: latency-svc-cpnf6 [749.600661ms] +Jun 24 16:57:43.553: INFO: Created: latency-svc-snvpg +Jun 24 16:57:43.580: INFO: Got endpoints: latency-svc-h6c6l 
[750.236466ms] +Jun 24 16:57:43.600: INFO: Created: latency-svc-5k7v5 +Jun 24 16:57:43.630: INFO: Got endpoints: latency-svc-ffw8t [750.997699ms] +Jun 24 16:57:43.643: INFO: Created: latency-svc-w96fb +Jun 24 16:57:43.679: INFO: Got endpoints: latency-svc-x4kmr [748.989779ms] +Jun 24 16:57:43.694: INFO: Created: latency-svc-n2md7 +Jun 24 16:57:43.729: INFO: Got endpoints: latency-svc-8d8rr [740.275446ms] +Jun 24 16:57:43.743: INFO: Created: latency-svc-fzbrl +Jun 24 16:57:43.779: INFO: Got endpoints: latency-svc-cnh4r [748.359243ms] +Jun 24 16:57:43.803: INFO: Created: latency-svc-zhvv8 +Jun 24 16:57:43.829: INFO: Got endpoints: latency-svc-tphk8 [749.510903ms] +Jun 24 16:57:43.843: INFO: Created: latency-svc-pdnr6 +Jun 24 16:57:43.888: INFO: Got endpoints: latency-svc-7bvpl [758.262435ms] +Jun 24 16:57:43.902: INFO: Created: latency-svc-xd7wx +Jun 24 16:57:43.929: INFO: Got endpoints: latency-svc-j74qw [749.554574ms] +Jun 24 16:57:43.950: INFO: Created: latency-svc-nt6hf +Jun 24 16:57:43.980: INFO: Got endpoints: latency-svc-2lk4l [751.487249ms] +Jun 24 16:57:44.007: INFO: Created: latency-svc-h95m8 +Jun 24 16:57:44.030: INFO: Got endpoints: latency-svc-cs89s [750.645994ms] +Jun 24 16:57:44.042: INFO: Created: latency-svc-mhvrw +Jun 24 16:57:44.079: INFO: Got endpoints: latency-svc-dzlj2 [748.293365ms] +Jun 24 16:57:44.092: INFO: Created: latency-svc-gf7wz +Jun 24 16:57:44.130: INFO: Got endpoints: latency-svc-r5lww [751.275264ms] +Jun 24 16:57:44.143: INFO: Created: latency-svc-xqttr +Jun 24 16:57:44.180: INFO: Got endpoints: latency-svc-qkqmj [750.87391ms] +Jun 24 16:57:44.193: INFO: Created: latency-svc-dzc6m +Jun 24 16:57:44.232: INFO: Got endpoints: latency-svc-2xv55 [752.454319ms] +Jun 24 16:57:44.247: INFO: Created: latency-svc-5bnt7 +Jun 24 16:57:44.279: INFO: Got endpoints: latency-svc-snvpg [750.774491ms] +Jun 24 16:57:44.297: INFO: Created: latency-svc-2kwj9 +Jun 24 16:57:44.329: INFO: Got endpoints: latency-svc-5k7v5 [749.427617ms] +Jun 24 16:57:44.356: INFO: Created: latency-svc-m7snb +Jun 24 16:57:44.379: INFO: Got endpoints: latency-svc-w96fb [749.551917ms] +Jun 24 16:57:44.394: INFO: Created: latency-svc-x96bj +Jun 24 16:57:44.431: INFO: Got endpoints: latency-svc-n2md7 [751.11685ms] +Jun 24 16:57:44.459: INFO: Created: latency-svc-4b79s +Jun 24 16:57:44.479: INFO: Got endpoints: latency-svc-fzbrl [749.74582ms] +Jun 24 16:57:44.492: INFO: Created: latency-svc-nl25p +Jun 24 16:57:44.529: INFO: Got endpoints: latency-svc-zhvv8 [750.033539ms] +Jun 24 16:57:44.542: INFO: Created: latency-svc-9hd5s +Jun 24 16:57:44.579: INFO: Got endpoints: latency-svc-pdnr6 [749.877065ms] +Jun 24 16:57:44.592: INFO: Created: latency-svc-5vsng +Jun 24 16:57:44.630: INFO: Got endpoints: latency-svc-xd7wx [741.767387ms] +Jun 24 16:57:44.647: INFO: Created: latency-svc-hjwjv +Jun 24 16:57:44.684: INFO: Got endpoints: latency-svc-nt6hf [754.528312ms] +Jun 24 16:57:44.696: INFO: Created: latency-svc-hw92h +Jun 24 16:57:44.729: INFO: Got endpoints: latency-svc-h95m8 [749.114467ms] +Jun 24 16:57:44.752: INFO: Created: latency-svc-vpx6q +Jun 24 16:57:44.780: INFO: Got endpoints: latency-svc-mhvrw [749.676026ms] +Jun 24 16:57:44.812: INFO: Created: latency-svc-x2kvr +Jun 24 16:57:44.830: INFO: Got endpoints: latency-svc-gf7wz [750.769783ms] +Jun 24 16:57:44.849: INFO: Created: latency-svc-n9ng6 +Jun 24 16:57:44.884: INFO: Got endpoints: latency-svc-xqttr [753.694126ms] +Jun 24 16:57:44.898: INFO: Created: latency-svc-c7fkn +Jun 24 16:57:44.928: INFO: Got endpoints: latency-svc-dzc6m [748.634053ms] +Jun 
24 16:57:44.941: INFO: Created: latency-svc-l8d6h +Jun 24 16:57:44.979: INFO: Got endpoints: latency-svc-5bnt7 [747.109232ms] +Jun 24 16:57:44.992: INFO: Created: latency-svc-5v55z +Jun 24 16:57:45.029: INFO: Got endpoints: latency-svc-2kwj9 [749.858671ms] +Jun 24 16:57:45.053: INFO: Created: latency-svc-8qngg +Jun 24 16:57:45.080: INFO: Got endpoints: latency-svc-m7snb [750.375744ms] +Jun 24 16:57:45.098: INFO: Created: latency-svc-n562m +Jun 24 16:57:45.141: INFO: Got endpoints: latency-svc-x96bj [761.724743ms] +Jun 24 16:57:45.156: INFO: Created: latency-svc-2gkr9 +Jun 24 16:57:45.180: INFO: Got endpoints: latency-svc-4b79s [749.34548ms] +Jun 24 16:57:45.193: INFO: Created: latency-svc-sdpgd +Jun 24 16:57:45.229: INFO: Got endpoints: latency-svc-nl25p [750.182557ms] +Jun 24 16:57:45.262: INFO: Created: latency-svc-zlhsd +Jun 24 16:57:45.279: INFO: Got endpoints: latency-svc-9hd5s [749.583535ms] +Jun 24 16:57:45.292: INFO: Created: latency-svc-cmhjf +Jun 24 16:57:45.330: INFO: Got endpoints: latency-svc-5vsng [750.647073ms] +Jun 24 16:57:45.345: INFO: Created: latency-svc-rjq4p +Jun 24 16:57:45.380: INFO: Got endpoints: latency-svc-hjwjv [749.896091ms] +Jun 24 16:57:45.392: INFO: Created: latency-svc-j92kg +Jun 24 16:57:45.429: INFO: Got endpoints: latency-svc-hw92h [745.720692ms] +Jun 24 16:57:45.452: INFO: Created: latency-svc-dcls4 +Jun 24 16:57:45.501: INFO: Got endpoints: latency-svc-vpx6q [772.026806ms] +Jun 24 16:57:45.514: INFO: Created: latency-svc-jjbh6 +Jun 24 16:57:45.529: INFO: Got endpoints: latency-svc-x2kvr [749.384647ms] +Jun 24 16:57:45.543: INFO: Created: latency-svc-75g82 +Jun 24 16:57:45.580: INFO: Got endpoints: latency-svc-n9ng6 [749.705375ms] +Jun 24 16:57:45.596: INFO: Created: latency-svc-dz7rg +Jun 24 16:57:45.629: INFO: Got endpoints: latency-svc-c7fkn [744.921014ms] +Jun 24 16:57:45.647: INFO: Created: latency-svc-86nd9 +Jun 24 16:57:45.679: INFO: Got endpoints: latency-svc-l8d6h [750.748403ms] +Jun 24 16:57:45.699: INFO: Created: latency-svc-mvnwt +Jun 24 16:57:45.737: INFO: Got endpoints: latency-svc-5v55z [757.576531ms] +Jun 24 16:57:45.755: INFO: Created: latency-svc-qzdbm +Jun 24 16:57:45.779: INFO: Got endpoints: latency-svc-8qngg [749.991848ms] +Jun 24 16:57:45.794: INFO: Created: latency-svc-9t8wg +Jun 24 16:57:45.830: INFO: Got endpoints: latency-svc-n562m [749.957333ms] +Jun 24 16:57:45.855: INFO: Created: latency-svc-frm24 +Jun 24 16:57:45.879: INFO: Got endpoints: latency-svc-2gkr9 [737.363721ms] +Jun 24 16:57:45.900: INFO: Created: latency-svc-qlffp +Jun 24 16:57:45.929: INFO: Got endpoints: latency-svc-sdpgd [748.562776ms] +Jun 24 16:57:45.942: INFO: Created: latency-svc-gtvrg +Jun 24 16:57:45.979: INFO: Got endpoints: latency-svc-zlhsd [749.958903ms] +Jun 24 16:57:46.029: INFO: Got endpoints: latency-svc-cmhjf [750.411965ms] +Jun 24 16:57:46.088: INFO: Got endpoints: latency-svc-rjq4p [758.05995ms] +Jun 24 16:57:46.129: INFO: Got endpoints: latency-svc-j92kg [749.845469ms] +Jun 24 16:57:46.180: INFO: Got endpoints: latency-svc-dcls4 [750.556829ms] +Jun 24 16:57:46.230: INFO: Got endpoints: latency-svc-jjbh6 [728.844338ms] +Jun 24 16:57:46.280: INFO: Got endpoints: latency-svc-75g82 [751.436441ms] +Jun 24 16:57:46.329: INFO: Got endpoints: latency-svc-dz7rg [749.200796ms] +Jun 24 16:57:46.380: INFO: Got endpoints: latency-svc-86nd9 [750.722119ms] +Jun 24 16:57:46.429: INFO: Got endpoints: latency-svc-mvnwt [750.20386ms] +Jun 24 16:57:46.480: INFO: Got endpoints: latency-svc-qzdbm [743.103265ms] +Jun 24 16:57:46.530: INFO: Got endpoints: 
latency-svc-9t8wg [750.770515ms] +Jun 24 16:57:46.579: INFO: Got endpoints: latency-svc-frm24 [749.304703ms] +Jun 24 16:57:46.630: INFO: Got endpoints: latency-svc-qlffp [751.681845ms] +Jun 24 16:57:46.679: INFO: Got endpoints: latency-svc-gtvrg [750.56611ms] +Jun 24 16:57:46.681: INFO: Latencies: [27.037222ms 36.167015ms 40.71996ms 51.229851ms 69.609021ms 79.68785ms 90.479407ms 114.143197ms 122.878474ms 151.789464ms 158.102837ms 171.683707ms 177.159957ms 178.021778ms 183.018394ms 188.737593ms 190.861551ms 191.995643ms 192.890102ms 193.55988ms 195.118472ms 196.998771ms 197.30379ms 198.803246ms 200.524609ms 200.913751ms 201.57824ms 203.052252ms 204.062494ms 204.888362ms 205.287399ms 210.044688ms 210.38973ms 210.973223ms 215.035527ms 220.316373ms 223.014835ms 223.483128ms 224.556333ms 230.101095ms 256.554382ms 262.504549ms 312.723174ms 343.062503ms 379.660147ms 413.184654ms 451.172512ms 496.41331ms 526.212342ms 565.867351ms 613.046004ms 641.286082ms 679.038766ms 714.039867ms 727.420503ms 728.844338ms 736.281292ms 737.363721ms 740.275446ms 741.767387ms 741.951487ms 742.149023ms 742.49679ms 743.098154ms 743.103265ms 744.921014ms 745.620729ms 745.720692ms 745.816206ms 746.069894ms 747.019481ms 747.109232ms 748.131734ms 748.293365ms 748.314298ms 748.359243ms 748.466563ms 748.562776ms 748.634053ms 748.694032ms 748.720645ms 748.725054ms 748.932841ms 748.942502ms 748.989779ms 749.010046ms 749.015552ms 749.074002ms 749.114467ms 749.116889ms 749.120621ms 749.149287ms 749.186715ms 749.200796ms 749.226226ms 749.229188ms 749.229857ms 749.252642ms 749.304703ms 749.34548ms 749.348425ms 749.384647ms 749.427617ms 749.459691ms 749.510903ms 749.5169ms 749.551917ms 749.554574ms 749.583535ms 749.600661ms 749.676026ms 749.676379ms 749.678604ms 749.705375ms 749.708045ms 749.722621ms 749.726883ms 749.74582ms 749.768798ms 749.771161ms 749.8353ms 749.839669ms 749.845469ms 749.858671ms 749.877065ms 749.896091ms 749.903831ms 749.904335ms 749.957333ms 749.958903ms 749.991848ms 750.016908ms 750.029382ms 750.033539ms 750.065484ms 750.091557ms 750.099475ms 750.103575ms 750.105102ms 750.140357ms 750.182557ms 750.200179ms 750.20386ms 750.222736ms 750.236466ms 750.253538ms 750.270623ms 750.352013ms 750.375744ms 750.409649ms 750.411965ms 750.412896ms 750.426042ms 750.541098ms 750.556829ms 750.56611ms 750.632043ms 750.645994ms 750.647073ms 750.671725ms 750.676315ms 750.722119ms 750.748403ms 750.751627ms 750.769783ms 750.770515ms 750.774491ms 750.850297ms 750.873297ms 750.87391ms 750.883649ms 750.997699ms 751.11685ms 751.12371ms 751.126508ms 751.228794ms 751.275264ms 751.436441ms 751.439771ms 751.487249ms 751.551937ms 751.681845ms 751.806126ms 751.883534ms 752.454319ms 752.930575ms 752.970215ms 753.694126ms 754.528312ms 756.778837ms 757.305359ms 757.576531ms 757.582475ms 758.05995ms 758.067853ms 758.262435ms 758.777454ms 761.724743ms 763.91131ms 772.026806ms] +Jun 24 16:57:46.681: INFO: 50 %ile: 749.348425ms +Jun 24 16:57:46.682: INFO: 90 %ile: 751.551937ms +Jun 24 16:57:46.682: INFO: 99 %ile: 763.91131ms +Jun 24 16:57:46.682: INFO: Total sample count: 200 +[AfterEach] [sig-network] Service endpoints latency + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:57:46.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svc-latency-2752" for this suite. 
+Jun 24 16:58:00.697: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:58:00.778: INFO: namespace svc-latency-2752 deletion completed in 14.0921989s + +• [SLOW TEST:24.855 seconds] +[sig-network] Service endpoints latency +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:22 + should not be very high [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSS +------------------------------ +[k8s.io] InitContainer [NodeConformance] + should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:58:00.778: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename init-container +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43 +[It] should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: creating the pod +Jun 24 16:58:00.815: INFO: PodSpec: initContainers in spec.initContainers +[AfterEach] [k8s.io] InitContainer [NodeConformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:58:05.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "init-container-814" for this suite. 
+Jun 24 16:58:11.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:58:11.338: INFO: namespace init-container-814 deletion completed in 6.107514538s + +• [SLOW TEST:10.560 seconds] +[k8s.io] InitContainer [NodeConformance] +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should invoke init containers on a RestartNever pod [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSS +------------------------------ +[sig-auth] ServiceAccounts + should mount an API token into pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:58:11.338: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename svcaccounts +STEP: Waiting for a default service account to be provisioned in namespace +[It] should mount an API token into pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: getting the auto-created API token +STEP: reading a file in the container +Jun 24 16:58:13.926: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3030 pod-service-account-3ffd2768-96a1-11e9-8bcb-526dc0a539dd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' +STEP: reading a file in the container +Jun 24 16:58:14.191: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3030 pod-service-account-3ffd2768-96a1-11e9-8bcb-526dc0a539dd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' +STEP: reading a file in the container +Jun 24 16:58:14.447: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3030 pod-service-account-3ffd2768-96a1-11e9-8bcb-526dc0a539dd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' +[AfterEach] [sig-auth] ServiceAccounts + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:58:14.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "svcaccounts-3030" for this suite. 
+Jun 24 16:58:20.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:58:20.821: INFO: namespace svcaccounts-3030 deletion completed in 6.103401274s + +• [SLOW TEST:9.483 seconds] +[sig-auth] ServiceAccounts +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:22 + should mount an API token into pods [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSS +------------------------------ +[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:58:20.822: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename container-lifecycle-hook +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:61 +STEP: create the container to handle the HTTPGet hook request. +[It] should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: create the pod with lifecycle hook +STEP: check poststart hook +STEP: delete the pod with lifecycle hook +Jun 24 16:58:26.940: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 24 16:58:26.944: INFO: Pod pod-with-poststart-http-hook still exists +Jun 24 16:58:28.945: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 24 16:58:28.949: INFO: Pod pod-with-poststart-http-hook still exists +Jun 24 16:58:30.945: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 24 16:58:30.949: INFO: Pod pod-with-poststart-http-hook still exists +Jun 24 16:58:32.945: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 24 16:58:32.948: INFO: Pod pod-with-poststart-http-hook still exists +Jun 24 16:58:34.945: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 24 16:58:34.950: INFO: Pod pod-with-poststart-http-hook still exists +Jun 24 16:58:36.945: INFO: Waiting for pod pod-with-poststart-http-hook to disappear +Jun 24 16:58:36.948: INFO: Pod pod-with-poststart-http-hook no longer exists +[AfterEach] [k8s.io] Container Lifecycle Hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:58:36.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "container-lifecycle-hook-8508" for this suite. 
+Jun 24 16:58:58.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:58:59.055: INFO: namespace container-lifecycle-hook-8508 deletion completed in 22.102036773s + +• [SLOW TEST:38.233 seconds] +[k8s.io] Container Lifecycle Hook +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + when create a pod with lifecycle hook + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:40 + should execute poststart http hook properly [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSS +------------------------------ +[k8s.io] Pods + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:58:59.056: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename pods +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:135 +[It] should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 16:59:01.149: INFO: Waiting up to 5m0s for pod "client-envvars-5d567d36-96a1-11e9-8bcb-526dc0a539dd" in namespace "pods-5372" to be "success or failure" +Jun 24 16:59:01.154: INFO: Pod "client-envvars-5d567d36-96a1-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.703563ms +Jun 24 16:59:03.158: INFO: Pod "client-envvars-5d567d36-96a1-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009554915s +STEP: Saw pod success +Jun 24 16:59:03.158: INFO: Pod "client-envvars-5d567d36-96a1-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:59:03.162: INFO: Trying to get logs from node minion pod client-envvars-5d567d36-96a1-11e9-8bcb-526dc0a539dd container env3cont: +STEP: delete the pod +Jun 24 16:59:03.182: INFO: Waiting for pod client-envvars-5d567d36-96a1-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:59:03.188: INFO: Pod client-envvars-5d567d36-96a1-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [k8s.io] Pods + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:59:03.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "pods-5372" for this suite. 
+Jun 24 16:59:49.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:59:49.293: INFO: namespace pods-5372 deletion completed in 46.095106244s + +• [SLOW TEST:50.237 seconds] +[k8s.io] Pods +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should contain environment variables for services [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +S +------------------------------ +[sig-storage] Downward API volume + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:59:49.293: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename downward-api +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 +[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a pod to test downward API volume plugin +Jun 24 16:59:49.343: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a0fb947-96a1-11e9-8bcb-526dc0a539dd" in namespace "downward-api-584" to be "success or failure" +Jun 24 16:59:49.347: INFO: Pod "downwardapi-volume-7a0fb947-96a1-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.744816ms +Jun 24 16:59:51.353: INFO: Pod "downwardapi-volume-7a0fb947-96a1-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009249408s +STEP: Saw pod success +Jun 24 16:59:51.353: INFO: Pod "downwardapi-volume-7a0fb947-96a1-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure" +Jun 24 16:59:51.356: INFO: Trying to get logs from node minion pod downwardapi-volume-7a0fb947-96a1-11e9-8bcb-526dc0a539dd container client-container: +STEP: delete the pod +Jun 24 16:59:51.378: INFO: Waiting for pod downwardapi-volume-7a0fb947-96a1-11e9-8bcb-526dc0a539dd to disappear +Jun 24 16:59:51.381: INFO: Pod downwardapi-volume-7a0fb947-96a1-11e9-8bcb-526dc0a539dd no longer exists +[AfterEach] [sig-storage] Downward API volume + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 16:59:51.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "downward-api-584" for this suite. 
+Jun 24 16:59:57.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 16:59:57.483: INFO: namespace downward-api-584 deletion completed in 6.09824452s + +• [SLOW TEST:8.190 seconds] +[sig-storage] Downward API volume +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 + should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSS +------------------------------ +[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 16:59:57.485: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename statefulset +STEP: Waiting for a default service account to be provisioned in namespace +[BeforeEach] [sig-apps] StatefulSet + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59 +[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74 +STEP: Creating service test in namespace statefulset-1810 +[It] should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +STEP: Creating a new StatefulSet +Jun 24 16:59:57.530: INFO: Found 0 stateful pods, waiting for 3 +Jun 24 17:00:07.535: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true +Jun 24 17:00:07.535: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true +Jun 24 17:00:07.535: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true +Jun 24 17:00:07.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1810 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 24 17:00:07.875: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 24 17:00:07.875: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 24 17:00:07.875: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine +Jun 24 17:00:17.909: INFO: Updating stateful set ss2 +STEP: Creating a new revision +STEP: Updating Pods in reverse ordinal order +Jun 24 17:00:27.932: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1810 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 17:00:28.208: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 24 17:00:28.208: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 24 17:00:28.208: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 24 17:00:38.230: INFO: Waiting for StatefulSet statefulset-1810/ss2 to complete update +Jun 24 17:00:38.230: INFO: Waiting for Pod statefulset-1810/ss2-0 to have revision ss2-c79899b9 update revision ss2-787997d666 +Jun 24 17:00:48.239: INFO: Waiting for StatefulSet statefulset-1810/ss2 to complete update +STEP: Rolling back to a previous revision +Jun 24 17:00:58.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1810 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' +Jun 24 17:00:58.512: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n" +Jun 24 17:00:58.512: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" +Jun 24 17:00:58.512: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' + +Jun 24 17:01:08.546: INFO: Updating stateful set ss2 +STEP: Rolling back update in reverse ordinal order +Jun 24 17:01:18.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1810 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' +Jun 24 17:01:18.839: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n" +Jun 24 17:01:18.839: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" +Jun 24 17:01:18.839: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' + +Jun 24 17:01:18.863: INFO: Waiting for StatefulSet statefulset-1810/ss2 to complete update +Jun 24 17:01:18.863: INFO: Waiting for Pod statefulset-1810/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9 +Jun 24 17:01:18.863: INFO: Waiting for Pod statefulset-1810/ss2-1 to have revision ss2-787997d666 update revision ss2-c79899b9 +Jun 24 17:01:18.863: INFO: Waiting for Pod statefulset-1810/ss2-2 to have revision ss2-787997d666 update revision ss2-c79899b9 +Jun 24 17:01:28.871: INFO: Waiting for StatefulSet statefulset-1810/ss2 to complete update +Jun 24 17:01:28.871: INFO: Waiting for Pod statefulset-1810/ss2-0 to have revision ss2-787997d666 update revision ss2-c79899b9 +Jun 24 17:01:28.871: INFO: Waiting for Pod statefulset-1810/ss2-1 to have revision ss2-787997d666 update revision ss2-c79899b9 +[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85 +Jun 24 17:01:38.872: INFO: Deleting all statefulset in ns statefulset-1810 +Jun 24 17:01:38.875: INFO: Scaling statefulset ss2 to 0 +Jun 24 17:01:58.890: INFO: Waiting for statefulset status.replicas updated to 0 +Jun 24 17:01:58.894: INFO: Deleting statefulset ss2 +[AfterEach] [sig-apps] StatefulSet + 
/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 +Jun 24 17:01:58.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready +STEP: Destroying namespace "statefulset-1810" for this suite. +Jun 24 17:02:04.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered +Jun 24 17:02:05.011: INFO: namespace statefulset-1810 deletion completed in 6.101060136s + +• [SLOW TEST:127.526 seconds] +[sig-apps] StatefulSet +/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22 + [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687 + should perform rolling updates and roll backs of template modifications [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +------------------------------ +SSSSSSSSSSSSSSSS +------------------------------ +[sig-network] Proxy version v1 + should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +[BeforeEach] version v1 + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149 +STEP: Creating a kubernetes client +Jun 24 17:02:05.012: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415 +STEP: Building a namespace api object, basename proxy +STEP: Waiting for a default service account to be provisioned in namespace +[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] + /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 +Jun 24 17:02:05.067: INFO: (0) /api/v1/nodes/minion:10250/proxy/logs/:
+apt/
+auth.log
+btmp
+[... identical /var/log directory listing (apt/, auth.log, btmp, ...) repeated for each of the 20 proxied log requests ...]
+>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[BeforeEach] [k8s.io] Kubectl label
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1108
+STEP: creating the pod
+Jun 24 17:02:11.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-2124'
+Jun 24 17:02:12.106: INFO: stderr: ""
+Jun 24 17:02:12.106: INFO: stdout: "pod/pause created\n"
+Jun 24 17:02:12.106: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
+Jun 24 17:02:12.106: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2124" to be "running and ready"
+Jun 24 17:02:12.110: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063379ms
+Jun 24 17:02:14.114: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008031159s
+Jun 24 17:02:16.118: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.011784712s
+Jun 24 17:02:16.118: INFO: Pod "pause" satisfied condition "running and ready"
+Jun 24 17:02:16.118: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
+[It] should update the label on a resource  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: adding the label testing-label with value testing-label-value to a pod
+Jun 24 17:02:16.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 label pods pause testing-label=testing-label-value --namespace=kubectl-2124'
+Jun 24 17:02:16.218: INFO: stderr: ""
+Jun 24 17:02:16.218: INFO: stdout: "pod/pause labeled\n"
+STEP: verifying the pod has the label testing-label with the value testing-label-value
+Jun 24 17:02:16.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pod pause -L testing-label --namespace=kubectl-2124'
+Jun 24 17:02:16.308: INFO: stderr: ""
+Jun 24 17:02:16.308: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
+STEP: removing the label testing-label of a pod
+Jun 24 17:02:16.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 label pods pause testing-label- --namespace=kubectl-2124'
+Jun 24 17:02:16.421: INFO: stderr: ""
+Jun 24 17:02:16.421: INFO: stdout: "pod/pause labeled\n"
+STEP: verifying the pod doesn't have the label testing-label
+Jun 24 17:02:16.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pod pause -L testing-label --namespace=kubectl-2124'
+Jun 24 17:02:16.518: INFO: stderr: ""
+Jun 24 17:02:16.518: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
+[AfterEach] [k8s.io] Kubectl label
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1115
+STEP: using delete to clean up resources
+Jun 24 17:02:16.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 delete --grace-period=0 --force -f - --namespace=kubectl-2124'
+Jun 24 17:02:16.635: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
+Jun 24 17:02:16.635: INFO: stdout: "pod \"pause\" force deleted\n"
+Jun 24 17:02:16.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get rc,svc -l name=pause --no-headers --namespace=kubectl-2124'
+Jun 24 17:02:16.737: INFO: stderr: "No resources found.\n"
+Jun 24 17:02:16.737: INFO: stdout: ""
+Jun 24 17:02:16.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 get pods -l name=pause --namespace=kubectl-2124 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
+Jun 24 17:02:16.831: INFO: stderr: ""
+Jun 24 17:02:16.831: INFO: stdout: ""
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:02:16.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-2124" for this suite.
+Jun 24 17:02:22.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:02:22.933: INFO: namespace kubectl-2124 deletion completed in 6.098098949s
+
+• [SLOW TEST:11.659 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl label
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should update the label on a resource  [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+S
+------------------------------
+[sig-api-machinery] Namespaces [Serial] 
+  should ensure that all pods are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:02:22.933: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename namespaces
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a test namespace
+STEP: Waiting for a default service account to be provisioned in namespace
+STEP: Creating a pod in the namespace
+STEP: Waiting for the pod to have running status
+STEP: Deleting the namespace
+STEP: Waiting for the namespace to be removed.
+STEP: Recreating the namespace
+STEP: Verifying there are no pods in the namespace
+[AfterEach] [sig-api-machinery] Namespaces [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:02:47.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "namespaces-7374" for this suite.
+Jun 24 17:02:53.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:02:53.156: INFO: namespace namespaces-7374 deletion completed in 6.09703579s
+STEP: Destroying namespace "nsdeletetest-3737" for this suite.
+Jun 24 17:02:53.159: INFO: Namespace nsdeletetest-3737 was already deleted
+STEP: Destroying namespace "nsdeletetest-2753" for this suite.
+Jun 24 17:02:59.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:02:59.256: INFO: namespace nsdeletetest-2753 deletion completed in 6.097704368s
+
+• [SLOW TEST:36.323 seconds]
+[sig-api-machinery] Namespaces [Serial]
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should ensure that all pods are removed when a namespace is deleted [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSS
+------------------------------
+[sig-storage] EmptyDir volumes 
+  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:02:59.257: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename emptydir
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test emptydir 0644 on tmpfs
+Jun 24 17:02:59.296: INFO: Waiting up to 5m0s for pod "pod-eb49dbb7-96a1-11e9-8bcb-526dc0a539dd" in namespace "emptydir-6349" to be "success or failure"
+Jun 24 17:02:59.299: INFO: Pod "pod-eb49dbb7-96a1-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.902033ms
+Jun 24 17:03:01.303: INFO: Pod "pod-eb49dbb7-96a1-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00699444s
+Jun 24 17:03:03.307: INFO: Pod "pod-eb49dbb7-96a1-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011131624s
+STEP: Saw pod success
+Jun 24 17:03:03.307: INFO: Pod "pod-eb49dbb7-96a1-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 17:03:03.311: INFO: Trying to get logs from node minion pod pod-eb49dbb7-96a1-11e9-8bcb-526dc0a539dd container test-container: 
+STEP: delete the pod
+Jun 24 17:03:03.353: INFO: Waiting for pod pod-eb49dbb7-96a1-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 17:03:03.357: INFO: Pod pod-eb49dbb7-96a1-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] EmptyDir volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:03:03.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "emptydir-6349" for this suite.
+Jun 24 17:03:09.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:03:09.452: INFO: namespace emptydir-6349 deletion completed in 6.090261978s
+
+• [SLOW TEST:10.196 seconds]
+[sig-storage] EmptyDir volumes
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
+  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:03:09.452: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating the pod
+Jun 24 17:03:14.030: INFO: Successfully updated pod "annotationupdatef15e27be-96a1-11e9-8bcb-526dc0a539dd"
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:03:16.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-1893" for this suite.
+Jun 24 17:03:38.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:03:38.162: INFO: namespace projected-1893 deletion completed in 22.095989045s
+
+• [SLOW TEST:28.709 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should update annotations on modification [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSS
+------------------------------
+[sig-apps] Daemon set [Serial] 
+  should rollback without unnecessary restarts [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:03:38.163: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename daemonsets
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
+[It] should rollback without unnecessary restarts [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+Jun 24 17:03:38.211: INFO: Conformance test suite needs a cluster with at least 2 nodes.
+Jun 24 17:03:38.211: INFO: Create a RollingUpdate DaemonSet
+Jun 24 17:03:38.215: INFO: Check that daemon pods launch on every node of the cluster
+Jun 24 17:03:38.219: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 24 17:03:38.222: INFO: Number of nodes with available pods: 0
+Jun 24 17:03:38.223: INFO: Node minion is running more than one daemon pod
+Jun 24 17:03:39.228: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 24 17:03:39.234: INFO: Number of nodes with available pods: 0
+Jun 24 17:03:39.234: INFO: Node minion is running more than one daemon pod
+Jun 24 17:03:40.228: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 24 17:03:40.232: INFO: Number of nodes with available pods: 0
+Jun 24 17:03:40.232: INFO: Node minion is running more than one daemon pod
+Jun 24 17:03:41.227: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 24 17:03:41.230: INFO: Number of nodes with available pods: 1
+Jun 24 17:03:41.230: INFO: Number of running nodes: 1, number of available pods: 1
+Jun 24 17:03:41.230: INFO: Update the DaemonSet to trigger a rollout
+Jun 24 17:03:41.239: INFO: Updating DaemonSet daemon-set
+Jun 24 17:03:47.266: INFO: Roll back the DaemonSet before rollout is complete
+Jun 24 17:03:47.274: INFO: Updating DaemonSet daemon-set
+Jun 24 17:03:47.274: INFO: Make sure DaemonSet rollback is complete
+Jun 24 17:03:47.290: INFO: Wrong image for pod: daemon-set-b6m2n. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
+Jun 24 17:03:47.290: INFO: Pod daemon-set-b6m2n is not available
+Jun 24 17:03:47.299: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 24 17:03:48.303: INFO: Wrong image for pod: daemon-set-b6m2n. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
+Jun 24 17:03:48.303: INFO: Pod daemon-set-b6m2n is not available
+Jun 24 17:03:48.310: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 24 17:03:49.305: INFO: Wrong image for pod: daemon-set-b6m2n. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
+Jun 24 17:03:49.305: INFO: Pod daemon-set-b6m2n is not available
+Jun 24 17:03:49.309: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+Jun 24 17:03:50.304: INFO: Pod daemon-set-4dxgh is not available
+Jun 24 17:03:50.307: INFO: DaemonSet pods can't tolerate node master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
+STEP: Deleting DaemonSet "daemon-set"
+STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1557, will wait for the garbage collector to delete the pods
+Jun 24 17:03:50.374: INFO: Deleting DaemonSet.extensions daemon-set took: 6.509641ms
+Jun 24 17:03:50.674: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.292681ms
+Jun 24 17:03:54.877: INFO: Number of nodes with available pods: 0
+Jun 24 17:03:54.877: INFO: Number of running nodes: 0, number of available pods: 0
+Jun 24 17:03:54.880: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1557/daemonsets","resourceVersion":"19827"},"items":null}
+
+Jun 24 17:03:54.883: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1557/pods","resourceVersion":"19827"},"items":null}
+
+[AfterEach] [sig-apps] Daemon set [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:03:54.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "daemonsets-1557" for this suite.
+Jun 24 17:04:00.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:04:00.996: INFO: namespace daemonsets-1557 deletion completed in 6.099682814s
+
+• [SLOW TEST:22.833 seconds]
+[sig-apps] Daemon set [Serial]
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should rollback without unnecessary restarts [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] ReplicationController 
+  should release no longer matching pods [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:04:00.998: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename replication-controller
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should release no longer matching pods [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Given a ReplicationController is created
+STEP: When the matched label of one of its pods change
+Jun 24 17:04:01.039: INFO: Pod name pod-release: Found 0 pods out of 1
+Jun 24 17:04:06.043: INFO: Pod name pod-release: Found 1 pods out of 1
+STEP: Then the pod is released
+[AfterEach] [sig-apps] ReplicationController
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:04:07.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "replication-controller-6991" for this suite.
+Jun 24 17:04:13.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:04:13.178: INFO: namespace replication-controller-6991 deletion completed in 6.104178149s
+
+• [SLOW TEST:12.181 seconds]
+[sig-apps] ReplicationController
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  should release no longer matching pods [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Projected downwardAPI 
+  should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:04:13.179: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename projected
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
+[It] should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating a pod to test downward API volume plugin
+Jun 24 17:04:13.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-175b70fc-96a2-11e9-8bcb-526dc0a539dd" in namespace "projected-6480" to be "success or failure"
+Jun 24 17:04:13.233: INFO: Pod "downwardapi-volume-175b70fc-96a2-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.595442ms
+Jun 24 17:04:15.237: INFO: Pod "downwardapi-volume-175b70fc-96a2-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006786779s
+STEP: Saw pod success
+Jun 24 17:04:15.237: INFO: Pod "downwardapi-volume-175b70fc-96a2-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 17:04:15.241: INFO: Trying to get logs from node minion pod downwardapi-volume-175b70fc-96a2-11e9-8bcb-526dc0a539dd container client-container: 
+STEP: delete the pod
+Jun 24 17:04:15.271: INFO: Waiting for pod downwardapi-volume-175b70fc-96a2-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 17:04:15.275: INFO: Pod downwardapi-volume-175b70fc-96a2-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Projected downwardAPI
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:04:15.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "projected-6480" for this suite.
+Jun 24 17:04:21.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:04:21.378: INFO: namespace projected-6480 deletion completed in 6.09733561s
+
+• [SLOW TEST:8.199 seconds]
+[sig-storage] Projected downwardAPI
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
+  should provide container's memory request [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSS
+------------------------------
+[sig-scheduling] SchedulerPredicates [Serial] 
+  validates resource limits of pods that are allowed to run  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:04:21.378: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename sched-pred
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
+Jun 24 17:04:21.425: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
+Jun 24 17:04:21.436: INFO: Waiting for terminating namespaces to be deleted...
+Jun 24 17:04:21.438: INFO: 
+Logging pods the kubelet thinks are on node minion before test
+Jun 24 17:04:21.448: INFO: weave-net-p4t4q from kube-system started at 2019-06-24 15:29:30 +0000 UTC (2 container statuses recorded)
+Jun 24 17:04:21.448: INFO: 	Container weave ready: true, restart count 0
+Jun 24 17:04:21.448: INFO: 	Container weave-npc ready: true, restart count 0
+Jun 24 17:04:21.448: INFO: kubernetes-dashboard-6c7466966c-v95zd from kube-system started at 2019-06-24 15:30:10 +0000 UTC (1 container statuses recorded)
+Jun 24 17:04:21.448: INFO: 	Container kubernetes-dashboard ready: true, restart count 0
+Jun 24 17:04:21.448: INFO: weave-scope-app-5bcb7f46b9-pv6gl from weave started at 2019-06-24 15:30:48 +0000 UTC (1 container statuses recorded)
+Jun 24 17:04:21.448: INFO: 	Container app ready: true, restart count 0
+Jun 24 17:04:21.448: INFO: weave-scope-agent-mmtsr from weave started at 2019-06-24 15:30:48 +0000 UTC (1 container statuses recorded)
+Jun 24 17:04:21.448: INFO: 	Container agent ready: true, restart count 0
+Jun 24 17:04:21.448: INFO: sonobuoy-systemd-logs-daemon-set-7e1461ca4731443f-8ql79 from heptio-sonobuoy started at 2019-06-24 15:31:43 +0000 UTC (2 container statuses recorded)
+Jun 24 17:04:21.448: INFO: 	Container sonobuoy-systemd-logs-config ready: true, restart count 1
+Jun 24 17:04:21.448: INFO: 	Container sonobuoy-worker ready: true, restart count 1
+Jun 24 17:04:21.448: INFO: kube-proxy-d8w54 from kube-system started at 2019-06-24 15:29:46 +0000 UTC (1 container statuses recorded)
+Jun 24 17:04:21.448: INFO: 	Container kube-proxy ready: true, restart count 0
+Jun 24 17:04:21.448: INFO: nginx-proxy-minion from kube-system started at  (0 container statuses recorded)
+Jun 24 17:04:21.448: INFO: coredns-97c4b444f-9954l from kube-system started at 2019-06-24 15:30:06 +0000 UTC (1 container statuses recorded)
+Jun 24 17:04:21.448: INFO: 	Container coredns ready: true, restart count 0
+Jun 24 17:04:21.448: INFO: nodelocaldns-vmsgk from kube-system started at 2019-06-24 15:30:09 +0000 UTC (1 container statuses recorded)
+Jun 24 17:04:21.448: INFO: 	Container node-cache ready: true, restart count 0
+Jun 24 17:04:21.448: INFO: sonobuoy from heptio-sonobuoy started at 2019-06-24 15:31:39 +0000 UTC (1 container statuses recorded)
+Jun 24 17:04:21.448: INFO: 	Container kube-sonobuoy ready: true, restart count 0
+[It] validates resource limits of pods that are allowed to run  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: verifying the node has the label node minion
+Jun 24 17:04:21.473: INFO: Pod sonobuoy requesting resource cpu=0m on Node minion
+Jun 24 17:04:21.473: INFO: Pod sonobuoy-systemd-logs-daemon-set-7e1461ca4731443f-8ql79 requesting resource cpu=0m on Node minion
+Jun 24 17:04:21.473: INFO: Pod coredns-97c4b444f-9954l requesting resource cpu=100m on Node minion
+Jun 24 17:04:21.473: INFO: Pod kube-proxy-d8w54 requesting resource cpu=0m on Node minion
+Jun 24 17:04:21.473: INFO: Pod kubernetes-dashboard-6c7466966c-v95zd requesting resource cpu=50m on Node minion
+Jun 24 17:04:21.473: INFO: Pod nginx-proxy-minion requesting resource cpu=25m on Node minion
+Jun 24 17:04:21.473: INFO: Pod nodelocaldns-vmsgk requesting resource cpu=100m on Node minion
+Jun 24 17:04:21.473: INFO: Pod weave-net-p4t4q requesting resource cpu=20m on Node minion
+Jun 24 17:04:21.473: INFO: Pod weave-scope-agent-mmtsr requesting resource cpu=0m on Node minion
+Jun 24 17:04:21.473: INFO: Pod weave-scope-app-5bcb7f46b9-pv6gl requesting resource cpu=0m on Node minion
+STEP: Starting Pods to consume most of the cluster CPU.
+STEP: Creating another pod that requires unavailable amount of CPU.
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-1c461c52-96a2-11e9-8bcb-526dc0a539dd.15ab31436d939875], Reason = [Scheduled], Message = [Successfully assigned sched-pred-2480/filler-pod-1c461c52-96a2-11e9-8bcb-526dc0a539dd to minion]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-1c461c52-96a2-11e9-8bcb-526dc0a539dd.15ab3143a38390f7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-1c461c52-96a2-11e9-8bcb-526dc0a539dd.15ab3143a8887283], Reason = [Created], Message = [Created container filler-pod-1c461c52-96a2-11e9-8bcb-526dc0a539dd]
+STEP: Considering event: 
+Type = [Normal], Name = [filler-pod-1c461c52-96a2-11e9-8bcb-526dc0a539dd.15ab3143b642b9af], Reason = [Started], Message = [Started container filler-pod-1c461c52-96a2-11e9-8bcb-526dc0a539dd]
+STEP: Considering event: 
+Type = [Warning], Name = [additional-pod.15ab3143e5c13c73], Reason = [FailedScheduling], Message = [0/2 nodes are available: 1 Insufficient cpu, 1 node(s) had taints that the pod didn't tolerate.]
+STEP: removing the label node off the node minion
+STEP: verifying the node doesn't have the label node
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:04:24.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "sched-pred-2480" for this suite.
+Jun 24 17:04:30.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:04:30.635: INFO: namespace sched-pred-2480 deletion completed in 6.108082831s
+[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:70
+
+• [SLOW TEST:9.257 seconds]
+[sig-scheduling] SchedulerPredicates [Serial]
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:22
+  validates resource limits of pods that are allowed to run  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSS
+------------------------------
+[sig-api-machinery] Garbage collector 
+  should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:04:30.635: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename gc
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: create the deployment
+STEP: Wait for the Deployment to create new ReplicaSet
+STEP: delete the deployment
+STEP: wait for all rs to be garbage collected
+STEP: expected 0 rs, got 1 rs
+STEP: expected 0 pods, got 2 pods
+STEP: Gathering metrics
+Jun 24 17:04:31.259: INFO: For apiserver_request_total:
+For apiserver_request_latencies_summary:
+For apiserver_init_events_total:
+For garbage_collector_attempt_to_delete_queue_latency:
+	[quantile=0.5] = 12
+	[quantile=0.9] = 157
+	[quantile=0.99] = 157
+For garbage_collector_attempt_to_delete_work_duration:
+	[quantile=0.5] = 213684
+	[quantile=0.9] = 221723
+	[quantile=0.99] = 221723
+For garbage_collector_attempt_to_orphan_queue_latency:
+	[quantile=0.5] = NaN
+	[quantile=0.9] = NaN
+	[quantile=0.99] = NaN
+For garbage_collector_attempt_to_orphan_work_duration:
+	[quantile=0.5] = NaN
+	[quantile=0.9] = NaN
+	[quantile=0.99] = NaN
+For garbage_collector_dirty_processing_latency_microseconds:
+For garbage_collector_event_processing_latency_microseconds:
+For garbage_collector_graph_changes_queue_latency:
+	[quantile=0.5] = 5
+	[quantile=0.9] = 8
+	[quantile=0.99] = 40
+For garbage_collector_graph_changes_work_duration:
+	[quantile=0.5] = 14
+	[quantile=0.9] = 28
+	[quantile=0.99] = 74
+For garbage_collector_orphan_processing_latency_microseconds:
+For namespace_queue_latency:
+	[quantile=0.5] = 15
+	[quantile=0.9] = 36
+	[quantile=0.99] = 48
+For namespace_queue_latency_sum:
+	[] = 10340
+For namespace_queue_latency_count:
+	[] = 536
+For namespace_retries:
+	[] = 546
+For namespace_work_duration:
+	[quantile=0.5] = 163838
+	[quantile=0.9] = 258773
+	[quantile=0.99] = 607269
+For namespace_work_duration_sum:
+	[] = 82014988
+For namespace_work_duration_count:
+	[] = 536
+For function_duration_seconds:
+For errors_total:
+For evicted_pods_total:
+
+[AfterEach] [sig-api-machinery] Garbage collector
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:04:31.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "gc-1485" for this suite.
+Jun 24 17:04:37.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:04:37.374: INFO: namespace gc-1485 deletion completed in 6.107839617s
+
+• [SLOW TEST:6.739 seconds]
+[sig-api-machinery] Garbage collector
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:22
+  should delete RS created by deployment when not orphaning [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
+SSSSSSSSSSSSSSSS
+------------------------------
+[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
+  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:04:37.374: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename statefulset
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:59
+[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:74
+STEP: Creating service test in namespace statefulset-1449
+[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Initializing watcher for selector baz=blah,foo=bar
+STEP: Creating stateful set ss in namespace statefulset-1449
+STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1449
+Jun 24 17:04:37.435: INFO: Found 0 stateful pods, waiting for 1
+Jun 24 17:04:47.439: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
+Jun 24 17:04:47.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Jun 24 17:04:47.723: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Jun 24 17:04:47.723: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Jun 24 17:04:47.723: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Jun 24 17:04:47.727: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
+Jun 24 17:04:57.732: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Jun 24 17:04:57.732: INFO: Waiting for statefulset status.replicas updated to 0
+Jun 24 17:04:57.746: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999566s
+Jun 24 17:04:58.750: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996477703s
+Jun 24 17:04:59.754: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.992267118s
+Jun 24 17:05:00.759: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.988330921s
+Jun 24 17:05:01.763: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.983746866s
+Jun 24 17:05:02.767: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.979315919s
+Jun 24 17:05:03.771: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.974950754s
+Jun 24 17:05:04.776: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.970929397s
+Jun 24 17:05:05.780: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.966480488s
+Jun 24 17:05:06.784: INFO: Verifying statefulset ss doesn't scale past 1 for another 962.038237ms
+STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1449
+Jun 24 17:05:07.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:05:08.051: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
+Jun 24 17:05:08.051: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Jun 24 17:05:08.051: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Jun 24 17:05:08.057: INFO: Found 1 stateful pods, waiting for 3
+Jun 24 17:05:18.061: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
+Jun 24 17:05:18.061: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
+Jun 24 17:05:18.061: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
+STEP: Verifying that stateful set ss was scaled up in order
+STEP: Scale down will halt with unhealthy stateful pod
+Jun 24 17:05:18.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Jun 24 17:05:18.342: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Jun 24 17:05:18.343: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Jun 24 17:05:18.343: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Jun 24 17:05:18.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Jun 24 17:05:18.622: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Jun 24 17:05:18.622: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Jun 24 17:05:18.622: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Jun 24 17:05:18.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
+Jun 24 17:05:18.902: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
+Jun 24 17:05:18.902: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
+Jun 24 17:05:18.902: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
+
+Jun 24 17:05:18.902: INFO: Waiting for statefulset status.replicas updated to 0
+Jun 24 17:05:18.905: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
+Jun 24 17:05:28.914: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
+Jun 24 17:05:28.914: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
+Jun 24 17:05:28.914: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
+Jun 24 17:05:28.932: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999436s
+Jun 24 17:05:29.937: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990973117s
+Jun 24 17:05:30.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986214514s
+Jun 24 17:05:31.946: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981357839s
+Jun 24 17:05:32.951: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976598272s
+Jun 24 17:05:33.956: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972006214s
+Jun 24 17:05:34.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967174666s
+Jun 24 17:05:35.972: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.955382849s
+Jun 24 17:05:36.977: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.950359359s
+Jun 24 17:05:37.982: INFO: Verifying statefulset ss doesn't scale past 3 for another 945.471694ms
+STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1449
+Jun 24 17:05:38.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:05:39.285: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
+Jun 24 17:05:39.285: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Jun 24 17:05:39.285: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Jun 24 17:05:39.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:05:39.557: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
+Jun 24 17:05:39.557: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
+Jun 24 17:05:39.557: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
+
+Jun 24 17:05:39.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:05:39.728: INFO: rc: 126
+Jun 24 17:05:39.728: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
+ command terminated with exit code 126
+ []  0xc001a21770 exit status 126   true [0xc000172af8 0xc000172be0 0xc000172d98] [0xc000172af8 0xc000172be0 0xc000172d98] [0xc000172b78 0xc000172d50] [0x9c00a0 0x9c00a0] 0xc002659920 }:
+Command stdout:
+cannot exec in a stopped state: unknown
+
+stderr:
+command terminated with exit code 126
+
+error:
+exit status 126
+
+Jun 24 17:05:49.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:05:49.824: INFO: rc: 1
+Jun 24 17:05:49.824: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002ef4e70 exit status 1   true [0xc000010950 0xc0000109a8 0xc000010a88] [0xc000010950 0xc0000109a8 0xc000010a88] [0xc000010998 0xc000010a00] [0x9c00a0 0x9c00a0] 0xc0017344e0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:05:59.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:05:59.919: INFO: rc: 1
+Jun 24 17:05:59.919: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002c311d0 exit status 1   true [0xc0015f4028 0xc0015f4040 0xc0015f4058] [0xc0015f4028 0xc0015f4040 0xc0015f4058] [0xc0015f4038 0xc0015f4050] [0x9c00a0 0x9c00a0] 0xc003418c60 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:06:09.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:06:10.014: INFO: rc: 1
+Jun 24 17:06:10.014: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002ef5200 exit status 1   true [0xc000010b00 0xc000010bb0 0xc000010c10] [0xc000010b00 0xc000010bb0 0xc000010c10] [0xc000010ba0 0xc000010bf0] [0x9c00a0 0x9c00a0] 0xc001734840 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:06:20.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:06:20.128: INFO: rc: 1
+Jun 24 17:06:20.128: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc001a21aa0 exit status 1   true [0xc000172e08 0xc000172f48 0xc000173108] [0xc000172e08 0xc000172f48 0xc000173108] [0xc000172e98 0xc000173078] [0x9c00a0 0x9c00a0] 0xc002659ce0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:06:30.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:06:30.221: INFO: rc: 1
+Jun 24 17:06:30.222: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002c31500 exit status 1   true [0xc0015f4060 0xc0015f4078 0xc0015f4090] [0xc0015f4060 0xc0015f4078 0xc0015f4090] [0xc0015f4070 0xc0015f4088] [0x9c00a0 0x9c00a0] 0xc003419320 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:06:40.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:06:40.317: INFO: rc: 1
+Jun 24 17:06:40.317: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc001a21fb0 exit status 1   true [0xc000173158 0xc0001732b8 0xc0001733a8] [0xc000173158 0xc0001732b8 0xc0001733a8] [0xc0001732a8 0xc000173318] [0x9c00a0 0x9c00a0] 0xc003232060 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:06:50.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:06:50.409: INFO: rc: 1
+Jun 24 17:06:50.409: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002664780 exit status 1   true [0xc001a8a028 0xc001a8a040 0xc001a8a058] [0xc001a8a028 0xc001a8a040 0xc001a8a058] [0xc001a8a038 0xc001a8a050] [0x9c00a0 0x9c00a0] 0xc00259e300 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:07:00.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:07:00.500: INFO: rc: 1
+Jun 24 17:07:00.500: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002c31890 exit status 1   true [0xc0015f4098 0xc0015f40b0 0xc0015f40c8] [0xc0015f4098 0xc0015f40b0 0xc0015f40c8] [0xc0015f40a8 0xc0015f40c0] [0x9c00a0 0x9c00a0] 0xc0034199e0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:07:10.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:07:10.591: INFO: rc: 1
+Jun 24 17:07:10.591: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002c31bf0 exit status 1   true [0xc0015f40d0 0xc0015f40e8 0xc0015f4100] [0xc0015f40d0 0xc0015f40e8 0xc0015f4100] [0xc0015f40e0 0xc0015f40f8] [0x9c00a0 0x9c00a0] 0xc0014ac0c0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:07:20.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:07:20.681: INFO: rc: 1
+Jun 24 17:07:20.681: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002de2420 exit status 1   true [0xc0001733d0 0xc000173450 0xc000173530] [0xc0001733d0 0xc000173450 0xc000173530] [0xc0001733f8 0xc000173510] [0x9c00a0 0x9c00a0] 0xc003232480 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:07:30.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:07:30.772: INFO: rc: 1
+Jun 24 17:07:30.772: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002c31f50 exit status 1   true [0xc0015f4108 0xc0015f4120 0xc0015f4138] [0xc0015f4108 0xc0015f4120 0xc0015f4138] [0xc0015f4118 0xc0015f4130] [0x9c00a0 0x9c00a0] 0xc0014ac660 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:07:40.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:07:40.859: INFO: rc: 1
+Jun 24 17:07:40.860: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc001a20300 exit status 1   true [0xc0015f4008 0xc0015f4020 0xc0015f4038] [0xc0015f4008 0xc0015f4020 0xc0015f4038] [0xc0015f4018 0xc0015f4030] [0x9c00a0 0x9c00a0] 0xc0034185a0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:07:50.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:07:50.956: INFO: rc: 1
+Jun 24 17:07:50.956: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002c30300 exit status 1   true [0xc000172000 0xc000172148 0xc000172298] [0xc000172000 0xc000172148 0xc000172298] [0xc000172100 0xc000172240] [0x9c00a0 0x9c00a0] 0xc002658960 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:08:00.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:08:01.051: INFO: rc: 1
+Jun 24 17:08:01.052: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002c30660 exit status 1   true [0xc000172370 0xc0001723e0 0xc000172ae8] [0xc000172370 0xc0001723e0 0xc000172ae8] [0xc000172390 0xc000172a98] [0x9c00a0 0x9c00a0] 0xc0026590e0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:08:11.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:08:11.142: INFO: rc: 1
+Jun 24 17:08:11.142: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002c30990 exit status 1   true [0xc000172af8 0xc000172be0 0xc000172d98] [0xc000172af8 0xc000172be0 0xc000172d98] [0xc000172b78 0xc000172d50] [0x9c00a0 0x9c00a0] 0xc0026596e0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:08:21.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:08:21.236: INFO: rc: 1
+Jun 24 17:08:21.236: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002c30cf0 exit status 1   true [0xc000172e08 0xc000172f48 0xc000173108] [0xc000172e08 0xc000172f48 0xc000173108] [0xc000172e98 0xc000173078] [0x9c00a0 0x9c00a0] 0xc002659aa0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:08:31.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:08:31.324: INFO: rc: 1
+Jun 24 17:08:31.324: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc001a20660 exit status 1   true [0xc0015f4040 0xc0015f4058 0xc0015f4070] [0xc0015f4040 0xc0015f4058 0xc0015f4070] [0xc0015f4050 0xc0015f4068] [0x9c00a0 0x9c00a0] 0xc003418c60 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:08:41.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:08:41.414: INFO: rc: 1
+Jun 24 17:08:41.414: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002c31080 exit status 1   true [0xc000173158 0xc0001732b8 0xc0001733a8] [0xc000173158 0xc0001732b8 0xc0001733a8] [0xc0001732a8 0xc000173318] [0x9c00a0 0x9c00a0] 0xc002659e60 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:08:51.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:08:51.503: INFO: rc: 1
+Jun 24 17:08:51.504: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc001a20e40 exit status 1   true [0xc0015f4078 0xc0015f4090 0xc0015f40a8] [0xc0015f4078 0xc0015f4090 0xc0015f40a8] [0xc0015f4088 0xc0015f40a0] [0x9c00a0 0x9c00a0] 0xc003419320 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:09:01.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:09:01.613: INFO: rc: 1
+Jun 24 17:09:01.613: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002de2480 exit status 1   true [0xc0000100b8 0xc0000106c8 0xc0000107e8] [0xc0000100b8 0xc0000106c8 0xc0000107e8] [0xc0000105b0 0xc000010730] [0x9c00a0 0x9c00a0] 0xc0014ac4e0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:09:11.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:09:11.712: INFO: rc: 1
+Jun 24 17:09:11.712: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002c313e0 exit status 1   true [0xc0001733d0 0xc000173450 0xc000173530] [0xc0001733d0 0xc000173450 0xc000173530] [0xc0001733f8 0xc000173510] [0x9c00a0 0x9c00a0] 0xc003232240 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:09:21.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:09:21.800: INFO: rc: 1
+Jun 24 17:09:21.801: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002de27e0 exit status 1   true [0xc000010878 0xc000010950 0xc0000109a8] [0xc000010878 0xc000010950 0xc0000109a8] [0xc0000108f8 0xc000010998] [0x9c00a0 0x9c00a0] 0xc0014acb40 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:09:31.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:09:31.889: INFO: rc: 1
+Jun 24 17:09:31.889: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002ef4300 exit status 1   true [0xc001a8a000 0xc001a8a018 0xc001a8a030] [0xc001a8a000 0xc001a8a018 0xc001a8a030] [0xc001a8a010 0xc001a8a028] [0x9c00a0 0x9c00a0] 0xc0017342a0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:09:41.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:09:41.993: INFO: rc: 1
+Jun 24 17:09:41.993: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc001a20330 exit status 1   true [0xc0015f4008 0xc0015f4020 0xc0015f4038] [0xc0015f4008 0xc0015f4020 0xc0015f4038] [0xc0015f4018 0xc0015f4030] [0x9c00a0 0x9c00a0] 0xc002658960 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:09:51.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:09:52.093: INFO: rc: 1
+Jun 24 17:09:52.093: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002c30330 exit status 1   true [0xc000172000 0xc000172148 0xc000172298] [0xc000172000 0xc000172148 0xc000172298] [0xc000172100 0xc000172240] [0x9c00a0 0x9c00a0] 0xc0034185a0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:10:02.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:10:02.182: INFO: rc: 1
+Jun 24 17:10:02.183: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc001a206c0 exit status 1   true [0xc0015f4040 0xc0015f4058 0xc0015f4070] [0xc0015f4040 0xc0015f4058 0xc0015f4070] [0xc0015f4050 0xc0015f4068] [0x9c00a0 0x9c00a0] 0xc0026590e0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:10:12.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:10:12.277: INFO: rc: 1
+Jun 24 17:10:12.277: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc001a20ea0 exit status 1   true [0xc0015f4078 0xc0015f4090 0xc0015f40a8] [0xc0015f4078 0xc0015f4090 0xc0015f40a8] [0xc0015f4088 0xc0015f40a0] [0x9c00a0 0x9c00a0] 0xc0026596e0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:10:22.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:10:22.371: INFO: rc: 1
+Jun 24 17:10:22.371: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc001a211d0 exit status 1   true [0xc0015f40b0 0xc0015f40c8 0xc0015f40e0] [0xc0015f40b0 0xc0015f40c8 0xc0015f40e0] [0xc0015f40c0 0xc0015f40d8] [0x9c00a0 0x9c00a0] 0xc002659aa0 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:10:32.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:10:32.465: INFO: rc: 1
+Jun 24 17:10:32.465: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
+ []  0xc002de2420 exit status 1   true [0xc001a8a000 0xc001a8a018 0xc001a8a030] [0xc001a8a000 0xc001a8a018 0xc001a8a030] [0xc001a8a010 0xc001a8a028] [0x9c00a0 0x9c00a0] 0xc003232360 }:
+Command stdout:
+
+stderr:
+Error from server (NotFound): pods "ss-2" not found
+
+error:
+exit status 1
+
+Jun 24 17:10:42.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 exec --namespace=statefulset-1449 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
+Jun 24 17:10:42.555: INFO: rc: 1
+Jun 24 17:10:42.555: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
+Jun 24 17:10:42.555: INFO: Scaling statefulset ss to 0
+STEP: Verifying that stateful set ss was scaled down in reverse order
+[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:85
+Jun 24 17:10:42.569: INFO: Deleting all statefulset in ns statefulset-1449
+Jun 24 17:10:42.571: INFO: Scaling statefulset ss to 0
+Jun 24 17:10:42.578: INFO: Waiting for statefulset status.replicas updated to 0
+Jun 24 17:10:42.580: INFO: Deleting statefulset ss
+[AfterEach] [sig-apps] StatefulSet
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:10:42.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "statefulset-1449" for this suite.
+Jun 24 17:10:48.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:10:48.703: INFO: namespace statefulset-1449 deletion completed in 6.110047248s
+
+• [SLOW TEST:371.329 seconds]
+[sig-apps] StatefulSet
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:22
+  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
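The StatefulSet block above breaks readiness on every replica (by moving `index.html` out of the nginx web root), confirms the controller halts at 3 replicas, and then scales to 0, checking that pods terminate in reverse ordinal order. A minimal sketch of reproducing the same check by hand outside the conformance harness (the StatefulSet name `ss` and pod names come from this run's log; the namespace and watch invocation are illustrative):

```sh
# Break readiness on each replica, then confirm none are ready.
for i in 0 1 2; do
  kubectl exec "ss-$i" -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
done
kubectl get statefulset ss -o jsonpath='{.status.readyReplicas}'   # expect 0

# Scale to 0: pods must terminate in reverse ordinal order (ss-2, ss-1, ss-0).
kubectl scale statefulset ss --replicas=0
kubectl get pods -w
```

The long run of `rc: 1` retries in the log is expected rather than a failure: once scale-down begins, ss-2 is stopped (the `rc: 126` "cannot exec in a stopped state") and then deleted, so the harness's attempt to restore `index.html` on it can never succeed; after exhausting its retries it records an empty stdout and proceeds to verify the reverse-order scale-down.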
+[sig-storage] Subpath Atomic writer volumes 
+  should support subpaths with secret pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Subpath
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:10:48.703: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename subpath
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] Atomic writer volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
+STEP: Setting up data
+[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating pod pod-subpath-test-secret-gjp5
+STEP: Creating a pod to test atomic-volume-subpath
+Jun 24 17:10:48.769: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-gjp5" in namespace "subpath-5030" to be "success or failure"
+Jun 24 17:10:48.773: INFO: Pod "pod-subpath-test-secret-gjp5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.267023ms
+Jun 24 17:10:50.777: INFO: Pod "pod-subpath-test-secret-gjp5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007562462s
+Jun 24 17:10:52.781: INFO: Pod "pod-subpath-test-secret-gjp5": Phase="Running", Reason="", readiness=true. Elapsed: 4.011422271s
+Jun 24 17:10:54.785: INFO: Pod "pod-subpath-test-secret-gjp5": Phase="Running", Reason="", readiness=true. Elapsed: 6.015794552s
+Jun 24 17:10:56.789: INFO: Pod "pod-subpath-test-secret-gjp5": Phase="Running", Reason="", readiness=true. Elapsed: 8.01979322s
+Jun 24 17:10:58.793: INFO: Pod "pod-subpath-test-secret-gjp5": Phase="Running", Reason="", readiness=true. Elapsed: 10.023961624s
+Jun 24 17:11:00.798: INFO: Pod "pod-subpath-test-secret-gjp5": Phase="Running", Reason="", readiness=true. Elapsed: 12.02850197s
+Jun 24 17:11:02.802: INFO: Pod "pod-subpath-test-secret-gjp5": Phase="Running", Reason="", readiness=true. Elapsed: 14.032435178s
+Jun 24 17:11:04.806: INFO: Pod "pod-subpath-test-secret-gjp5": Phase="Running", Reason="", readiness=true. Elapsed: 16.036648412s
+Jun 24 17:11:06.811: INFO: Pod "pod-subpath-test-secret-gjp5": Phase="Running", Reason="", readiness=true. Elapsed: 18.041119371s
+Jun 24 17:11:08.815: INFO: Pod "pod-subpath-test-secret-gjp5": Phase="Running", Reason="", readiness=true. Elapsed: 20.045587652s
+Jun 24 17:11:10.819: INFO: Pod "pod-subpath-test-secret-gjp5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.049826554s
+STEP: Saw pod success
+Jun 24 17:11:10.819: INFO: Pod "pod-subpath-test-secret-gjp5" satisfied condition "success or failure"
+Jun 24 17:11:10.823: INFO: Trying to get logs from node minion pod pod-subpath-test-secret-gjp5 container test-container-subpath-secret-gjp5: 
+STEP: delete the pod
+Jun 24 17:11:10.848: INFO: Waiting for pod pod-subpath-test-secret-gjp5 to disappear
+Jun 24 17:11:10.851: INFO: Pod pod-subpath-test-secret-gjp5 no longer exists
+STEP: Deleting pod pod-subpath-test-secret-gjp5
+Jun 24 17:11:10.851: INFO: Deleting pod "pod-subpath-test-secret-gjp5" in namespace "subpath-5030"
+[AfterEach] [sig-storage] Subpath
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:11:10.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "subpath-5030" for this suite.
+Jun 24 17:11:16.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:11:16.960: INFO: namespace subpath-5030 deletion completed in 6.103429223s
+
+• [SLOW TEST:28.257 seconds]
+[sig-storage] Subpath
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
+  Atomic writer volumes
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
+    should support subpaths with secret pod [LinuxOnly] [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
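The subpath test above mounts a single secret key into a container through `subPath` and verifies the file stays readable while the secret volume's atomic writer updates it. A hand-rolled equivalent, as a sketch (all names here are illustrative, not the generated e2e ones):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: demo-subpath-secret
stringData:
  content: hello
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.29
    # Read the single file projected from the secret via subPath.
    command: ["cat", "/etc/demo/content"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/demo/content
      subPath: content
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-subpath-secret
EOF
```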
+SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-cli] Kubectl client [k8s.io] Kubectl patch 
+  should add annotations for pods in rc  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:11:16.961: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename kubectl
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:213
+[It] should add annotations for pods in rc  [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: creating Redis RC
+Jun 24 17:11:16.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 create -f - --namespace=kubectl-6522'
+Jun 24 17:11:17.286: INFO: stderr: ""
+Jun 24 17:11:17.286: INFO: stdout: "replicationcontroller/redis-master created\n"
+STEP: Waiting for Redis master to start.
+Jun 24 17:11:18.290: INFO: Selector matched 1 pods for map[app:redis]
+Jun 24 17:11:18.290: INFO: Found 0 / 1
+Jun 24 17:11:19.290: INFO: Selector matched 1 pods for map[app:redis]
+Jun 24 17:11:19.290: INFO: Found 1 / 1
+Jun 24 17:11:19.290: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
+STEP: patching all pods
+Jun 24 17:11:19.294: INFO: Selector matched 1 pods for map[app:redis]
+Jun 24 17:11:19.294: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+Jun 24 17:11:19.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-766262415 patch pod redis-master-vf49g --namespace=kubectl-6522 -p {"metadata":{"annotations":{"x":"y"}}}'
+Jun 24 17:11:19.408: INFO: stderr: ""
+Jun 24 17:11:19.408: INFO: stdout: "pod/redis-master-vf49g patched\n"
+STEP: checking annotations
+Jun 24 17:11:19.412: INFO: Selector matched 1 pods for map[app:redis]
+Jun 24 17:11:19.412: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
+[AfterEach] [sig-cli] Kubectl client
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:11:19.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "kubectl-6522" for this suite.
+Jun 24 17:11:41.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:11:41.510: INFO: namespace kubectl-6522 deletion completed in 22.093152762s
+
+• [SLOW TEST:24.549 seconds]
+[sig-cli] Kubectl client
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
+  [k8s.io] Kubectl patch
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+    should add annotations for pods in rc  [Conformance]
+    /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
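The kubectl patch block above is the simplest of these tests to replay: it is a strategic-merge patch adding an annotation, then a read-back. The pattern, lifted from the commands in the log (the pod name `redis-master-vf49g` and namespace `kubectl-6522` are this run's generated values and will differ per run):

```sh
kubectl patch pod redis-master-vf49g --namespace=kubectl-6522 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod redis-master-vf49g --namespace=kubectl-6522 \
  -o jsonpath='{.metadata.annotations.x}'   # expect: y
```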
+SSSSSSSSSS
+------------------------------
+[k8s.io] InitContainer [NodeConformance] 
+  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:11:41.511: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename init-container
+STEP: Waiting for a default service account to be provisioned in namespace
+[BeforeEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
+[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: creating the pod
+Jun 24 17:11:41.539: INFO: PodSpec: initContainers in spec.initContainers
+Jun 24 17:12:26.771: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2292ce71-96a3-11e9-8bcb-526dc0a539dd", GenerateName:"", Namespace:"init-container-2374", SelfLink:"/api/v1/namespaces/init-container-2374/pods/pod-init-2292ce71-96a3-11e9-8bcb-526dc0a539dd", UID:"22934f42-96a3-11e9-b70d-fa163ef83c94", ResourceVersion:"21056", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63696993101, loc:(*time.Location)(0x8a1a0e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"539294377"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wc5nr", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0031b1200), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wc5nr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wc5nr", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wc5nr", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0023f6ff8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"minion", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00259f080), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0023f7080)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0023f70a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0023f70a8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0023f70ac)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696993101, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696993101, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696993101, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63696993101, loc:(*time.Location)(0x8a1a0e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.1.0.12", PodIP:"10.251.128.5", StartTime:(*v1.Time)(0xc0031c8980), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001e8ca10)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001e8ce70)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://86a0b131993267ad96453dadaa67ad07da8fae275201c3d696d3655e364ce96f"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0031c89c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0031c89a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
+[AfterEach] [k8s.io] InitContainer [NodeConformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:12:26.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "init-container-2374" for this suite.
+Jun 24 17:12:48.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:12:48.872: INFO: namespace init-container-2374 deletion completed in 22.089850305s
+
+• [SLOW TEST:67.361 seconds]
+[k8s.io] InitContainer [NodeConformance]
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:687
+  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
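The init-container test above builds a RestartAlways pod whose first init container always fails, then asserts that the second init container and the app container never start while the kubelet keeps restarting `init1`. A sketch of the equivalent pod, drawn from the PodSpec dump in the log (the pod name is illustrative; images and commands match the dump):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ["/bin/false"]   # always fails, blocking init2 and run1
  - name: init2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod init-fail-demo -w   # expect Init:0/2 with a rising restart count
```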
+SSSSSSSSSSSSSSSSSSSSSSSSSSSS
+------------------------------
+[sig-storage] Secrets 
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+[BeforeEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:149
+STEP: Creating a kubernetes client
+Jun 24 17:12:48.873: INFO: >>> kubeConfig: /tmp/kubeconfig-766262415
+STEP: Building a namespace api object, basename secrets
+STEP: Waiting for a default service account to be provisioned in namespace
+[It] should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+STEP: Creating secret with name secret-test-4abc9106-96a3-11e9-8bcb-526dc0a539dd
+STEP: Creating a pod to test consume secrets
+Jun 24 17:12:48.932: INFO: Waiting up to 5m0s for pod "pod-secrets-4abd1326-96a3-11e9-8bcb-526dc0a539dd" in namespace "secrets-8268" to be "success or failure"
+Jun 24 17:12:48.940: INFO: Pod "pod-secrets-4abd1326-96a3-11e9-8bcb-526dc0a539dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.089796ms
+Jun 24 17:12:50.944: INFO: Pod "pod-secrets-4abd1326-96a3-11e9-8bcb-526dc0a539dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012510775s
+STEP: Saw pod success
+Jun 24 17:12:50.945: INFO: Pod "pod-secrets-4abd1326-96a3-11e9-8bcb-526dc0a539dd" satisfied condition "success or failure"
+Jun 24 17:12:50.950: INFO: Trying to get logs from node minion pod pod-secrets-4abd1326-96a3-11e9-8bcb-526dc0a539dd container secret-volume-test: 
+STEP: delete the pod
+Jun 24 17:12:50.976: INFO: Waiting for pod pod-secrets-4abd1326-96a3-11e9-8bcb-526dc0a539dd to disappear
+Jun 24 17:12:50.988: INFO: Pod pod-secrets-4abd1326-96a3-11e9-8bcb-526dc0a539dd no longer exists
+[AfterEach] [sig-storage] Secrets
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
+Jun 24 17:12:50.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
+STEP: Destroying namespace "secrets-8268" for this suite.
+Jun 24 17:12:57.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
+Jun 24 17:12:57.098: INFO: namespace secrets-8268 deletion completed in 6.105188411s
+
+• [SLOW TEST:8.225 seconds]
+[sig-storage] Secrets
+/workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
+  should be consumable from pods in volume [NodeConformance] [Conformance]
+  /workspace/anago-v1.14.3-beta.0.37+5e53fd6bc17c0d/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
+------------------------------
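The secrets test above is the plain (non-subPath) case: the whole secret is mounted as a directory and a key is read back from a file. A minimal sketch, with all names illustrative:

```sh
kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF
kubectl logs pod-secrets-demo   # expect: value-1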
+SSSJun 24 17:12:57.098: INFO: Running AfterSuite actions on all nodes
+Jun 24 17:12:57.098: INFO: Running AfterSuite actions on node 1
+Jun 24 17:12:57.098: INFO: Skipping dumping logs from cluster
+
+Ran 204 of 3585 Specs in 6053.359 seconds
+SUCCESS! -- 204 Passed | 0 Failed | 0 Pending | 3381 Skipped PASS
+
+Ginkgo ran 1 suite in 1h40m55.09247126s
+Test Suite Passed
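For anyone reviewing a submission like this one, the verdict can be confirmed straight from the raw log rather than the XML; a small sketch (the `e2e.log` file name is an assumption about how the log is saved):

```sh
grep -E 'Ran [0-9]+ of [0-9]+ Specs' e2e.log   # spec counts and wall time
grep 'Test Suite Passed' e2e.log               # overall verdict
```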
diff --git a/v1.14/snaps-kubernetes/junit_01.xml b/v1.14/snaps-kubernetes/junit_01.xml
new file mode 100644
index 0000000000..0b9f7a02b5
--- /dev/null
+++ b/v1.14/snaps-kubernetes/junit_01.xml
@@ -0,0 +1,10350 @@
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      
+      
+          
+      