
Revert "Makefile: scripts: Add build args for proxy when using docker… #387

Conversation

fidencio
Member

… build"

This reverts commit 5117129.

Let's configure our CI to use redsocks instead, which theoretically helps us to avoid the whole proxy madness.

… build"

This reverts commit 5117129.

Let's configure our CI to use redsocks instead, which theoretically
helps us to avoid the whole proxy madness.

Signed-off-by: Fabiano Fidêncio <[email protected]>
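
For context, redsocks transparently redirects outbound TCP connections into a SOCKS/HTTP proxy at the firewall level, so individual tools (docker, curl, go, ...) no longer need per-tool proxy arguments. A minimal sketch of the idea, assuming a SOCKS5 proxy at 192.0.2.10:1080 (all values below are placeholders, not taken from this PR):

# /etc/redsocks.conf (hypothetical values)
base {
    log_info = on;
    daemon = on;
    redirector = iptables;
}
redsocks {
    local_ip = 127.0.0.1;
    local_port = 12345;
    ip = 192.0.2.10;
    port = 1080;
    type = socks5;
}

# Redirect outbound TCP through redsocks, leaving local/private ranges alone:
iptables -t nat -N REDSOCKS
iptables -t nat -A REDSOCKS -d 127.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -d 10.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -p tcp -j REDIRECT --to-ports 12345
iptables -t nat -A OUTPUT -p tcp -j REDSOCKS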
@fidencio fidencio force-pushed the topic/use-redsocks-in-the-TDX-ci branch 2 times, most recently from 4c135a2 to cee056f Compare June 20, 2024 07:57
@fidencio fidencio marked this pull request as draft June 20, 2024 18:16
@fidencio fidencio force-pushed the topic/use-redsocks-in-the-TDX-ci branch 5 times, most recently from 6e5eaf2 to 6a49631 Compare June 20, 2024 21:18
@fidencio
Member Author

$ ./run-local.sh -u
::info:: Bootstrap the local machine

PLAY [all] ***********************************************************************************************

TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]

TASK [Install build packages] ****************************************************************************
ok: [localhost]

TASK [Download Go tarball] *******************************************************************************
ok: [localhost]

TASK [Extract Go tarball] ********************************************************************************
changed: [localhost]

TASK [Create link to go binary] **************************************************************************
changed: [localhost]

TASK [Check docker is installed] *************************************************************************
changed: [localhost]

TASK [Install docker dependencies] ***********************************************************************
skipping: [localhost]

TASK [Add docker repo GPG key] ***************************************************************************
skipping: [localhost]

TASK [Add docker repo] ***********************************************************************************
skipping: [localhost]

TASK [Install docker packages] ***************************************************************************
skipping: [localhost]

TASK [Create the docker group] ***************************************************************************
skipping: [localhost]

TASK [Install docker packages] ***************************************************************************
skipping: [localhost]

TASK [Create the docker group] ***************************************************************************
skipping: [localhost]

TASK [Install docker packages] ***************************************************************************
skipping: [localhost]

TASK [Create the docker group] ***************************************************************************
skipping: [localhost]

TASK [Install yum-utils] *********************************************************************************
skipping: [localhost]

TASK [Add docker yum repo] *******************************************************************************
skipping: [localhost]

TASK [Install docker packages] ***************************************************************************
skipping: [localhost]

TASK [Get FragmentPath] **********************************************************************************
skipping: [localhost]

TASK [Copy fragment file to /etc/systemd/system/docker.service.d] ****************************************
skipping: [localhost]

TASK [Check if /etc/systemd/system/docker.service has StartLimitBurst=0] *********************************
skipping: [localhost]

TASK [Replace a value of StartLimitBurst to 0] ***********************************************************
skipping: [localhost]

TASK [Otherwise, insert `StartLimitBurst=0` just after a service section] ********************************
skipping: [localhost]

TASK [Reload systemd] ************************************************************************************
skipping: [localhost]

TASK [Install yum-utils] *********************************************************************************
skipping: [localhost]

TASK [Add docker yum repo] *******************************************************************************
skipping: [localhost]

TASK [Install docker packages] ***************************************************************************
skipping: [localhost]

TASK [Start docker service] ******************************************************************************
ok: [localhost]

TASK [Check qemu-user-static is installed] ***************************************************************
skipping: [localhost]

TASK [Install qemu-user-static] **************************************************************************
skipping: [localhost]

PLAY [all] ***********************************************************************************************

TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]

TASK [Install containerd from distro] ********************************************************************
ok: [localhost]

TASK [Re-create containerd config] ***********************************************************************
changed: [localhost]

TASK [Restart containerd service] ************************************************************************
changed: [localhost]

PLAY [all] ***********************************************************************************************

TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]

TASK [Install kubeadm required packages] *****************************************************************
ok: [localhost]

TASK [Create CNI home directory] *************************************************************************
changed: [localhost]

TASK [Download CNI plugins] ******************************************************************************
ok: [localhost]

TASK [Install CNI plugins] *******************************************************************************
changed: [localhost]

TASK [Download crictl] ***********************************************************************************
ok: [localhost]

TASK [Install crictl] ************************************************************************************
changed: [localhost]

TASK [Install kube binaries] *****************************************************************************
changed: [localhost] => (item=kubeadm)
changed: [localhost] => (item=kubelet)
changed: [localhost] => (item=kubectl)

TASK [Remove zram-generator-defaults in Fedora] **********************************************************
skipping: [localhost]

TASK [Disable swap] **************************************************************************************
ok: [localhost]

TASK [Disable swap in fstab] *****************************************************************************
ok: [localhost]

TASK [Create kubelet service] ****************************************************************************
changed: [localhost]

TASK [Create kubelet.service.d directory] ****************************************************************
changed: [localhost]

TASK [Create kubeadm service config] *********************************************************************
changed: [localhost]

TASK [Create kubeadm configuration directory] ************************************************************
changed: [localhost]

TASK [Create kubeadm configuration file] *****************************************************************
changed: [localhost]

TASK [Reload systemd configuration] **********************************************************************
ok: [localhost]

TASK [Start kubelet service] *****************************************************************************
changed: [localhost]

TASK [Create flannel home directory] *********************************************************************
changed: [localhost]

TASK [Create flannel deployment file] ********************************************************************
changed: [localhost]

PLAY [all] ***********************************************************************************************

TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]

TASK [Install python3-docker and python3-requests] *******************************************************
ok: [localhost]

TASK [Start a docker registry] ***************************************************************************
changed: [localhost]

PLAY [all] ***********************************************************************************************

TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]

TASK [Install test dependencies] *************************************************************************
ok: [localhost]

TASK [shell] *********************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "command -v bats >/dev/null 2>&1", "delta": "0:00:00.006332", "end": "2024-06-21 00:48:55.262881", "msg": "non-zero return code", "rc": 127, "start": "2024-06-21 00:48:55.256549", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring

TASK [Clone bats repository] *****************************************************************************
changed: [localhost]

TASK [Install bats] **************************************************************************************
changed: [localhost]

TASK [Remove bats sources] *******************************************************************************
changed: [localhost]

TASK [Check kustomize is installed] **********************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "command -v kustomize >/dev/null 2>&1", "delta": "0:00:00.006108", "end": "2024-06-21 00:48:59.121956", "msg": "non-zero return code", "rc": 127, "start": "2024-06-21 00:48:59.115848", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
...ignoring

TASK [Install kustomize] *********************************************************************************
changed: [localhost]

TASK [Download Go tarball] *******************************************************************************
ok: [localhost]

TASK [Extract Go tarball] ********************************************************************************
skipping: [localhost]

TASK [Create link to go binary] **************************************************************************
ok: [localhost]

PLAY RECAP ***********************************************************************************************
localhost                  : ok=43   changed=24   unreachable=0    failed=0    skipped=25   rescued=0    ignored=2   

::info:: Bring up the test cluster
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [984fee00befb.jf.intel.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.23.153.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [984fee00befb.jf.intel.com localhost] and IPs [10.23.153.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [984fee00befb.jf.intel.com localhost] and IPs [10.23.153.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.005904 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node 984fee00befb.jf.intel.com as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node 984fee00befb.jf.intel.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: h24ap9.30xq1ih0xfrqlq3m
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.23.153.131:6443 --token h24ap9.30xq1ih0xfrqlq3m \
	--discovery-token-ca-cert-hash sha256:f5e1dd934e583bc8bce3e6c966fac5bc2315cfc20e4e4644041f5ff301498ed4 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
node/984fee00befb.jf.intel.com untainted
node/984fee00befb.jf.intel.com untainted
node/984fee00befb.jf.intel.com labeled
::info:: Build and install the operator
::debug:: Repo /home/sdp/cc-operator already in git's safe.directory
test -s /home/sdp/cc-operator/bin/controller-gen || GOBIN=/home/sdp/cc-operator/bin go install sigs.k8s.io/controller-tools/cmd/[email protected]
/home/sdp/cc-operator/bin/controller-gen rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
/home/sdp/cc-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
test -s /home/sdp/cc-operator/bin/setup-envtest || GOBIN=/home/sdp/cc-operator/bin go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
KUBEBUILDER_ASSETS="/home/sdp/.local/share/kubebuilder-envtest/k8s/1.24.2-linux-amd64" go test ./... -coverprofile cover.out
?   	github.com/confidential-containers/operator	[no test files]
?   	github.com/confidential-containers/operator/api/v1beta1	[no test files]
ok  	github.com/confidential-containers/operator/controllers	0.069s	coverage: 0.0% of statements
docker build -t localhost:5000/cc-operator:latest .
[+] Building 48.9s (17/17) FINISHED                                                        docker:default
 => [internal] load build definition from Dockerfile                                                 0.0s
 => => transferring dockerfile: 957B                                                                 0.0s
 => [internal] load .dockerignore                                                                    0.0s
 => => transferring context: 2B                                                                      0.0s
 => [internal] load metadata for gcr.io/distroless/static:nonroot                                    0.5s
 => [internal] load metadata for docker.io/library/golang:1.20                                       0.6s
 => CACHED [stage-1 1/3] FROM gcr.io/distroless/static:nonroot@sha256:e9ac71e2b8e279a8372741b7a0293  0.0s
 => [builder 1/9] FROM docker.io/library/golang:1.20@sha256:8f9af7094d0cb27cc783c697ac5ba25efdc4da3  0.0s
 => [internal] load build context                                                                    0.0s
 => => transferring context: 14.04kB                                                                 0.0s
 => CACHED [builder 2/9] WORKDIR /workspace                                                          0.0s
 => CACHED [builder 3/9] COPY go.mod go.mod                                                          0.0s
 => CACHED [builder 4/9] COPY go.sum go.sum                                                          0.0s
 => [builder 5/9] RUN go mod download                                                                6.5s
 => [builder 6/9] COPY main.go main.go                                                               0.0s
 => [builder 7/9] COPY api/ api/                                                                     0.1s
 => [builder 8/9] COPY controllers/ controllers/                                                     0.3s
 => [builder 9/9] RUN CGO_ENABLED=0 GOOS=linux go build -a -o manager main.go                       40.6s
 => [stage-1 2/3] COPY --from=builder /workspace/manager .                                           0.2s
 => exporting to image                                                                               0.4s
 => => exporting layers                                                                              0.4s
 => => writing image sha256:90f27bee4570c6cd8e4e801d1ec7fec5ffaf365e8bc725e3a547390267e3c676         0.0s
 => => naming to localhost:5000/cc-operator:latest                                                   0.0s
docker push localhost:5000/cc-operator:latest
The push refers to repository [localhost:5000/cc-operator]
ee6753ff7115: Pushed 
b336e209998f: Pushed 
f4aee9e53c42: Pushed 
1a73b54f556b: Pushed 
2a92d6ac9e4f: Pushed 
bbb6cacb8c82: Pushed 
ac805962e479: Pushed 
af5aa97ebe6c: Pushed 
4d049f83d9cf: Pushed 
945d17be9a3e: Pushed 
49626df344c9: Pushed 
3d6fa0469044: Pushed 
latest: digest: sha256:8dfa32006a940c543197cbe3fc52ebda40315bfb49cb19455ded658a2d69f9b7 size: 2814
::debug:: system's containerd version: 1.7.12
namespace/confidential-containers-system created
customresourcedefinition.apiextensions.k8s.io/ccruntimes.confidentialcontainers.org created
serviceaccount/cc-operator-controller-manager created
role.rbac.authorization.k8s.io/cc-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/cc-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/cc-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/cc-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/cc-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/cc-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/cc-operator-proxy-rolebinding created
configmap/cc-operator-manager-config created
service/cc-operator-controller-manager-metrics-service created
deployment.apps/cc-operator-controller-manager created
coco_containerd_version=1.6.8.2 \
official_containerd_version=1.7.7 \
vfio_gpu_containerd_version=1.7.0.0 \
nydus_snapshotter_version=v0.13.13 \
bash -x payload.sh
+ set -o errexit
+ set -o pipefail
+ set -o nounset
+++ readlink -f payload.sh
++ dirname /home/sdp/cc-operator/install/pre-install-payload/payload.sh
+ script_dir=/home/sdp/cc-operator/install/pre-install-payload
+ coco_containerd_repo=https://github.com/confidential-containers/containerd
+ official_containerd_repo=https://github.com/containerd/containerd
+ vfio_gpu_containerd_repo=https://github.com/confidential-containers/containerd
+ nydus_snapshotter_repo=https://github.com/containerd/nydus-snapshotter
++ mktemp -d -t containerd-XXXXXXXXXX
+ containerd_dir=/tmp/containerd-1cJbbb2HkC/containerd
+ extra_docker_manifest_flags=--insecure
+ registry=localhost:5000/reqs-payload
+ supported_arches=("linux/amd64" "linux/s390x")
+ main
+ build_payload
+ pushd /home/sdp/cc-operator/install/pre-install-payload
~/cc-operator/install/pre-install-payload ~/cc-operator/install/pre-install-payload
+ local tag
++ git rev-parse HEAD
+ tag=6a496310b2648d48b3ad23a54a9630ec1ef9791c
+ manifest_args=()
+ for arch in "${supported_arches[@]}"
+ setup_env_for_arch linux/amd64
+ case "$1" in
+ kernel_arch=x86_64
+ golang_arch=amd64
+ echo 'Building containerd payload image for linux/amd64'
Building containerd payload image for linux/amd64
+ docker buildx build --build-arg ARCH=amd64 --build-arg COCO_CONTAINERD_VERSION=1.6.8.2 --build-arg COCO_CONTAINERD_REPO=https://github.com/confidential-containers/containerd --build-arg OFFICIAL_CONTAINERD_VERSION=1.7.7 --build-arg OFFICIAL_CONTAINERD_REPO=https://github.com/containerd/containerd --build-arg VFIO_GPU_CONTAINERD_VERSION=1.7.0.0 --build-arg VFIO_GPU_CONTAINERD_REPO=https://github.com/confidential-containers/containerd --build-arg NYDUS_SNAPSHOTTER_VERSION=v0.13.13 --build-arg NYDUS_SNAPSHOTTER_REPO=https://github.com/containerd/nydus-snapshotter -t localhost:5000/reqs-payload:x86_64-6a496310b2648d48b3ad23a54a9630ec1ef9791c --platform=linux/amd64 --load .
[+] Building 31.2s (23/23) FINISHED                                                        docker:default
 => [internal] load .dockerignore                                                                    0.0s
 => => transferring context: 2B                                                                      0.0s
 => [internal] load build definition from Dockerfile                                                 0.0s
 => => transferring dockerfile: 4.88kB                                                               0.0s
 => [internal] load metadata for docker.io/library/golang:1.19-alpine                                0.9s
 => [internal] load metadata for docker.io/library/alpine:3.18                                       1.2s
 => [nydus-binary-downloader 1/2] FROM docker.io/library/golang:1.19-alpine@sha256:0ec0646e208ea58  24.5s
 => => resolve docker.io/library/golang:1.19-alpine@sha256:0ec0646e208ea58e5d29e558e39f2e59fccf39b7  0.0s
 => => sha256:0ec0646e208ea58e5d29e558e39f2e59fccf39b7bda306cb53bbaff91919eca5 1.65kB / 1.65kB       0.0s
 => => sha256:276692412aea6f9dd6cdc5725b2f1c05bef8df7223811afbc6aa16294e2903f9 1.16kB / 1.16kB       0.0s
 => => sha256:5da886f1a5e71f657d8205d2b8aec7c3de24070327553634520503326e53ae14 5.18kB / 5.18kB       0.0s
 => => sha256:7264a8db6415046d36d16ba98b79778e18accee6ffa71850405994cffa9be7de 3.40MB / 3.40MB       0.2s
 => => sha256:c4d48a809fc2256f8aa0aeee47998488d64409855adba00a7cb3007ab9f3286e 284.69kB / 284.69kB   0.1s
 => => sha256:e2e938b6148703b9e1e60b0db2cd54d7cfd2bd17c38fdd5b379bc4d4d5ec5285 122.48MB / 122.48MB   1.5s
 => => sha256:7896c2688058484dc40121d0b4e0a6133add530a2091f409987d54e8ffbac5ec 155B / 155B           0.3s
 => => extracting sha256:7264a8db6415046d36d16ba98b79778e18accee6ffa71850405994cffa9be7de            0.3s
 => => extracting sha256:c4d48a809fc2256f8aa0aeee47998488d64409855adba00a7cb3007ab9f3286e            0.1s
 => => extracting sha256:e2e938b6148703b9e1e60b0db2cd54d7cfd2bd17c38fdd5b379bc4d4d5ec5285           21.8s
 => => extracting sha256:7896c2688058484dc40121d0b4e0a6133add530a2091f409987d54e8ffbac5ec            0.0s
 => [internal] load build context                                                                    0.0s
 => => transferring context: 11.20kB                                                                 0.0s
 => [base 1/1] FROM docker.io/library/alpine:3.18@sha256:1875c923b73448b558132e7d4a44b815d078779ed7  0.9s
 => => resolve docker.io/library/alpine:3.18@sha256:1875c923b73448b558132e7d4a44b815d078779ed7a73f7  0.0s
 => => sha256:1875c923b73448b558132e7d4a44b815d078779ed7a73f76209c6372de95ea8d 1.64kB / 1.64kB       0.0s
 => => sha256:d9a39933bee4ccb6d934b7b5632cdf8c42658f3cecc5029681338f397142af6e 528B / 528B           0.0s
 => => sha256:8fd7cac70a4aaabb31d459b03e3534b14341c31429b678b650039ac45a606cfc 1.47kB / 1.47kB       0.0s
 => => sha256:73baa7ef167e70f1c0233fe09e741780d780ea16e78b3c1b6f4216e2afbbd03e 3.41MB / 3.41MB       0.4s
 => => extracting sha256:73baa7ef167e70f1c0233fe09e741780d780ea16e78b3c1b6f4216e2afbbd03e            0.3s
 => [coco-containerd-binary-downloader 1/1] RUN  mkdir -p /opt/confidential-containers-pre-install-  7.6s
 => [kubectl-binary-downloader 1/1] RUN  apk --no-cache add curl &&  curl -fL --progress-bar -o /us  6.6s
 => [vfio-gpu-containerd-binary-downloader 1/1] RUN  mkdir -p /opt/confidential-containers-pre-inst  8.0s
 => [stage-6  1/10] RUN apk --no-cache add bash gcompat                                              5.3s
 => [official-containerd-binary-downloader 1/1] RUN  mkdir -p /opt/confidential-containers-pre-inst  8.0s
 => [stage-6  2/10] COPY --from=coco-containerd-binary-downloader /opt/confidential-containers-pre-  0.2s
 => [stage-6  3/10] COPY --from=official-containerd-binary-downloader /opt/confidential-containers-  0.2s 
 => [stage-6  4/10] COPY --from=vfio-gpu-containerd-binary-downloader /opt/confidential-containers-  0.2s 
 => [nydus-binary-downloader 2/2] RUN mkdir -p /opt/confidential-containers-pre-install-artifacts/o  4.0s 
 => [stage-6  5/10] COPY --from=nydus-binary-downloader /opt/confidential-containers-pre-install-ar  0.2s 
 => [stage-6  6/10] COPY --from=kubectl-binary-downloader /usr/bin/kubectl /usr/bin/kubectl          0.2s 
 => [stage-6  7/10] COPY ./containerd/containerd-for-cc-override.conf /opt/confidential-containers-  0.0s 
 => [stage-6  8/10] COPY ./remote-snapshotter/nydus-snapshotter/nydus-snapshotter.service /opt/conf  0.0s 
 => [stage-6  9/10] COPY ./remote-snapshotter/nydus-snapshotter/config-coco-guest-pulling.toml /opt  0.0s 
 => [stage-6 10/10] COPY ./scripts/* /opt/confidential-containers-pre-install-artifacts/scripts/     0.0s 
 => exporting to image                                                                               0.4s 
 => => exporting layers                                                                              0.4s 
 => => writing image sha256:5a3631e3625d6aeab3e76710f3724ad13888fb2a96f5d48542eaddb6d3d339c1         0.0s 
 => => naming to localhost:5000/reqs-payload:x86_64-6a496310b2648d48b3ad23a54a9630ec1ef9791c         0.0s 
+ docker push localhost:5000/reqs-payload:x86_64-6a496310b2648d48b3ad23a54a9630ec1ef9791c                 
The push refers to repository [localhost:5000/reqs-payload]                                               
7c4d5e0d3fb9: Pushed 
2338503b35b8: Pushed 
781580df0b20: Pushed 
af671a8251df: Pushed 
31945c10bfc9: Pushed 
11b4160c84ad: Pushed 
29173137a82c: Pushed 
92e4f6166f12: Pushed 
35c5f1ecacee: Pushed 
aa85c9a7e05e: Pushed 
63ec0bd56cf3: Pushed 
x86_64-6a496310b2648d48b3ad23a54a9630ec1ef9791c: digest: sha256:14dd2f85ea80b3eb6cfc247d8d4fb0c763d9c1eefb10e959a6ad2ba400d6baa1 size: 2627
+ manifest_args+=(--amend "${registry}:${kernel_arch##*/}-${tag}")
+ for arch in "${supported_arches[@]}"
+ setup_env_for_arch linux/s390x
+ case "$1" in
+ kernel_arch=s390x
+ golang_arch=s390x
+ echo 'Building containerd payload image for linux/s390x'
Building containerd payload image for linux/s390x
+ docker buildx build --build-arg ARCH=s390x --build-arg COCO_CONTAINERD_VERSION=1.6.8.2 --build-arg COCO_CONTAINERD_REPO=https://github.com/confidential-containers/containerd --build-arg OFFICIAL_CONTAINERD_VERSION=1.7.7 --build-arg OFFICIAL_CONTAINERD_REPO=https://github.com/containerd/containerd --build-arg VFIO_GPU_CONTAINERD_VERSION=1.7.0.0 --build-arg VFIO_GPU_CONTAINERD_REPO=https://github.com/confidential-containers/containerd --build-arg NYDUS_SNAPSHOTTER_VERSION=v0.13.13 --build-arg NYDUS_SNAPSHOTTER_REPO=https://github.com/containerd/nydus-snapshotter -t localhost:5000/reqs-payload:s390x-6a496310b2648d48b3ad23a54a9630ec1ef9791c --platform=linux/s390x --load .
[+] Building 38.3s (23/23) FINISHED                                                        docker:default
 => [internal] load .dockerignore                                                                    0.0s
 => => transferring context: 2B                                                                      0.0s
 => [internal] load build definition from Dockerfile                                                 0.0s
 => => transferring dockerfile: 4.88kB                                                               0.0s
 => [internal] load metadata for docker.io/library/golang:1.19-alpine                                0.7s
 => [internal] load metadata for docker.io/library/alpine:3.18                                       0.7s
 => [internal] load build context                                                                    0.0s
 => => transferring context: 510B                                                                    0.0s
 => [base 1/1] FROM docker.io/library/alpine:3.18@sha256:1875c923b73448b558132e7d4a44b815d078779ed7  0.7s
 => => resolve docker.io/library/alpine:3.18@sha256:1875c923b73448b558132e7d4a44b815d078779ed7a73f7  0.0s
 => => sha256:606d0994dba8dcb8d96bdbc4d2fa6aab25b2558dc000b820766733dc7d4534a6 1.47kB / 1.47kB       0.0s
 => => sha256:784fb6b11ccf355c65e296bcf7ef3623ff36c33ac16292b79440c32ec3699700 3.23MB / 3.23MB       0.2s
 => => sha256:1875c923b73448b558132e7d4a44b815d078779ed7a73f76209c6372de95ea8d 1.64kB / 1.64kB       0.0s
 => => sha256:c188397a7c757726752001ff56a1c6abafa66131d7509d80b8c49bcdde8aabcc 528B / 528B           0.0s
 => => extracting sha256:784fb6b11ccf355c65e296bcf7ef3623ff36c33ac16292b79440c32ec3699700            0.3s
 => [nydus-binary-downloader 1/2] FROM docker.io/library/golang:1.19-alpine@sha256:0ec0646e208ea58  25.5s
 => => resolve docker.io/library/golang:1.19-alpine@sha256:0ec0646e208ea58e5d29e558e39f2e59fccf39b7  0.0s
 => => sha256:0ec0646e208ea58e5d29e558e39f2e59fccf39b7bda306cb53bbaff91919eca5 1.65kB / 1.65kB       0.0s
 => => sha256:7b47cff2c98995690cd81fc6d18c0f7b726bf92d375268791e8bcd42553f7862 1.16kB / 1.16kB       0.0s
 => => sha256:8ece8eb6ca448b9da6d6ac6a8ba791f0a5ad3ffb4feb9bf0ef0932da14896346 5.18kB / 5.18kB       0.0s
 => => sha256:8bed2eae372fe236061920d89ae1ce89695a12df84989113bcc7ce4bd9774456 3.21MB / 3.21MB       0.3s
 => => sha256:ec90be18226e5c99d10161aed1a143f4134093c55b4d6979bbdbbe4b0683eb11 285.09kB / 285.09kB   0.4s
 => => sha256:8a62238188658a20afe263e4174c269ab91e68e86bf02db75dac3bbdacfe253c 120.93MB / 120.93MB   2.3s
 => => extracting sha256:8bed2eae372fe236061920d89ae1ce89695a12df84989113bcc7ce4bd9774456            0.3s
 => => sha256:3ac2bd91a23c4e119cd9dc376dcfc3d3d5d128d7f9723eb0d4b575e3d810f15b 156B / 156B           0.5s
 => => extracting sha256:ec90be18226e5c99d10161aed1a143f4134093c55b4d6979bbdbbe4b0683eb11            0.1s
 => => extracting sha256:8a62238188658a20afe263e4174c269ab91e68e86bf02db75dac3bbdacfe253c           22.0s
 => => extracting sha256:3ac2bd91a23c4e119cd9dc376dcfc3d3d5d128d7f9723eb0d4b575e3d810f15b            0.0s
 => [coco-containerd-binary-downloader 1/1] RUN  mkdir -p /opt/confidential-containers-pre-install  17.7s
 => [vfio-gpu-containerd-binary-downloader 1/1] RUN  mkdir -p /opt/confidential-containers-pre-ins  17.7s
 => [stage-6  1/10] RUN apk --no-cache add bash gcompat                                              8.8s
 => [kubectl-binary-downloader 1/1] RUN  apk --no-cache add curl &&  curl -fL --progress-bar -o /u  14.7s
 => [official-containerd-binary-downloader 1/1] RUN  mkdir -p /opt/confidential-containers-pre-ins  20.0s
 => [stage-6  2/10] COPY --from=coco-containerd-binary-downloader /opt/confidential-containers-pre-  0.2s
 => [stage-6  3/10] COPY --from=official-containerd-binary-downloader /opt/confidential-containers-  0.2s 
 => [stage-6  4/10] COPY --from=vfio-gpu-containerd-binary-downloader /opt/confidential-containers-  0.2s 
 => [nydus-binary-downloader 2/2] RUN mkdir -p /opt/confidential-containers-pre-install-artifacts/  10.7s 
 => [stage-6  5/10] COPY --from=nydus-binary-downloader /opt/confidential-containers-pre-install-ar  0.2s 
 => [stage-6  6/10] COPY --from=kubectl-binary-downloader /usr/bin/kubectl /usr/bin/kubectl          0.2s 
 => [stage-6  7/10] COPY ./containerd/containerd-for-cc-override.conf /opt/confidential-containers-  0.0s 
 => [stage-6  8/10] COPY ./remote-snapshotter/nydus-snapshotter/nydus-snapshotter.service /opt/conf  0.0s 
 => [stage-6  9/10] COPY ./remote-snapshotter/nydus-snapshotter/config-coco-guest-pulling.toml /opt  0.0s 
 => [stage-6 10/10] COPY ./scripts/* /opt/confidential-containers-pre-install-artifacts/scripts/     0.0s 
 => exporting to image                                                                               0.4s 
 => => exporting layers                                                                              0.4s 
 => => writing image sha256:ca45e42e9fb193a85c606f89c6d101597b4e974156a5170b40ef5e715e1f04fc         0.0s 
 => => naming to localhost:5000/reqs-payload:s390x-6a496310b2648d48b3ad23a54a9630ec1ef9791c          0.0s 
+ docker push localhost:5000/reqs-payload:s390x-6a496310b2648d48b3ad23a54a9630ec1ef9791c                  
The push refers to repository [localhost:5000/reqs-payload]                                               
2e16dbd547d6: Pushed 
92acfc7a741d: Pushed 
30a81a02436f: Pushed 
ee43530cefd4: Pushed 
5694569416c3: Pushed 
0a9195626364: Pushed 
b6e41598f38b: Pushed 
c04de0d01492: Pushed 
3985b1badfd8: Pushed 
2b0cb965c8b6: Pushed 
f645e7050c9d: Pushed 
s390x-6a496310b2648d48b3ad23a54a9630ec1ef9791c: digest: sha256:6b7ebd23c6e16878e8cc7061e038b476a66dc13d580699a81bd2d342acb66845 size: 2627
+ manifest_args+=(--amend "${registry}:${kernel_arch##*/}-${tag}")
+ purge_previous_manifests localhost:5000/reqs-payload:6a496310b2648d48b3ad23a54a9630ec1ef9791c
+ local manifest
+ local sanitised_manifest
+ manifest=localhost:5000/reqs-payload:6a496310b2648d48b3ad23a54a9630ec1ef9791c
++ echo localhost:5000/reqs-payload:6a496310b2648d48b3ad23a54a9630ec1ef9791c
++ sed 's|/|_|g'
++ sed 's|:|-|g'
+ sanitised_manifest=localhost-5000_reqs-payload-6a496310b2648d48b3ad23a54a9630ec1ef9791c
+ rm -rf /home/sdp/.docker/manifests/localhost-5000_reqs-payload-6a496310b2648d48b3ad23a54a9630ec1ef9791c
+ purge_previous_manifests localhost:5000/reqs-payload:latest
+ local manifest
+ local sanitised_manifest
+ manifest=localhost:5000/reqs-payload:latest
++ echo localhost:5000/reqs-payload:latest
++ sed 's|/|_|g'
++ sed 's|:|-|g'
+ sanitised_manifest=localhost-5000_reqs-payload-latest
+ rm -rf /home/sdp/.docker/manifests/localhost-5000_reqs-payload-latest
+ docker manifest create --insecure localhost:5000/reqs-payload:6a496310b2648d48b3ad23a54a9630ec1ef9791c --amend localhost:5000/reqs-payload:x86_64-6a496310b2648d48b3ad23a54a9630ec1ef9791c --amend localhost:5000/reqs-payload:s390x-6a496310b2648d48b3ad23a54a9630ec1ef9791c
Created manifest list localhost:5000/reqs-payload:6a496310b2648d48b3ad23a54a9630ec1ef9791c
+ docker manifest create --insecure localhost:5000/reqs-payload:latest --amend localhost:5000/reqs-payload:x86_64-6a496310b2648d48b3ad23a54a9630ec1ef9791c --amend localhost:5000/reqs-payload:s390x-6a496310b2648d48b3ad23a54a9630ec1ef9791c
Created manifest list localhost:5000/reqs-payload:latest
+ docker manifest push --insecure localhost:5000/reqs-payload:6a496310b2648d48b3ad23a54a9630ec1ef9791c
sha256:11ac8ddf52158399c098e27d2591d5d982cdf228c705d12c03e3dfdcc957dd94
+ docker manifest push --insecure localhost:5000/reqs-payload:latest
sha256:11ac8ddf52158399c098e27d2591d5d982cdf228c705d12c03e3dfdcc957dd94
+ popd
~/cc-operator/install/pre-install-payload
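Condensed, the payload-publishing flow traced above boils down to: build and push one image per architecture, then stitch the per-arch tags into a single manifest list (values copied from the trace, build flags abbreviated):

registry=localhost:5000/reqs-payload
tag=6a496310b2648d48b3ad23a54a9630ec1ef9791c
# per-arch images were built with `docker buildx build --platform ... --load`
# and pushed as ${registry}:x86_64-${tag} and ${registry}:s390x-${tag}; then:
docker manifest create --insecure "${registry}:${tag}" \
    --amend "${registry}:x86_64-${tag}" \
    --amend "${registry}:s390x-${tag}"
docker manifest push --insecure "${registry}:${tag}"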
ccruntime.confidentialcontainers.org/ccruntime-sample created
::debug:: Pod: cc-operator-controller-manager-ccbbcfdf7-n7vxl, Container: kube-rbac-proxy, Restart count: 0
::debug:: Pod: manager, Container: 0, Restart count: 
::debug:: Pod: cc-operator-daemon-install-9mjjp, Container: cc-runtime-install-pod, Restart count: 0
::debug:: Pod: cc-operator-pre-install-daemon-8rtgd, Container: cc-runtime-pre-install-pod, Restart count: 0
::info:: No new restarts in 3x21s, proceeding...
::info:: Run tests
INFO: Running operator tests for kata-qemu
operator_tests.bats
 ✓ [cc][operator] Test can uninstall the operator
 ✓ [cc][operator] Test can reinstall the operator

2 tests, 0 failures

::info:: Uninstall the operator
ccruntime.confidentialcontainers.org "ccruntime-sample" deleted
::error:: there are ccruntime pods still running
::group::Describe pods from confidential-containers-system namespace
Name:         cc-operator-controller-manager-ccbbcfdf7-vt8x6
Namespace:    confidential-containers-system
Priority:     0
Node:         984fee00befb.jf.intel.com/10.23.153.131
Start Time:   Fri, 21 Jun 2024 01:00:21 -0700
Labels:       control-plane=controller-manager
              pod-template-hash=ccbbcfdf7
Annotations:  <none>
Status:       Running
IP:           10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/cc-operator-controller-manager-ccbbcfdf7
Containers:
  kube-rbac-proxy:
    Container ID:  containerd://ca45fac7d77d8c16f95785de79fefa8d04bf0160ee6b5ed2a6f7fd993a98bcc8
    Image:         gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1
    Image ID:      gcr.io/kubebuilder/kube-rbac-proxy@sha256:d4883d7c622683b3319b5e6b3a7edfbf2594c18060131a8bf64504805f875522
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8080/
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Fri, 21 Jun 2024 01:00:22 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zvbld (ro)
  manager:
    Container ID:  containerd://6614be18fb7da7803fe6e36f1bfdf487430bae392fce7e22713b2e154f40adf4
    Image:         localhost:5000/cc-operator:latest
    Image ID:      localhost:5000/cc-operator@sha256:8dfa32006a940c543197cbe3fc52ebda40315bfb49cb19455ded658a2d69f9b7
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    Args:
      --health-probe-bind-address=:8081
      --metrics-bind-address=127.0.0.1:8080
      --leader-elect
    State:          Running
      Started:      Fri, 21 Jun 2024 01:00:22 -0700
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  100Mi
    Requests:
      cpu:      100m
      memory:   20Mi
    Liveness:   http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      CCRUNTIME_NAMESPACE:  confidential-containers-system (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zvbld (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-zvbld:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  12m   default-scheduler  Successfully assigned confidential-containers-system/cc-operator-controller-manager-ccbbcfdf7-vt8x6 to 984fee00befb.jf.intel.com
  Normal   Pulled     12m   kubelet            Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.13.1" already present on machine
  Normal   Created    12m   kubelet            Created container kube-rbac-proxy
  Normal   Started    12m   kubelet            Started container kube-rbac-proxy
  Normal   Pulling    12m   kubelet            Pulling image "localhost:5000/cc-operator:latest"
  Normal   Pulled     12m   kubelet            Successfully pulled image "localhost:5000/cc-operator:latest" in 33.701652ms
  Normal   Created    12m   kubelet            Created container manager
  Normal   Started    12m   kubelet            Started container manager
  Warning  Unhealthy  12m   kubelet            Liveness probe failed: Get "http://10.244.0.9:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  12m   kubelet            Readiness probe failed: Get "http://10.244.0.9:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)


Name:         cc-operator-pre-install-daemon-hpjsq
Namespace:    confidential-containers-system
Priority:     0
Node:         984fee00befb.jf.intel.com/10.23.153.131
Start Time:   Fri, 21 Jun 2024 01:00:36 -0700
Labels:       controller-revision-hash=84f7757fb
              name=cc-operator-pre-install-daemon
              pod-template-generation=1
Annotations:  <none>
Status:       Running
IP:           10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  DaemonSet/cc-operator-pre-install-daemon
Containers:
  cc-runtime-pre-install-pod:
    Container ID:  containerd://99e6d5922134b25a7e4bdda520c631f5543ca249624d4b7a02a63f04b533fe30
    Image:         localhost:5000/reqs-payload
    Image ID:      localhost:5000/reqs-payload@sha256:11ac8ddf52158399c098e27d2591d5d982cdf228c705d12c03e3dfdcc957dd94
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      /opt/confidential-containers-pre-install-artifacts/scripts/pre-install.sh
    State:          Running
      Started:      Fri, 21 Jun 2024 01:00:37 -0700
    Ready:          True
    Restart Count:  0
    Environment:
      NODE_NAME:                     (v1:spec.nodeName)
      INSTALL_OFFICIAL_CONTAINERD:  false
    Mounts:
      /etc/containerd/ from containerd-conf (rw)
      /etc/systemd/system/ from etc-systemd-system (rw)
      /opt/confidential-containers/ from confidential-containers-artifacts (rw)
      /usr/local/bin/ from local-bin (rw)
      /var/lib/containerd-nydus/ from containerd-nydus (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lszvl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  confidential-containers-artifacts:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/confidential-containers/
    HostPathType:  DirectoryOrCreate
  etc-systemd-system:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/systemd/system/
    HostPathType:  
  containerd-conf:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/containerd/
    HostPathType:  
  local-bin:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/local/bin/
    HostPathType:  
  containerd-nydus:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/containerd-nydus/
    HostPathType:  
  kube-api-access-lszvl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              node.kubernetes.io/worker=
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  12m   default-scheduler  Successfully assigned confidential-containers-system/cc-operator-pre-install-daemon-hpjsq to 984fee00befb.jf.intel.com
  Normal  Pulling    12m   kubelet            Pulling image "localhost:5000/reqs-payload"
  Normal  Pulled     12m   kubelet            Successfully pulled image "localhost:5000/reqs-payload" in 51.967048ms
  Normal  Created    12m   kubelet            Created container cc-runtime-pre-install-pod
  Normal  Started    12m   kubelet            Started container cc-runtime-pre-install-pod
::endgroup::
::info:: Shutdown the cluster
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
::info:: Undo the bootstrap

PLAY [all] ***********************************************************************************************

TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]

TASK [Uninstall go] **************************************************************************************
changed: [localhost] => (item=/usr/local/bin/go)
changed: [localhost] => (item=/usr/local/go)

PLAY [all] ***********************************************************************************************

TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]

TASK [Check containerd is installed] *********************************************************************
changed: [localhost]

TASK [Re-create containerd config] ***********************************************************************
changed: [localhost]

TASK [Restart containerd service] ************************************************************************
changed: [localhost]

PLAY [all] ***********************************************************************************************

TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]

TASK [Uninstall flannel] *********************************************************************************
changed: [localhost]

TASK [Check kubelet is installed] ************************************************************************
changed: [localhost]

TASK [Stop kubelet service] ******************************************************************************
ok: [localhost]

TASK [Delete kubelet service files] **********************************************************************
changed: [localhost] => (item=/etc/systemd/system/kubelet.service)
changed: [localhost] => (item=/etc/systemd/system/kubelet.service.d)

TASK [Delete the kubeadm configuration directory] ********************************************************
changed: [localhost]

TASK [Remove kube binaries] ******************************************************************************
changed: [localhost] => (item=crictl)
changed: [localhost] => (item=kubeadm)
changed: [localhost] => (item=kubectl)
changed: [localhost] => (item=kubelet)

TASK [Uninstall cni] *************************************************************************************
changed: [localhost]

PLAY [all] ***********************************************************************************************

TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]

TASK [Remove the docker registry] ************************************************************************
changed: [localhost]

PLAY [all] ***********************************************************************************************

TASK [Gathering Facts] ***********************************************************************************
ok: [localhost]

TASK [Uninstall go] **************************************************************************************
ok: [localhost] => (item=/usr/local/bin/go)
ok: [localhost] => (item=/usr/local/go)
changed: [localhost] => (item=/usr/local/bin/bats)
changed: [localhost] => (item=/usr/local/bin/kustomize)

PLAY RECAP ***********************************************************************************************
localhost                  : ok=18   changed=12   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

::info:: Testing passed

fidencio added 3 commits June 21, 2024 10:24
The TDX machine is using Ubuntu 24.04, so we need to expand the current
check.

Signed-off-by: Fabiano Fidêncio <[email protected]>
Let's switch the logic, and ensure that 24.04 also installs containerd
from the distro.

Signed-off-by: Fabiano Fidêncio <[email protected]>
There's no need to use pip at all, as we can rely on the packages with
the correct versions coming from the distro.

This is Ubuntu-specific, but it doesn't add new technical debt; it
just keeps the same technical debt we already had.

Signed-off-by: Fabiano Fidêncio <[email protected]>
@fidencio fidencio force-pushed the topic/use-redsocks-in-the-TDX-ci branch from 6a49631 to c52d54b Compare June 21, 2024 08:25
@fidencio fidencio requested a review from stevenhorsman June 21, 2024 08:31
@fidencio
Member Author

Got a green run; now I'm re-running the tests to make sure we've covered the leftover issues as well.

@fidencio fidencio marked this pull request as ready for review June 21, 2024 09:18
@fidencio
Member Author

Wow!

fatal: [localhost]: FAILED! => {"attempts": 3, "changed": true, "cmd": "curl -s --retry 3 --retry-delay 10 \"https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh\" | bash\ncp -f ./kustomize /usr/local/bin\n", "delta": "0:00:00.284000", "end": "2024-06-21 02:19:13.502888", "msg": "non-zero return code", "rc": 1, "start": "2024-06-21 02:19:13.218888", "stderr": "cp: cannot stat './kustomize': No such file or directory", "stderr_lines": ["cp: cannot stat './kustomize': No such file or directory"], "stdout": "Github rate-limiter failed the request. Either authenticate or wait a couple of minutes.", "stdout_lines": ["Github rate-limiter failed the request. Either authenticate or wait a couple of minutes."]}

This is something I was not actually expecting to hit. Regardless, this is not unique to the TDX machine and should not block this PR from getting merged.
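
One way to sidestep the rate-limited install script, should this bite again, would be to fetch a pinned kustomize release tarball directly; release-asset downloads don't go through the GitHub API the way the install script's "latest release" lookup does. A sketch (the version pin is an assumption, not something this PR uses):

KUSTOMIZE_VERSION=v5.0.1   # hypothetical pin
curl -fsSL --retry 3 --retry-delay 10 \
    "https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2F${KUSTOMIZE_VERSION}/kustomize_${KUSTOMIZE_VERSION}_linux_amd64.tar.gz" \
    | tar -xz -C /usr/local/bin kustomize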

@ldoktor
Contributor

ldoktor commented Jun 21, 2024

lgtm, let's wait for the CI to pass, though

- requests<2.32
# This is Ubuntu specific
- python3-docker
- python3-requests
@stevenhorsman
Member

My concern here is that, due to a breaking change in requests 2.32, we had to pin a version less than that (but greater than 2.28, IIRC?).

What version of python3-requests is installed on the three different versions of Ubuntu we are supporting? And are we at risk of falling into incompatibility again if there are distro updates, or do we think it is safe to rely on Canonical to ensure that doesn't happen?

@fidencio
Member Author

This is a very good question, and I'd like to challenge the claim that it doesn't work with anything that isn't greater than 2.28.

Let's take a look at what we have in our CIs:

  • azure -> Ubuntu 20.04 | python3-requests 2.22.0 -> CI passes with this PR
  • azure -> Ubuntu 22.04 | python3-requests 2.25.1 -> CI passes with this PR
  • TDX -> Ubuntu 24.04 | python3-requests 2.31.0 -> CI passes with this PR
  • s390x -> ? -> CI passes with this PR
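
If we ever want an explicit guard against distro updates crossing the known-breaking boundary, something like this could be dropped into the setup (a hedged sketch; the 2.32 bound comes from the pin this PR removes):

ver="$(dpkg-query -W -f='${Version}' python3-requests)"
if dpkg --compare-versions "$ver" ge 2.32; then
    echo "WARNING: python3-requests $ver is at or above the known-breaking 2.32"
fi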

When using pip, the first thing I faced was:

fatal: [localhost]: FAILED! => {"attempts": 3, "changed": false, "cmd": ["/usr/bin/python3", "-m", "pip.__main__", "install", "--user", "docker", "requests<2.32"], "msg": "\n:stderr: error: externally-managed-environment\n\n× This environment is externally managed\n╰─> To install Python packages system-wide, try apt install\n    python3-xyz, where xyz is the package you are trying to\n    install.\n    \n    If you wish to install a non-Debian-packaged Python package,\n    create a virtual environment using python3 -m venv path/to/venv.\n    Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make\n    sure you have python3-full installed.\n    \n    If you wish to install a non-Debian packaged Python application,\n    it may be easiest to use pipx install xyz, which will manage a\n    virtual environment for you. Make sure you have pipx installed.\n    \n    See /usr/share/doc/python3.12/README.venv for more information.\n\nnote: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.\nhint: See PEP 668 for the detailed specification.\n"}

So, being totally honest, I'd rather defer the decision between using pip (and risking breaking the system packages) and dealing with a distro update that may break Ubuntu (in which case TDX would likely be the first one receiving such an update, and it would fall on me / Intel to solve it).

Let me know if you're comfortable enough with this, Steve.
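
And should pip ever become unavoidable again, the PEP 668 error above already points at the escape hatch: a dedicated virtual environment instead of --break-system-packages. A sketch (the venv path is an assumption):

python3 -m venv /opt/ci-venv   # hypothetical location
/opt/ci-venv/bin/pip install docker 'requests<2.32'
/opt/ci-venv/bin/python -c 'import docker; print(docker.__version__)'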

@stevenhorsman
Member

stevenhorsman commented Jun 21, 2024

Ok, it might be that pre-2.28 doesn't work with the latest python-docker, and the distros have an older docker package that works, so if the tests pass I'm okay to carry this for now.

For reference, this was the error that Choi was seeing with it:

The pinned version of requests (2.28.1) is no longer compatible with the current
docker pip, resulting in errors like:

"Error connecting: Error while fetching server API version: Not supported URL scheme http+docker"

@fidencio
Member Author

Merging after double-checking with @stevenhorsman that he's okay with this approach, at least for now.

@fidencio fidencio merged commit ec6d839 into confidential-containers:main Jun 21, 2024
9 checks passed
Development

Successfully merging this pull request may close these issues.

TDX machine has been set up, but tests are not working due to the proxies' weirdness
3 participants