
[container-runtime=cri-o] kube-proxy fails with apply caps: operation not permitted [both CentOS8 and MacOS] #13742

Closed
zjgemi opened this issue Mar 2, 2022 · 18 comments
Labels
co/runtime/crio: CRIO related issues
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments


zjgemi commented Mar 2, 2022

What Happened?

kube-proxy always reports container_linux.go:380: starting container process caused: apply caps: operation not permitted when I use cri-o as the container runtime, on both CentOS 8 and macOS.

On CentOS 8:
uname -a

Linux master 4.18.0-365.el8.x86_64 #1 SMP Thu Feb 10 16:11:23 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

systemctl --version

systemd 239 (239-58.el8)
+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy

minikube start --container-runtime=cri-o --alsologtostderr
centos8_start.log
kubectl get pods -n kube-system

NAME                               READY   STATUS                 RESTARTS       AGE
coredns-64897985d-dzxmg            0/1     Pending                0              7m14s
etcd-minikube                      1/1     Running                0              7m21s
kindnet-kngq4                      1/1     Running                2 (111s ago)   7m14s
kube-apiserver-minikube            1/1     Running                0              7m31s
kube-controller-manager-minikube   1/1     Running                0              7m31s
kube-proxy-xf5c5                   0/1     CreateContainerError   0              7m14s
kube-scheduler-minikube            1/1     Running                0              7m31s
storage-provisioner                0/1     Pending                0              7m22s

kubectl describe pod kube-proxy-xf5c5 -n kube-system

Name:                 kube-proxy-xf5c5
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 minikube/192.168.49.2
Start Time:           Wed, 02 Mar 2022 02:46:18 -0500
Labels:               controller-revision-hash=5584f47d6d
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  
    Image:         k8s.gcr.io/kube-proxy:v1.23.3
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l7bfz (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kube-api-access-l7bfz:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  7m56s                   default-scheduler  Successfully assigned kube-system/kube-proxy-xf5c5 to minikube
  Warning  Failed     7m55s                   kubelet            Error: container create failed: time="2022-03-02T07:46:19Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     7m54s                   kubelet            Error: container create failed: time="2022-03-02T07:46:20Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     7m40s                   kubelet            Error: container create failed: time="2022-03-02T07:46:34Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     7m28s                   kubelet            Error: container create failed: time="2022-03-02T07:46:46Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     7m17s                   kubelet            Error: container create failed: time="2022-03-02T07:46:57Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     7m3s                    kubelet            Error: container create failed: time="2022-03-02T07:47:11Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     6m50s                   kubelet            Error: container create failed: time="2022-03-02T07:47:24Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     6m36s                   kubelet            Error: container create failed: time="2022-03-02T07:47:38Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     6m21s                   kubelet            Error: container create failed: time="2022-03-02T07:47:53Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     5m44s (x3 over 6m9s)    kubelet            (combined from similar events): Error: container create failed: time="2022-03-02T07:48:30Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Normal   Pulled     2m52s (x24 over 7m56s)  kubelet            Container image "k8s.gcr.io/kube-proxy:v1.23.3" already present on machine

On macOS:
minikube start --container-runtime=cri-o --alsologtostderr
macos_start.log
kubectl get pods -n kube-system

NAME                               READY   STATUS                 RESTARTS      AGE
coredns-78fcd69978-vhdj4           0/1     ContainerCreating      0             44s
etcd-minikube                      1/1     Running                0             51s
kindnet-977g2                      1/1     Running                0             43s
kube-apiserver-minikube            1/1     Running                0             51s
kube-controller-manager-minikube   1/1     Running                0             51s
kube-proxy-9nfsm                   0/1     CreateContainerError   0             43s
kube-scheduler-minikube            1/1     Running                0             51s
storage-provisioner                1/1     Running                2 (22s ago)   55s

kubectl describe pod kube-proxy-9nfsm -n kube-system

Name:                 kube-proxy-9nfsm
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 minikube/192.168.49.2
Start Time:           Wed, 02 Mar 2022 16:05:35 +0800
Labels:               controller-revision-hash=674d79d6f9
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:  
    Image:         k8s.gcr.io/kube-proxy:v1.22.3
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hvglr (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:  
  kube-api-access-hvglr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  67s               default-scheduler  Successfully assigned kube-system/kube-proxy-9nfsm to minikube
  Warning  Failed     67s               kubelet            Error: container create failed: time="2022-03-02T08:05:35Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     66s               kubelet            Error: container create failed: time="2022-03-02T08:05:36Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     54s               kubelet            Error: container create failed: time="2022-03-02T08:05:48Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     43s               kubelet            Error: container create failed: time="2022-03-02T08:05:59Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     31s               kubelet            Error: container create failed: time="2022-03-02T08:06:11Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     17s               kubelet            Error: container create failed: time="2022-03-02T08:06:25Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Normal   Pulled     5s (x7 over 67s)  kubelet            Container image "k8s.gcr.io/kube-proxy:v1.22.3" already present on machine
  Warning  Failed     5s                kubelet            Error: container create failed: time="2022-03-02T08:06:37Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"

Attach the log file

On CentOS 8:
minikube logs --file=log.txt
log_centos8.txt

On macOS:
minikube logs --file=log.txt
log_macos.txt

Operating System

macOS (Default)

Driver

Docker


RA489 commented Mar 3, 2022

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Mar 3, 2022

klaases commented Apr 6, 2022

Hi @zjgemi, just want to check in: is this issue still occurring? Were you able to find any workarounds that seemed to get things moving?


klaases commented May 18, 2022

Hi @zjgemi – is this issue still occurring? Are additional details available? If so, please feel free to reopen the issue by commenting with /reopen. This issue will be closed, as additional information was unavailable and some time has passed.

Additional information that may be helpful:

  • Whether the issue occurs with the latest minikube release

  • The exact minikube start command line used

  • Attach the full output of minikube logs, run minikube logs --file=logs.txt to create a log file

Thank you for sharing your experience!

@klaases klaases closed this as completed May 18, 2022

MrZLeo commented Nov 27, 2022

It happened to me when I used minikube on Fedora 34:

OS version

❯ uname -a
Linux ip-172-31-23-162.ec2.internal 5.11.12-300.fc34.x86_64 #1 SMP Wed Apr 7 16:31:13 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

minikube version

❯ minikube version
minikube version: v1.28.0
commit: 986b1ebd987211ed16f8cc10aed7d2c42fc8392f

Reproduce

First, delete all existing clusters:

❯ minikube delete --all

🔥  Deleting "minikube" in docker ...
🔥  Removing /home/fedora/.minikube/machines/minikube ...
💀  Removed all traces of the "minikube" cluster.
🔥  Successfully deleted all profiles

Then try to start:

❯ minikube start --container-runtime=cri-o --extra-config=kubelet.cgroup-driver=systemd
😄  minikube v1.28.0 on Fedora 34
✨  Automatically selected the docker driver. Other choices: none, ssh
📌  Using Docker driver with root privileges
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=3800MB) ...
🎁  Preparing Kubernetes v1.25.3 on CRI-O 1.24.3 ...
    ▪ kubelet.cgroup-driver=systemd
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

But when I check the pods, kube-proxy has failed:

❯ kubectl get po -A
NAMESPACE     NAME                               READY   STATUS                 RESTARTS      AGE
kube-system   coredns-565d847f94-7t5gm           0/1     ContainerCreating      0             94s
kube-system   etcd-minikube                      1/1     Running                0             108s
kube-system   kindnet-mcsdv                      1/1     Running                0             94s
kube-system   kube-apiserver-minikube            1/1     Running                0             108s
kube-system   kube-controller-manager-minikube   1/1     Running                0             107s
kube-system   kube-proxy-7w4wg                   0/1     CreateContainerError   0             94s
kube-system   kube-scheduler-minikube            1/1     Running                0             108s
kube-system   storage-provisioner                1/1     Running                2 (33s ago)   104s

Information about the pod:

❯ kubectl describe pod kube-proxy-7w4wg -n kube-system
Name:                 kube-proxy-7w4wg
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      kube-proxy
Node:                 minikube/192.168.49.2
Start Time:           Sun, 27 Nov 2022 03:48:22 +0000
Labels:               controller-revision-hash=b9c5d5dc4
                      k8s-app=kube-proxy
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   192.168.49.2
IPs:
  IP:           192.168.49.2
Controlled By:  DaemonSet/kube-proxy
Containers:
  kube-proxy:
    Container ID:
    Image:         registry.k8s.io/kube-proxy:v1.25.3
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-proxy
      --config=/var/lib/kube-proxy/config.conf
      --hostname-override=$(NODE_NAME)
    State:          Waiting
      Reason:       CreateContainerError
    Ready:          False
    Restart Count:  0
    Environment:
      NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /var/lib/kube-proxy from kube-proxy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wqqlc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-proxy:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-proxy
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  kube-api-access-wqqlc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m18s                 default-scheduler  Successfully assigned kube-system/kube-proxy-7w4wg to minikube
  Warning  Failed     3m18s                 kubelet            Error: container create failed: time="2022-11-27T03:48:22Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     3m17s                 kubelet            Error: container create failed: time="2022-11-27T03:48:23Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     3m4s                  kubelet            Error: container create failed: time="2022-11-27T03:48:35Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     2m53s                 kubelet            Error: container create failed: time="2022-11-27T03:48:47Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     2m40s                 kubelet            Error: container create failed: time="2022-11-27T03:49:00Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     2m26s                 kubelet            Error: container create failed: time="2022-11-27T03:49:14Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     2m12s                 kubelet            Error: container create failed: time="2022-11-27T03:49:28Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     118s                  kubelet            Error: container create failed: time="2022-11-27T03:49:41Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     108s                  kubelet            Error: container create failed: time="2022-11-27T03:49:52Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Warning  Failed     73s (x3 over 97s)     kubelet            (combined from similar events): Error: container create failed: time="2022-11-27T03:50:27Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
  Normal   Pulled     61s (x13 over 3m18s)  kubelet            Container image "registry.k8s.io/kube-proxy:v1.25.3" already present on machine

The log is attached.
logs.txt


MrZLeo commented Nov 27, 2022

/reopen

@k8s-ci-robot

@MrZLeo: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@spowelljr spowelljr reopened this May 11, 2023
@spowelljr spowelljr added kind/bug Categorizes issue or PR as related to a bug. priority/backlog Higher priority than priority/awaiting-more-evidence. co/runtime/crio CRIO related issues and removed kind/support Categorizes issue or PR as a support question. labels May 11, 2023
@spowelljr

Can confirm this is still occurring.


mqasimsarfraz commented May 22, 2023

@spowelljr which Docker version are you using? cri-o seems to be working fine after I upgraded to a Docker version > v23.0.0. If it works for you as well, it makes sense to suggest users update Docker if they want to run cri-o. What do you say?
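As a quick sanity check for that threshold, something like the following could compare the host's Docker server version against v23 (a rough sketch; version string formats can vary by distro, and the "upgrade" message is only the suggestion from this thread):

```shell
# Print the Docker server version and flag it if it predates v23.0.0,
# the version after which cri-o reportedly started working.
ver=$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo "0.0.0")
major=${ver%%.*}
if [ "${major:-0}" -ge 23 ] 2>/dev/null; then
  echo "Docker $ver: at or above v23.0.0"
else
  echo "Docker $ver: consider upgrading to >= v23.0.0"
fi
```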

@spowelljr

@spowelljr which Docker version are you using? cri-o seems to be working fine after I upgraded to a Docker version > v23.0.0. If it works for you as well, it makes sense to suggest users update Docker if they want to run cri-o. What do you say?

I'm still experiencing this on macOS with Docker 23.0.5

@mqasimsarfraz

I'm still experiencing this on macOS with Docker 23.0.5

I am testing on Ubuntu 22.04. I wonder if it might be related to the kernel version. Does the following work fine for you:

docker run --cap-add CAP_BPF hello-world

It used to fail for me before the upgrade, but now it works fine. That should give us a hint!
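For a runtime-independent check, one can also read the process bounding set straight from /proc. This is a sketch, assuming CAP_BPF is capability number 39 (it was introduced in Linux 5.8); the has_cap helper is made up for illustration:

```shell
# has_cap NUM: succeed if capability number NUM is present in the
# bounding set of the current process (read from /proc/self/status).
has_cap() {
  capbnd=$(awk '/^CapBnd:/ {print $2}' /proc/self/status)
  [ $(( 0x$capbnd >> $1 & 1 )) -eq 1 ]
}

# CAP_BPF is capability number 39 on Linux >= 5.8.
if has_cap 39; then
  echo "kernel/environment exposes CAP_BPF"
else
  echo "CAP_BPF missing from bounding set"
fi
```

If this reports CAP_BPF missing inside the LinuxKit VM but present on a native Linux host, that would support the theory that the macOS VM is the limiting factor.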

@spowelljr

I still seem to be getting the error:

docker: Error response from daemon: invalid CapAdd: capability not supported by your kernel or not available in the current environment: "CAP_BPF".
See 'docker run --help'.


mqasimsarfraz commented May 24, 2023

Sadly, it seems these capabilities need to be supported at the kernel, Docker, and runc levels. In my case, the kernel already had them, so upgrading Docker fixed it for me, but it seems macOS (LinuxKit) doesn't support them. I can think of a couple of ways forward:

  • Open an issue in LinuxKit to see why it doesn't have an updated list of capabilities. (I can see a list here, but I'm not sure how it is used.)
  • Open an issue in cri-o and check if they can handle capabilities in a backward-compatible way; I see containerd already doing this.

Please let me know if the above makes sense, or if there is anything else we can try.
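To illustrate the second option: backward-compatible handling amounts to intersecting the capabilities a container requests with what the host actually supports, and dropping the rest instead of failing container creation. A minimal sketch with made-up capability lists (in a real runtime these would come from the container spec and something like capsh --print):

```shell
# Hypothetical example lists; not read from any real pod spec or host.
supported="CAP_CHOWN CAP_NET_ADMIN CAP_NET_RAW"
requested="CAP_NET_ADMIN CAP_BPF"

# Keep only requested caps the host supports; warn about the rest
# instead of refusing to create the container.
granted=""
for cap in $requested; do
  case " $supported " in
    *" $cap "*) granted="$granted $cap" ;;
    *) echo "dropping unsupported capability: $cap" >&2 ;;
  esac
done
echo "granted:$granted"
```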

@mqasimsarfraz

@spowelljr I figured out that the Docker service running in LinuxKit indeed doesn't support the capabilities. I have opened an issue at docker/for-mac#6883. Hopefully it will be resolved soon. Not sure if we should think about a workaround in the meantime.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 23, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 22, 2024
@mqasimsarfraz

It seems the upstream bug has been fixed: docker/for-mac#6883. I don't have a Mac to try this out. Perhaps if it works fine on Mac we can close this?

cc: @spowelljr

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Mar 23, 2024
@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
