
minikube tunnel should check if port is unprivileged before asking for sudo password #15721

Closed
idelsink opened this issue Jan 27, 2023 · 14 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@idelsink

idelsink commented Jan 27, 2023

What Happened?

On my system, I have enabled any user to bind to ports under 1024 (starting from port 80, to be exact). Whether that's good or bad is another discussion.

Currently, the minikube tunnel command uses a hardcoded threshold to decide when to ask for a sudo password before binding to specific ports.

See here:

On my system, there is no need for a sudo password to bind to, for example, ports 80 and 443. It would be nice if this command checked the relevant sysctl setting before asking for a password (a sketch of such a check is included below). This would save time, avoid unnecessary sudo elevation, and help all devs who want to automate local minikube setups in userspace.

# Output on my system
$ sysctl net.ipv4.ip_unprivileged_port_start
net.ipv4.ip_unprivileged_port_start = 80

Side note: I'm running minikube in userspace (rootless) with Podman.
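
For illustration, here is a minimal sketch (not minikube's actual code) of what such a check could look like: it reads Linux's /proc/sys/net/ipv4/ip_unprivileged_port_start and falls back to the classic 1024 threshold only when the sysctl cannot be read. Function names like needsSudo are made up for the example, and it deliberately ignores other ways low ports can be allowed (e.g. CAP_NET_BIND_SERVICE).

```go
// Sketch: decide whether binding to a port needs elevation by consulting
// net.ipv4.ip_unprivileged_port_start instead of a hardcoded 1024.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

const defaultPrivilegedThreshold = 1024

// unprivilegedPortStart returns the first port an unprivileged user may
// bind to. If the sysctl is unavailable (e.g. non-Linux), it falls back
// to the traditional 1024 boundary.
func unprivilegedPortStart() int {
	data, err := os.ReadFile("/proc/sys/net/ipv4/ip_unprivileged_port_start")
	if err != nil {
		return defaultPrivilegedThreshold
	}
	v, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return defaultPrivilegedThreshold
	}
	return v
}

// needsSudo reports whether binding to port would require elevation.
// (Hypothetical helper; does not cover CAP_NET_BIND_SERVICE.)
func needsSudo(port int) bool {
	return port < unprivilegedPortStart()
}

func main() {
	for _, p := range []int{80, 443, 8080} {
		fmt.Printf("port %d needs sudo: %v\n", p, needsSudo(p))
	}
}
```

With net.ipv4.ip_unprivileged_port_start = 80 as in the output above, this would report that ports 80 and 443 do not need sudo, so the tunnel could skip the password prompt entirely.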

Attach the log file

$ minikube tunnel --alsologtostderr                                                                                                                                                  
I0127 17:56:32.522803  163575 out.go:296] Setting OutFile to fd 1 ...                                                                                                                         
I0127 17:56:32.522862  163575 out.go:348] isatty.IsTerminal(1) = true                                                                                                                         
I0127 17:56:32.522866  163575 out.go:309] Setting ErrFile to fd 2...                                                                                                                          
I0127 17:56:32.522869  163575 out.go:348] isatty.IsTerminal(2) = true                                                                                                                         
I0127 17:56:32.522930  163575 root.go:334] Updating PATH: /home/ingmar/.minikube/bin                                                                                                          
I0127 17:56:32.523092  163575 mustload.go:65] Loading cluster: stroopwafel                                                                                                                    
I0127 17:56:32.523305  163575 config.go:180] Loaded profile config "stroopwafel": Driver=podman, ContainerRuntime=containerd, KubernetesVersion=v1.25.3                                       
I0127 17:56:32.523525  163575 cli_runner.go:164] Run: podman container inspect stroopwafel --format={{.State.Status}}                                                                         
I0127 17:56:32.565135  163575 host.go:66] Checking if "stroopwafel" exists ...                                                                                                                
I0127 17:56:32.565510  163575 cli_runner.go:164] Run: podman system info --format json                                                                                                        
I0127 17:56:32.641851  163575 info.go:288] podman info: {Host:{BuildahVersion:1.28.0 CgroupVersion:v2 Conmon:{Package:conmon-2.1.5-1.fc37.x86_64 Path:/usr/bin/conmon Version:conmon version 2.1.5, commit: } Distribution:{Distribution:fedora Version:37} MemFree:7211991040 MemTotal:33263648768 OCIRuntime:{Name:crun Package:crun-1.7.2-3.fc37.x86_64 Path:/usr/bin/crun Version:crun version 1.7.2
commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
rundir: /run/user/1000/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL} SwapFree:8589930496 SwapTotal:8589930496 Arch:amd64 Cpus:12 Eventlogger:journald Hostname:stroopwafel Kernel:6.1.5-200.fc37.x86_64 Os:linux Security:{Rootless:true} Uptime:7h 6m 12.00s (Approximately 0.29 days)} Registries:{Search:[docker.io]} Store:{ConfigFile:/home/ingmar/.config/containers/storage.conf ContainerStore:{Number:4} GraphDriverName:overlay GraphOptions:{} GraphRoot:/home/ingmar/.local/share/containers/storage GraphStatus:{BackingFilesystem:btrfs NativeOverlayDiff:true SupportsDType:true UsingMetacopy:false} ImageStore:{Number:290} RunRoot:/run/user/1000/containers VolumePath:/home/ingmar/.local/share/containers/storage/volumes}}
I0127 17:56:32.641929  163575 cli_runner.go:164] Run: podman version --format {{.Version}}
I0127 17:56:32.673671  163575 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" stroopwafel
I0127 17:56:32.713739  163575 api_server.go:165] Checking apiserver status ...
I0127 17:56:32.713882  163575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0127 17:56:32.713931  163575 cli_runner.go:164] Run: podman version --format {{.Version}}
I0127 17:56:32.743777  163575 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stroopwafel
I0127 17:56:32.779344  163575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:44749 SSHKeyPath:/home/ingmar/.minikube/machines/stroopwafel/id_rsa Username:docker}
I0127 17:56:32.865084  163575 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1025/cgroup
W0127 17:56:32.870323  163575 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1025/cgroup: Process exited with status 1
stdout:

stderr:
I0127 17:56:32.870345  163575 ssh_runner.go:195] Run: ls
I0127 17:56:32.872399  163575 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:39563/healthz ...
I0127 17:56:32.877635  163575 api_server.go:278] https://127.0.0.1:39563/healthz returned 200:
ok
I0127 17:56:32.877662  163575 tunnel.go:70] Checking for tunnels to cleanup...
I0127 17:56:32.880671  163575 cli_runner.go:164] Run: podman version --format {{.Version}}
I0127 17:56:32.910543  163575 cli_runner.go:164] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stroopwafel
I0127 17:56:32.948883  163575 out.go:177] ✅  Tunnel successfully started
✅  Tunnel successfully started
I0127 17:56:32.948917  163575 out.go:177] 

I0127 17:56:32.948925  163575 out.go:177] 📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
I0127 17:56:32.948931  163575 out.go:177] 

I0127 17:56:32.959616  163575 out.go:177] ❗  The service/ingress ingress-nginx-controller requires privileged ports to be exposed: [80 443]
❗  The service/ingress ingress-nginx-controller requires privileged ports to be exposed: [80 443]
I0127 17:56:32.959643  163575 out.go:177] 🔑  sudo permission will be asked for it.
🔑  sudo permission will be asked for it.
I0127 17:56:32.959868  163575 out.go:177] 🏃  Starting tunnel for service ingress-nginx-controller.
🏃  Starting tunnel for service ingress-nginx-controller.
I0127 17:56:32.962362  163575 loadbalancer_patcher.go:122] Patched ingress-nginx-controller with IP 127.0.0.1
I0127 17:56:32.962568  163575 out.go:177] ❗  The service/ingress whoami requires privileged ports to be exposed: [80 443]
❗  The service/ingress whoami requires privileged ports to be exposed: [80 443]
I0127 17:56:32.962598  163575 out.go:177] 🔑  sudo permission will be asked for it.
🔑  sudo permission will be asked for it.
I0127 17:56:32.962848  163575 out.go:177] 🏃  Starting tunnel for service whoami.
🏃  Starting tunnel for service whoami.
[sudo] password for ingmar:

Operating System

Redhat/Fedora

Driver

Podman

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 9, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 8, 2023
@idelsink
Author

idelsink commented Jun 9, 2023

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 9, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 21, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Mar 22, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@idelsink
Author

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Mar 22, 2024
@k8s-ci-robot
Contributor

@idelsink: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@idelsink
Author

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Aug 19, 2024