
Help request: please provide step-by-step instructions for using minikube tunnel on macOS with M1 chips #13207

Closed
blue928 opened this issue Dec 20, 2021 · 12 comments
Labels
arch/arm64 kind/documentation Categorizes issue or PR as related to documentation. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. os/macos priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@blue928 commented Dec 20, 2021

I am well aware of the challenges of viewing web apps with minikube on macOS using the docker driver. Like many, I'm suffering from the same pains expressed in #11193. I have painstakingly gone through every single suggestion, but no matter what I do or which tutorial I read, nothing works.

Starting with the docs at https://minikube.sigs.k8s.io/docs/start/, I select the configuration options for macOS and M1 and then follow the instructions. Either the instructions are incomplete or they are not correct. Whether my service is of type LoadBalancer or NodePort, and whether I use minikube tunnel, minikube tunnel -c, minikube service my-service, or minikube service my-service --url, nothing works. Following the Skaffold tutorial at https://skaffold.dev/docs/tutorials/developer-journey/ precisely, I can never get past the minikube tunnel step, and the curl command always returns connection refused.
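For reference, the LoadBalancer-plus-tunnel workflow those docs describe looks roughly like this. This is a minimal sketch: the hello deployment name and the kicbase/echo-server image follow the quick-start example, and <EXTERNAL-IP> is a placeholder to be filled in from the kubectl get svc output.

# Create and expose a test deployment (names are illustrative):
kubectl create deployment hello --image=kicbase/echo-server:1.0
kubectl expose deployment hello --type=LoadBalancer --port=8080

# In a second terminal; the tunnel must stay running, and on macOS it
# prompts for sudo so it can create the network route:
minikube tunnel

# Back in the first terminal, wait for EXTERNAL-IP to leave <pending>,
# then curl whichever address is reported:
kubectl get svc hello
curl http://<EXTERNAL-IP>:8080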

VirtualBox does not support M1, and HyperKit gives the same errors. I honestly don't care which driver I use at the moment, so long as I can get this to work.

If you are using a Mac with an M1 chip and you can get this to work, please provide exact step-by-step instructions on how you are doing it. Also, please share your YAML file so I can see how it differs from mine.

Thanks!

minikube version: v1.24.0
commit: 76b94fb

kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/arm64"}

@spowelljr spowelljr added kind/documentation Categorizes issue or PR as related to documentation. arch/arm64 priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. os/macos labels Jan 5, 2022
@vladnicula

I struggled for 2–3 hours today with the same issue. Would it be possible to at least add a warning to the docs so others like us don't waste their time until this is sorted out?

@michaely-cb

I've been stuck on this for a couple of days now.

@vladnicula

@michaely-cb there's a solution over in the issues that uses ingress. That worked for me.

@michaely-cb

@vladnicula That’s encouraging progress! Do you mind sharing that solution?

@vladnicula

I think it's this one: #11193 (comment) (I had to go to my browser history and search for it, hah). I don't understand everything that's going on there (yet), but with that setup I was able to receive a response. That @zhan9san fellow is a hero 👍
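For readers who don't want to dig through the link, the workaround discussed there is, roughly, to go through the ingress addon instead of a plain LoadBalancer service. A sketch under that assumption (the host name hello.example.test is a placeholder; see the linked comment for the exact manifests):

# Enable the ingress addon, and keep a tunnel running in another terminal:
minikube addons enable ingress
minikube tunnel

# With the docker driver on macOS, the tunnel exposes the ingress
# controller on 127.0.0.1, so point a test host at localhost:
echo "127.0.0.1 hello.example.test" | sudo tee -a /etc/hosts

# Requests to the ingress host should now get a response:
curl http://hello.example.test/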

@zhan9san (Contributor)

I'm glad it works for you.

Also, I'm working on the documentation for this in #13806.

@michaely-cb

@vladnicula Thanks for pointing that out! This also worked for me. And indeed, @zhan9san is a hero!

I saw that message printed after I started tunneling, but I didn't pay close attention to it. I hope the IP resolution mechanism for Macs can be refined soon so this workaround won't be needed in the future.

@pfisterer commented Apr 4, 2022

Not sure if this is related, but for me minikube tunnel did not expose the nginx ingress. Looking at the output of minikube tunnel --cleanup=true --alsologtostderr -v5, I noticed that it only patches services of type LoadBalancer:

I0404 15:00:26.036668    3449 out.go:176] ✅  Tunnel successfully started
✅  Tunnel successfully started
I0404 15:00:26.054778    3449 out.go:176]

I0404 15:00:26.073360    3449 out.go:176] 📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
📌  NOTE: Please do not close this terminal as this process must stay alive for the tunnel to be accessible ...
I0404 15:00:26.094206    3449 out.go:176]

I0404 15:00:33.247660    3449 loadbalancer_patcher.go:77] kubernetes is not type LoadBalancer, skipping.
I0404 15:00:33.247696    3449 loadbalancer_patcher.go:77] my-cluster-kafka-bootstrap is not type LoadBalancer, skipping.
[...]
I0404 15:00:33.247920    3449 loadbalancer_patcher.go:77] ingress-nginx-controller is not type LoadBalancer, skipping.
I0404 15:00:33.247925    3449 loadbalancer_patcher.go:77] ingress-nginx-controller-admission is not type LoadBalancer, skipping.

I changed the service's type from NodePort to LoadBalancer (using kubectl edit service -n ingress-nginx ingress-nginx-controller) and then it worked immediately.
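The same change can be made non-interactively with kubectl patch, which may be handier if you reinstall the controller often (service name and namespace as in the kubectl edit command above):

# Switch the ingress-nginx controller Service to LoadBalancer so that
# minikube tunnel picks it up:
kubectl patch service ingress-nginx-controller -n ingress-nginx \
  -p '{"spec": {"type": "LoadBalancer"}}'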

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 3, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 2, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue.

In response to the triage bot's /close comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
