[Question] My app works properly on docker compose but not on minikube #14326

Closed
hiroaki2020 opened this issue Jun 13, 2022 · 13 comments
Labels
kind/support: Categorizes issue or PR as a support question.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@hiroaki2020

What Happened?

My Laravel app does not work properly on minikube, although it does on Docker Compose.
My question is: does this happen because of insufficient resource allocation, or could it be caused by something else?

Symptom
There are two kinds of errors, and I get one or the other every time I refresh:

  1. The web page is not shown, and the browser reports a 500 server error.
  2. The web page is shown, but without the background image and favicon. The browser reports 404 Not Found for them, even though I confirmed the files are present in the relevant container. In addition, my JavaScript code does not behave as written (again, it does on Docker Compose).

I suspect the resources I allocated are not enough to run my app properly.
I use 2 nodes with the default node resource settings.
My app consists of 3 pods, with the following resource allocations.

A. app pod
It contains 1 container, with resources set as follows:

resources:
  limits:
    memory: 256Mi
    cpu: 300m
  requests:
    memory: 128Mi
    cpu: 100m

B. web pod
It contains 1 container, with resources set as follows:

resources:
  limits:
    memory: 128Mi
    cpu: 100m
  requests:
    memory: 64Mi
    cpu: 50m

C. db pod
It contains 2 init containers with the default resource settings and 2 containers, each with resources set as follows:

resources:
  limits:
    memory: 512Mi
    cpu: 300m
  requests:
    memory: 64Mi
    cpu: 50m
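For context, here is where such a block sits in a full manifest. A minimal sketch of the app Deployment (the image and label names here are placeholders, not my real ones):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: laravel-app              # placeholder label
  template:
    metadata:
      labels:
        app: laravel-app
    spec:
      containers:
        - name: app
          image: example/laravel-app:latest   # placeholder image
          resources:
            limits:
              memory: 256Mi
              cpu: 300m
            requests:
              memory: 128Mi
              cpu: 100m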

I deliberately kept the resource allocations small, which might be what causes this problem.
Or is there any other cause you can think of? Please share your thoughts.

Attach the log file

I will provide it if necessary.

Operating System

macOS (Default)

Driver

Docker

@hiroaki2020
Author

I reduced the number of nodes and replica pods and allocated more resources to each pod, but I still have the same problem.
My app container runs Laravel with Inertia.js, which is not a good match for microservices or Kubernetes, so this might be relevant.

@spowelljr
Member

Hi @hiroaki2020, thanks for reporting your issue with minikube.

Do the pods spin up correctly? I.e., are they passing health and readiness checks? Are they constantly restarting?

@spowelljr spowelljr added the kind/support and triage/needs-information labels Jun 13, 2022
@hiroaki2020
Author

hiroaki2020 commented Jun 14, 2022

@spowelljr

Do the pods spin up correctly? I.e., are they passing health and readiness checks? Are they constantly restarting?

Yes.

I did some further research and found that the way I access my app may not be appropriate.
I use kubectl proxy --port=8080 to reach the Service for my web server pods in the cluster (source), and I open http://localhost:8080/api/v1/namespaces/my-namespace/services/my-service/proxy/ in the browser, but the minikube documentation only talks about NodePort and LoadBalancer. Does minikube actually support kubectl proxy?
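For reference, the general shape of that access path as I understand it (my namespace and service names; the local port is arbitrary):

# Expose the Kubernetes API server on localhost
kubectl proxy --port=8080

# Any Service is then reachable through the apiserver's proxy path:
#   http://localhost:8080/api/v1/namespaces/<namespace>/services/[https:]<service>[:<port_name>]/proxy/
# which in my case becomes:
#   http://localhost:8080/api/v1/namespaces/my-namespace/services/my-service/proxy/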

@hiroaki2020
Author

This (a sentence at the bottom of the page) might be relevant:

Some web apps may not work, particularly those with client side javascript that construct URLs in a way that is unaware of the proxy path prefix.
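To make that concrete for my symptom: the page references its assets with root-relative paths, so behind the proxy the browser requests the wrong URL. A hypothetical illustration (the asset path is made up):

# The HTML contains something like <img src="/images/background.png">,
# and the browser resolves that against the host root:
curl -i http://localhost:8080/images/background.png
# -> 404, because this URL bypasses the proxy path prefix; the file is
#    actually served under:
curl -i http://localhost:8080/api/v1/namespaces/my-namespace/services/my-service/proxy/images/background.png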

@spowelljr
Member

This (a sentence at the bottom of the page) might be relevant:

Some web apps may not work, particularly those with client side javascript that construct URLs in a way that is unaware of the proxy path prefix.

Ah, that sounds very relevant to what you're doing; that's probably what's causing your issue.

@hiroaki2020
Author

Ah, that sounds very relevant to what you're doing; that's probably what's causing your issue.

Assuming this is the cause, I have given up on kubectl proxy and decided to use minikube service my-service --url, following this.
The command outputs

🏃  Starting tunnel for service my-service.
❗  Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

but I never see a URL in the output like this.
I ran kubectl -n kube-system rollout restart deployment coredns following this, but no luck.
My minikube version is v1.25.2.
I use an Intel Mac running macOS Monterey 12.4.
Docker for Mac is version 4.8.2.
This is the first issue.

The second issue is more critical to me: the minikube service my-service --url command above doesn't work properly from time to time.
Requesting the same URL localhost:<port> (with the port obtained by running "ps -ef | grep ssh") in the browser does not always return the same result; I get either the complete web page or a 500 Internal Server Error.
I use a multi-node cluster, and that might be relevant.
Other info about my cluster:

  • a NodePort Service leading to 2 web server pods
  • 2 app server pods, with a Service leading to them
  • 2 db pods (StatefulSets), with Services leading to them
  • 2 nodes

One more question: is this minikube service my-service --url command the best way to access a minikube cluster from outside? My constraint is that I use minikube for a local dev environment and no public IP can be used.
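For comparison, kubectl port-forward would be another way to reach the Service without a public IP; a minimal sketch with my names (local port 8080 and service port 80 assumed):

# Forward a local port directly to the Service, no tunnel involved
kubectl -n my-namespace port-forward svc/my-service 8080:80
# then browse to http://localhost:8080/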

@spowelljr spowelljr removed the triage/needs-information label Aug 3, 2022
@hiroaki2020
Author

I am sorry: the second issue was caused not by minikube but by my session data store settings. Now I can properly access my web page in the browser.
However, the first issue still occurs.
That is, when I run minikube service my-service --url, the output is

🏃  Starting tunnel for service my-service.
❗  Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

and I cannot see a URL in the output like this.
So instead I run ps -ef | grep ssh to get the port number. Is this working as designed?
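For anyone hitting the same thing, this is roughly what that looks like; the ssh command line below is illustrative, not my exact output:

# Find the SSH tunnel that minikube opened for the service
ps -ef | grep "[s]sh"
# The forwarded local port appears in an -L flag, e.g. something like:
#   ... ssh ... -N docker@127.0.0.1 -L 55123:10.109.32.7:80 ...
# Here 55123 is the local port, so the app is at http://localhost:55123/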

My minikube version is v1.25.2.
I use an Intel Mac running macOS Monterey 12.4.
Docker for Mac is version 4.8.2.
The driver is docker.
The container runtime is containerd.

@klaases
Contributor

klaases commented Oct 10, 2022

❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

Hi @hiroaki2020, were you able to keep a terminal open in order to run this with the Docker driver on darwin / Mac?

/triage needs-information

@k8s-ci-robot k8s-ci-robot added the triage/needs-information label Oct 10, 2022
@zzj0402

zzj0402 commented Oct 28, 2022

Same issue; app keeps sending resets.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jan 26, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Feb 25, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Mar 27, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
