Nginx ingress controller is not working correctly on minikube #26
Hi @wwkicq1234, fabric-operator targets the core k8s APIs directly, so it should work on any Kubernetes environment, including minikube. You may be one of the first people to try this out, so if you find a working recipe, please share the configuration notes, either as a docs PR, an update to the sample network scripts, or just some general notes in this Issue.

We have had very good luck with KIND clusters, but there are some drawbacks to that platform. The one issue that is particularly annoying is that the KIND runtime does NOT have direct visibility into the Docker image cache on the host system. So, for instance, if you are using a local operator / cluster to develop a chaincode container, the image either needs to be uploaded to a container registry or loaded directly into the KIND control plane (see the sketch below).

When the sample network configures the k8s cluster with Nginx ingress, it uses a configuration aimed at the KIND and k3s runtimes, so it may need some adjustment for minikube. Try using the following env settings when running the sample network:

export TEST_NETWORK_CLUSTER_RUNTIME="k3s"
export TEST_NETWORK_STAGE_DOCKER_IMAGES="false"
export TEST_NETWORK_STORAGE_CLASS="local-path"

Once the Ingress issues are sorted out, a couple of other problems will also come up with minikube.
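Regarding the image cache limitation mentioned above, here is a minimal sketch of the "load it directly into the KIND control plane" option. The image and cluster names are placeholders, not values from the sample network:

```shell
# Build the chaincode image locally with Docker as usual.
docker build -t my-chaincode:dev .

# Copy the image from the host's Docker cache into the KIND node(s),
# so pods in the cluster can run it without pulling from a registry.
kind load docker-image my-chaincode:dev --name kind
```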
In general, yes, minikube can work, but it looks like there are still a couple of rough edges in the setup for the Ingress and PVC that need to get sorted out. Thanks for opening the Issue. This is an honest-to-goodness bug.
Thanks for your quick response. Your comments solved my issue. I then hit another issue: the storage class "local-path" doesn't exist. I solved it with 'export STORAGE_CLASS="standard"'.
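For anyone hitting the same thing, a quick way to check which storage classes the cluster actually provides before setting the override. On a stock minikube install the default class is "standard"; the variable name below follows the exports shared earlier in this thread:

```shell
# List the storage classes available in the cluster; a default minikube
# install typically reports "standard" as the (default) class.
kubectl get storageclass

# Point the sample network at the class reported above.
export TEST_NETWORK_STORAGE_CLASS="standard"
```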
I encountered another issue: the "network channel create" command fails. From the log, I can see the following error:
Hi @wwkicq1234. Thanks for pushing on this issue. Running Fabric on a local minikube seems like a really, really nice advancement. There are some .. "issues" that come up with KIND that make it not 100% ideal for a local dev platform. Likewise, the switch to Rancher Desktop / k3s / containerd comes with some headaches. It would / will be good for fabric-operator to "just work" on any k8s, even minikube. It looks like there is still some work to sort out on this front.

The logs above look like a problem with the Nginx ingress controller. There can be a few things going on, but first, make sure you don't have any other services listening on the loopback / 0.0.0.0:443 or 127.0.0.1:443. Also, depending on your setup, it is possible to run k8s in VMs, or behind a network bridge of some kind... The thing to test is that something like curl running on the host OS can resolve the ingress hostname and open a TCP connection to the ingress controller. If things are working correctly, you should be able to reach the controller from the host once ingress is up and running; see the sketch below for one way to check.
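A minimal sketch of that check, assuming the ingress controller is published on the host at port 443; the hostname and the expected response are placeholders rather than values from the sample network:

```shell
# First, confirm nothing else on the host is already listening on 443.
sudo lsof -iTCP:443 -sTCP:LISTEN

# Then open a TLS connection to the ingress controller from the host OS.
# Any HTTP response (even a 404 from the Nginx default backend) means the
# controller is reachable; "connection refused" or a timeout means it is not.
curl -k -v https://localhost:443/
```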
I do see that there are specific instructions for installing Nginx on minikube, so perhaps we will need to supplement the approach used in the network setup to work with that installer. The only change that should be necessary to work with Nginx is to enable an additional option on the controller. I think the current approach is close, but needs a little TLC to push it out, and it would be worthwhile to set this up as a new cluster runtime for the sample network.

If you are feeling ambitious and generous, please feel free to submit a PR back with the new runtime, if you can sort out the details! Minikube is great. It will be a big step forward for "hey, it just works" if we can get this new runtime under the fabric-operator umbrella.
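One possible shape for that supplement, sketched here as an assumption rather than the project's documented procedure: use minikube's bundled ingress addon and then turn on SSL passthrough on the controller, which Fabric ingress routes commonly rely on. The deployment and namespace names are those used by recent minikube releases and may differ on older versions:

```shell
# Enable the Nginx ingress controller that ships with minikube.
minikube addons enable ingress

# Assumption: append --enable-ssl-passthrough to the controller's args so
# TLS connections are passed through to the Fabric nodes untouched.
kubectl patch deployment ingress-nginx-controller -n ingress-nginx \
  --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--enable-ssl-passthrough"}]'
```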
In the minikube environment, I got the following error:
error: timed out waiting for the condition on pods/ingress-nginx-controller-7587b7f44c-7rfxc
The ingress-nginx-controller pod is blocked with the following error:
Warning FailedScheduling 26m (x140 over 168m) default-scheduler 0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector.
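For what it is worth, one common cause of that scheduling error, offered as an assumption about this setup rather than a confirmed diagnosis: the KIND-flavored ingress-nginx manifest pins the controller to nodes labeled ingress-ready=true, and a minikube node does not carry that label by default. A sketch of how to check and work around it (the node name is usually "minikube"):

```shell
# Show the node selector the controller deployment is asking for.
kubectl -n ingress-nginx get deployment ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.nodeSelector}'

# If it requires ingress-ready=true, label the minikube node so the
# controller pod can be scheduled.
kubectl label node minikube ingress-ready=true
```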