You can load a service by its name and then use .Containers to range over all containers in that particular service. This link gives an overview of the structs you can filter and work with in the template: https://github.com/CausticLab/go-rancher-gen#service-discovery-objects
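For example, something along these lines (just a sketch: the `services` helper comes from your question, the `.Name`, `.Containers` and `.Address` fields are taken from the README linked above, and "tomcat" and port 8080 are placeholders; I filter the services list by name here rather than guessing the exact signature of the lookup helper):

```
{{- /* Sketch only: field names are assumptions from the README,
       "tomcat" and 8080 are placeholders. */ -}}
{{- range services }}
{{- if eq .Name "tomcat" }}
upstream {{ .Name }} {
  {{- range .Containers }}
  server {{ .Address }}:8080;
  {{- end }}
}
{{- end }}
{{- end }}
```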
What I intended to do was get the services by stack without knowing the stack name.
The result should be an automatic configuration for an Apache proxy.
For example, I wanted to look for a Tomcat container in every stack and configure a separate virtual host for every stack that contains such a Tomcat container, so I would not need to know the stack names in advance.
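Roughly, this is the output I am after, one vhost per stack (only a sketch: I am assuming the `services` helper, the `.Name`/`.Containers` fields and the container `.Address`/`.Labels` fields from the README, and that the Tomcat service is literally named "tomcat"; the domain scheme and port are placeholders):

```
{{- /* Sketch of the goal: one Apache vhost per stack that runs a
       service named "tomcat". Field names are assumptions from the
       README; the domain and port 8080 are placeholders. */ -}}
{{- range services }}
{{- if and (eq .Name "tomcat") .Containers }}
{{- $first := index .Containers 0 }}
{{- $stack := index $first.Labels "io.rancher.stack.name" }}
<VirtualHost *:80>
    ServerName {{ $stack }}.example.com
    ProxyPass        / http://{{ $first.Address }}:8080/
    ProxyPassReverse / http://{{ $first.Address }}:8080/
</VirtualHost>
{{- end }}
{{- end }}
```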
This is the rancher-gen template we use to build an nginx configuration.
On line 139 you can see how we define a server-block template that we can reuse for each domain.
On line 228 is the logic that generates such a server block for each service; we use the groupByLabel method to limit the results. You can do something similar.
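The reuse pattern looks roughly like this (a trimmed-down sketch, not our actual template: `.Name`, `.Containers` and `.Address` are the README fields, the domain and port are placeholders):

```
{{- /* A reusable server-block template, instantiated once per service. */ -}}
{{- define "server-block" }}
server {
    listen 80;
    server_name {{ .Name }}.example.com;
    {{- if .Containers }}
    location / {
        proxy_pass http://{{ (index .Containers 0).Address }}:8080;
    }
    {{- end }}
}
{{- end }}

{{- range services }}
{{ template "server-block" . }}
{{- end }}
```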
Maybe instead of the services you can iterate over the Containers with a specific label and then bubble up to the service. You could set that label in your compose file or something like that. However, you will need some more labels if you want to auto-generate full configurations, I guess.
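Something in this direction, maybe (sketch only: the marker label `proxy.expose` is just an example of a label you would set yourself in the compose file, and the container fields `.Name`/`.Service`/`.Address`/`.Labels` are assumptions to double-check against the README):

```
{{- /* Walk every container, keep only the ones carrying a marker label,
       and read the stack and owning service from the container itself. */ -}}
{{- range services }}
{{- range .Containers }}
{{- if index .Labels "proxy.expose" }}
# {{ .Name }} (service {{ .Service }}, stack {{ index .Labels "io.rancher.stack.name" }})
#   -> emit a vhost / server block for {{ .Address }} here
{{- end }}
{{- end }}
{{- end }}
```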
Hi,
I need a way to group all containers by stack.
I found the label io.rancher.stack.name, which holds the stack name.
So I thought about using groupByLabel.
But how can I use this over all containers?
There is the services function, but it gives me all services, and the services do not have a stack label.
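Roughly where I am at (a sketch only, using the services function and the container .Labels/.Name fields as I understand them from the README):

```
{{- /* I can reach the stack name per container like this, but I do not
       see how to hand *all* containers to groupByLabel in one go. */ -}}
{{- range services }}
{{- range .Containers }}
{{ index .Labels "io.rancher.stack.name" }}: {{ .Name }}
{{- end }}
{{- end }}
```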
Kind regards
HTPC-Schrauber