
Load Balanced Slave Cluster #46

Open · goffinf opened this issue Aug 17, 2017 · 2 comments
goffinf commented Aug 17, 2017

Hey @maxfields2000 Max,

This is more a question around YADP, but I was wondering whether you had (a) come across it (and maybe have a successful implementation), and (b) think it's a good idea or not to assign multiple docker hosts to a single YADP Cloud under a load balancer.

I have also asked this question on the YADP issues log here: KostyaSha/yet-another-docker-plugin#183

What I want to do is provide a cluster of docker hosts assigned to a YADP Cloud that are available to spin up ephemeral slaves, to provide both scalability and resilience. I seem to recall that amongst your blog posts you suggested you were running a 'farm' of hosts for that purpose (but I could be making a leap here).

So, what I did was to set the Docker URL property in the YADP Cloud specification to point to a load balancer (in my case an AWS ELB) under which there are multiple docker host instances.

My observed behaviour:

If I have a SINGLE instance under the ELB, slave containers launch successfully and run whatever job you ask them to, harvest the results and then terminate cleanly.

If I attach a second host to the ELB and execute a Jenkins job, multiple containers launch on BOTH hosts (and continue to do so until you abort the job).

In the Jenkins log you see lots of exception messages like this (with differing container IDs):

```
Error during callback
com.github.kostyasha.yad_docker_java.com.github.dockerjava.api.exception.NotFoundException {"message":"No such container: f859ea6e2707d21bb9a2d585713fbc262629b2926f0d86453d28d76bf48fd811"}
```

When you abort the job you have a bunch of containers to clean up on both hosts as well as a failed job.
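For what it's worth, my best guess at the mechanism, sketched below with docker-java (the client library the exception above comes from; the ELB endpoint and image name are made up for illustration): each Docker API call is an independent HTTP request to the ELB, so the create can land on one host while the follow-up lifecycle calls land on another, which would produce exactly these 'No such container' callbacks.

```java
// Hypothetical sketch of the suspected failure mode, assuming the ELB
// round-robins each request. Endpoint URL and image name are placeholders.
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.command.CreateContainerResponse;
import com.github.dockerjava.api.exception.NotFoundException;
import com.github.dockerjava.core.DockerClientBuilder;

public class ElbRoutingSketch {
    public static void main(String[] args) {
        // Every call below is a separate HTTP request to the ELB,
        // so each one may be routed to a different docker host.
        DockerClient client = DockerClientBuilder
                .getInstance("tcp://my-docker-elb.example.com:2375") // placeholder
                .build();

        // Request 1: lands on host A; the container exists only there.
        CreateContainerResponse created = client
                .createContainerCmd("jenkins-slave-image") // placeholder image
                .exec();

        try {
            // Request 2: may land on host B, which has never heard of this
            // container id -> NotFoundException ("No such container: ..."),
            // matching the callback errors in the Jenkins log above.
            client.startContainerCmd(created.getId()).exec();
        } catch (NotFoundException e) {
            System.err.println("Routed to the wrong host: " + e.getMessage());
        }
    }
}
```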

So, I'm not sure whether this plugin only works with single docker hosts, or whether it can be configured against a load-balanced cluster (I'd prefer the latter).

Do you think this is a reasonable approach, or would you suggest that a YADP Cloud should only ever refer to a single docker host?

I had some further thoughts about defining my docker hosts within a docker swarm cluster (since the swarm master knows where all the containers are and should be able to route to them given a container or service id). However, reading over the YADP issues log, it appears that the plugin doesn't support swarm mode (I haven't tried it yet, but swarm mode's unit of deployment is a 'service' rather than an individual container). The sketch below tries to make that distinction concrete.
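A sketch only, assuming docker-java's 3.x swarm-mode support; the manager address, service name and image are placeholders. In swarm mode the client describes a service and the cluster schedules the task containers itself, so there is no per-container create call of the kind YADP drives:

```java
// Sketch: swarm mode's unit of deployment is a service, not a container.
// Assumes docker-java 3.x; all names below are placeholders.
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.model.ContainerSpec;
import com.github.dockerjava.api.model.ServiceSpec;
import com.github.dockerjava.api.model.TaskSpec;
import com.github.dockerjava.core.DockerClientBuilder;

public class SwarmModeSketch {
    public static void main(String[] args) {
        DockerClient client = DockerClientBuilder
                .getInstance("tcp://swarm-mode-manager.example.com:2375") // placeholder
                .build();

        // You describe a service; the cluster decides where its task
        // containers run. The client never issues a per-container create.
        String serviceId = client.createServiceCmd(new ServiceSpec()
                        .withName("jenkins-slave") // placeholder name
                        .withTaskTemplate(new TaskSpec()
                                .withContainerSpec(new ContainerSpec()
                                        .withImage("jenkins-slave-image")))) // placeholder
                .exec()
                .getId();

        // Cleanup is likewise per service, not per container.
        client.removeServiceCmd(serviceId).exec();
    }
}
```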

Any thoughts much appreciated.

Regards

Fraser.


SalahAdDin commented Aug 20, 2017

@maxfields2000 hi man.

maxfields2000 (Owner) commented
So I'm not entirely sure about an ELB-balanced cluster. The way we use this at Riot is via a Docker Swarm (these days it would be a Docker "Standalone" Swarm). The swarm endpoint load balances the cluster and responds properly to Docker API calls, so the containers do in fact get cleaned up.

The error you're getting seems to imply that the ELB endpoint is not properly handling the requests to stop/kill containers, because the containers can't be seen on whichever host the request lands on. I suspect that won't work. But if you go to AWS and build a Docker Standalone Swarm (see here: https://github.com/docker/swarm) out of a set of Docker hosts and use that endpoint, you'll be fine!
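To illustrate the difference, a sketch only (the manager address and image name are placeholders): a standalone-swarm manager speaks the ordinary Docker remote API and remembers where every container was placed, so the same client calls that break through a round-robin ELB complete cleanly against the swarm endpoint.

```java
// Sketch: the same docker-java calls as against a single host, pointed
// at a standalone-swarm manager. Endpoint and image are placeholders.
import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.api.command.CreateContainerResponse;
import com.github.dockerjava.core.DockerClientBuilder;

public class SwarmEndpointSketch {
    public static void main(String[] args) {
        // A standalone-swarm manager exposes a Docker-API-compatible
        // endpoint, so no plugin changes are needed.
        DockerClient client = DockerClientBuilder
                .getInstance("tcp://swarm-manager.example.com:3376") // placeholder
                .build();

        // The manager picks a host, creates the container there, and
        // remembers the placement ...
        CreateContainerResponse created = client
                .createContainerCmd("jenkins-slave-image") // placeholder image
                .exec();
        client.startContainerCmd(created.getId()).exec();

        // ... so later lifecycle calls are routed to the right host
        // instead of round-robined, and cleanup succeeds.
        client.stopContainerCmd(created.getId()).exec();
        client.removeContainerCmd(created.getId()).exec();
    }
}
```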
