The localhost socket connection that failed to connect to the R worker used port 11562 #11
Hmm, the worker pods should be starting R and trying to connect to the scheduler pod (the commands for this are set up by the Helm chart). So the fact that the log says "Worker launch call" seems to indicate that the master process is trying to start the workers itself (despite the use of manual = TRUE). If you try running the setup again with verbose output and let me know what you get, I might be able to make a suggestion.
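A concrete sketch of that kind of verbose run (an illustration only, assuming the chart uses the future/parallelly PSOCK machinery; "worker-pod-name" is a placeholder, not a name from this thread):

```r
# Sketch (assumption: the chart sets up a PSOCK cluster via parallelly).
# verbose = TRUE makes makeClusterPSOCK() print the exact worker launch
# call, which should show whether the master is trying to start workers.
library(future)

cl <- parallelly::makeClusterPSOCK(
  "worker-pod-name",   # placeholder for the worker pod hostname
  manual  = TRUE,      # do not launch workers; just print the command
  verbose = TRUE
)
plan(cluster, workers = cl)
```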
Thanks for the reply! Here's what I got when running with verbose output. Like you mentioned, it looks like it's trying to start a worker?
Here's the worker log:
Hmm, it does look like the worker pod is running. It should have a running R process that tries to connect to the scheduler pod on port 11562. You could try connecting to a shell in the worker pod and then checking whether an R process is running there.
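That shell-and-check step might look like the following (a sketch; the pod name is a placeholder, and the container's shell and process list may differ):

```sh
# Find the worker pod name, then open a shell in it:
kubectl get pods
kubectl exec -it <worker-pod-name> -- /bin/bash

# Inside the pod, check whether an R worker process is running
# (the [R] bracket trick keeps grep from matching itself):
ps aux | grep '[R]'
```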
You could also try manually starting the R worker process from a shell on the worker pod to see whether it gives you an error.
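For reference, a manually launched PSOCK worker typically has roughly this shape (an illustration only; the authoritative command is the "Worker launch call" printed by the scheduler, the hostname here is a placeholder, and in recent R versions the internal helper is named parallel:::.workRSOCK instead):

```sh
# Rough shape of a manual PSOCK worker launch (illustrative, not the
# exact command from this chart; port taken from the reported error):
Rscript -e 'parallel:::.slaveRSOCK()' \
  MASTER=<scheduler-hostname> PORT=11562 OUT=/dev/null TIMEOUT=2592000 XDR=TRUE
```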
Then, in your RStudio session (the scheduler process), you could try making the connection again. Given that your pods have been running for so many hours, there might be some timeout issue, so you may want to start from scratch by deleting and reinstalling the Helm release.
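One quick way to test raw TCP reachability from the worker side (a sketch, not from the thread; "scheduler-host" is a placeholder and the port comes from the reported error):

```r
# From an R session inside the worker pod, try opening a plain TCP
# connection to the scheduler ("scheduler-host" is a placeholder).
con <- tryCatch(
  socketConnection("scheduler-host", port = 11562,
                   blocking = TRUE, open = "r+b", timeout = 5),
  error = function(e) e
)
if (inherits(con, "error")) {
  message("Could not connect: ", conditionMessage(con))
} else {
  message("TCP connection succeeded")
  close(con)
}
```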
Thanks for your Helm chart and documentation. I was able to follow the instructions to install the RStudio server and workers without any problem. However, when I tried to run
plan(cluster, manual = TRUE, quiet = TRUE)
I got the following error after a 120-second timeout. I did get the correct number of workers when I tried nbrOfWorkers(). I am pretty new to RStudio, so maybe I am missing something obvious?
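For context, a sketch of what that call does (worker hostnames below are placeholders, not from this thread): with manual = TRUE, plan(cluster) does not launch the workers itself; it listens for incoming worker connections and errors out if none arrive within the connection timeout, which defaults to 120 seconds in parallelly, matching the timeout reported above.

```r
# Sketch of the setup in question (hostnames are placeholders).
# manual = TRUE means the workers must be started externally (here,
# by the worker pods); the scheduler only waits for them to connect.
library(future)

plan(cluster,
     workers = rep("worker-pod-name", 4),  # placeholder hostnames
     manual = TRUE, quiet = TRUE)
```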