Hi @amarrella 👋 My suspicion is that the K8s run pod is not using the same run storage as your local instance. When launching a run, Dagit's instance (in your case, the local instance) writes the pipeline run to its run storage, but the K8s pod may have a different run storage configured on its own instance config (so the run lookup returns `None`). To check, could you inspect the instance that the K8s job is using?
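If it helps, here's a minimal sketch of that check from inside the job pod (assuming `DAGSTER_HOME` points at the pod's `dagster.yaml`; the run id below is a placeholder, not a real value):

```python
from dagster import DagsterInstance

# Load the instance the pod actually sees ($DAGSTER_HOME/dagster.yaml).
instance = DagsterInstance.get()

# Which run storage implementation is configured here?
print(type(instance.run_storage).__name__)

# If this prints None, the pod's run storage never saw the run that
# Dagit wrote, i.e. the two instances are not sharing run storage.
print(instance.get_run_by_id("replace-with-your-run-id"))
```

Running `dagster instance info` both locally and inside the pod and diffing the output should also surface any mismatch.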
Hi, I'm trying to run my pipeline using a local Dagit and the `K8sRunLauncher` on an existing cluster, but the job coordinator seems to fail.
I have set up my (local) instance like this:
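The YAML block didn't survive the paste; roughly, it's a `K8sRunLauncher` section in `dagster.yaml` along these lines (all values below are placeholders, not my actual config):

```yaml
# dagster.yaml (local instance) -- placeholder values only
run_launcher:
  module: dagster_k8s
  class: K8sRunLauncher
  config:
    load_incluster_config: false           # launching from outside the cluster
    kubeconfig_file: ~/.kube/config
    job_namespace: dagster
    service_account_name: dagster
    job_image: "my-registry/my-pipelines:latest"
    image_pull_policy: Always
    dagster_home: /opt/dagster/dagster_home
    instance_config_map: dagster-instance  # dagster.yaml mounted into job pods
    env_config_maps:
      - dagster-pipeline-env
```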
When I run the job, it successfully creates a pod, but the pod immediately fails with an error.
When I describe the pods (e.g. `kubectl describe pod <pod-name>`), the container args reference my local Python environment, including my local Python executable path. I'm not sure whether this is related, but it looks suspicious.
I checked my configuration and, as far as I can tell, everything is set up correctly. Am I missing something?
I'm able to work around the issue by using the Dagit instance in the Kubernetes cluster, but that requires updating the pods running the gRPC server as well, and I'd like to avoid that during fast iterations.