I think a Job would be a better OpenShift resource type than a Deployment for ephemeral Spark clusters: once the application finishes, the Job terminates and stops occupying resources, and Jobs are also much easier to schedule as CronJobs.

Currently, if I create a Job, it does not create an ephemeral Spark cluster; instead it creates a shared cluster that is not deleted when the Job finishes.
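For illustration, here's a minimal sketch of what Job semantics would buy us. This is not the project's actual integration; the image name, command, and cluster URL are placeholders. The Job's pod runs the driver to completion and is then done, and wrapping the same pod template in a CronJob with a `schedule` field would cover the periodic case:

```sh
# Sketch only: image, command, and cluster name below are placeholders.
oc create -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: spark-app
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never  # Jobs require Never or OnFailure
      containers:
      - name: driver
        image: my-spark-app-image  # placeholder application image
        command: ["spark-submit",
                  "--master", "spark://my-spark-cluster:7077",
                  "/app/app.py"]
EOF
```

Once the pod exits, the Job is marked complete and releases its resources, which is exactly the lifecycle an ephemeral cluster wants.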
A simple workaround, suggested by elmiko: https://github.com/4n4nd/oc_train_pipeline/blob/master/delete_spark.sh

Just run this script after you are done with your Spark cluster (I run it after `sc.stop()`) and it will forcefully delete the cluster, so name your cluster carefully if you're naming it manually.
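In case it helps, the flow looks roughly like this. Treat the invocation as a sketch: the exact interface (and how the script picks up the cluster name) is defined in the linked delete_spark.sh, so check it and adjust before running.

```sh
# Fetch the workaround script (raw version of the linked file).
curl -sLO https://raw.githubusercontent.com/4n4nd/oc_train_pipeline/master/delete_spark.sh
chmod +x delete_spark.sh

# ... run the Spark application, then call sc.stop() in the driver ...

# Force-delete the leftover cluster resources (assumes the script
# already knows the cluster name; adjust to match yours).
./delete_spark.sh
```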