Releases: volcano-sh/volcano
v1.10.0
What's New
Support Queue Priority Scheduling Strategy
In traditional big data processing scenarios, users can directly set queue priorities to control the scheduling order of jobs. To ease the migration from Hadoop/Yarn to cloud-native platforms, Volcano supports setting priorities at the queue level, reducing migration costs for big data users while enhancing user experience and resource utilization efficiency.
Queues are a fundamental resource in Volcano, each with its own priority. By default, a queue's priority is determined by its `share` value, which is calculated by dividing the resources allocated to the queue by its total capacity. This is done automatically, with no manual configuration needed. The smaller the `share` value, the fewer resources the queue holds, making it less saturated and more likely to receive resources first. Thus, queues with smaller `share` values have higher priority, ensuring fairness in resource allocation.
In production environments, especially in big data scenarios, users often prefer to set queue priorities manually for a clearer view of the order in which queues are scheduled. Because the `share` value is dynamic and changes in real time as resources are allocated, Volcano introduces a `priority` field that lets users set queue priorities more intuitively. The higher the `priority`, the higher the queue's standing: high-priority queues receive resources first, while low-priority queues have their jobs reclaimed earlier when resources need to be recycled.
To ensure compatibility with the `share` mechanism, Volcano also considers the `share` value when calculating queue priorities. By default, if a user has not set a specific queue priority, or if priorities are equal, Volcano falls back to comparing `share` values; the queue with the smaller `share` then has higher priority. Users can flexibly choose between the two strategies, `priority` or `share`, based on their specific needs.
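As a sketch (the `priority` field name comes from this release; the queue name and value are illustrative), a queue with an explicit priority could look like:

```yaml
apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
  name: bigdata-queue
spec:
  weight: 1
  # Higher-priority queues receive resources first and have
  # their jobs reclaimed later.
  priority: 100
```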
Queue priority design doc: Queue priority
Related PRs: (#132, #3700, @TaiPark)
Enable Fine-Grained GPU Resource Sharing and Reclaim
Volcano introduced the elastic queue capacity scheduling feature in v1.9, allowing users to directly set the capacity of each resource dimension within a queue. The feature also supports elastic scheduling based on the `deserved` value, enabling more fine-grained resource sharing and reclaiming across queues.
For detailed design information on elastic queue capacity scheduling, refer to the Capacity Scheduling Design Document.
For a step-by-step guide on using the capacity plugin, see the Capacity Plugin User Guide.
In v1.10, Volcano extends this support to report different types of GPU resources within elastic queue capacities. NVIDIA's default Device Plugin does not distinguish between GPU models; it reports all resources uniformly as `nvidia.com/gpu`. This prevents AI training and inference tasks from selecting specific GPU models, such as A100 or T4, based on their particular needs. To address this, Volcano now supports reporting distinct GPU models at the Device Plugin level, working with the `capacity` plugin to enable more precise GPU resource sharing and reclaiming.
For instructions on using the Device Plugin to report various GPU models, refer to the GPU Resource Naming Guide.
Note:
In v1.10.0, the `capacity` plugin is the default for queue management. The `capacity` and `proportion` plugins are incompatible, so after upgrading to v1.10.0 you must set the `deserved` field for queues to ensure proper functionality. For detailed instructions, refer to the Capacity Plugin User Guide.
The `capacity` plugin allocates cluster resources based on the `deserved` value set by the user, while the `proportion` plugin dynamically allocates resources according to queue weight. Users can select either the `capacity` or `proportion` plugin for queue management based on their specific needs. For more details on the proportion plugin, visit: Proportion Plugin.
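A minimal sketch of a queue with per-dimension `deserved` resources (the queue name and amounts are illustrative):

```yaml
apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
  name: default
spec:
  reclaimable: true
  # Per-dimension deserved amounts: idle resources beyond these can be
  # lent to other queues and reclaimed when this queue needs them back.
  deserved:
    cpu: "64"
    memory: 128Gi
    nvidia.com/gpu: "8"
```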
Related PR: (#68, @MondayCha)
Introduce Pod Scheduling Readiness Support
Once a Pod is created, it is considered ready for scheduling, and kube-scheduler does its best to find a suitable node for every pending Pod. In reality, however, some Pods may lack necessary resources for a long time. Such Pods interfere with the decision-making and operation of the scheduler (and downstream components such as Cluster Autoscaler) in unnecessary ways, causing problems such as resource waste. Pod Scheduling Readiness is a kube-scheduler feature that became stable (GA) in Kubernetes v1.30; it controls when a Pod becomes eligible for scheduling via the Pod's `schedulingGates` field.
Volcano supports unified scheduling of online and offline jobs. To better support microservice scheduling, Volcano v1.10 also supports Pod Scheduling Readiness, meeting users' scheduling needs across multiple scenarios.
For the documentation of Pod Scheduling Readiness features, please refer to: Pod Scheduling Readiness | Kubernetes
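For illustration, a gated Pod stays Pending until every entry in `schedulingGates` is removed by a controller (the gate name below is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod
spec:
  schedulerName: volcano
  # The Pod is not considered for scheduling until this gate is removed.
  schedulingGates:
  - name: example.com/wait-for-quota
  containers:
  - name: main
    image: nginx
```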
Related PR: (#3581, @ykcai-daniel)
Add Sidecar Container Scheduling Capabilities
A Sidecar container is an auxiliary container designed to support the main business container by handling tasks such as logging, monitoring, and network initialization.
Prior to Kubernetes v1.28, the concept of Sidecar containers existed only informally, with no dedicated API to distinguish them from business containers. Both types of containers were treated equally, which meant that Sidecar containers could be started after the business container and might end before it. Ideally, Sidecar containers should start before and finish after the business container to ensure complete collection of logs and monitoring data.
Kubernetes v1.28 introduces formal support for Sidecar containers at the API level, implementing unified lifecycle management for init containers, Sidecar containers, and business containers. This update also adjusts how resource requests and limits are calculated for Pods, and the feature will enter Beta status in v1.29.
The development of this feature involved extensive discussions, mainly focusing on maintaining compatibility with existing APIs and minimizing disruptive changes. Rather than introducing a new container type, Kubernetes reuses the init container type and designates Sidecar containers by setting the init container’s restartPolicy to Always. This approach addresses both API compatibility and lifecycle management issues effectively.
With this update, the scheduling of Pods now considers the Sidecar container’s resource requests as part of the business container’s total requests. Consequently, the Volcano scheduler has been updated to support this new calculation method, allowing users to schedule Sidecar containers with Volcano.
For more information on Sidecar containers, visit Sidecar Containers | Kubernetes.
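A sidecar is declared as an init container with `restartPolicy: Always` (Kubernetes v1.28+); the image names here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  schedulerName: volcano
  initContainers:
  - name: log-collector
    image: fluent/fluent-bit
    # restartPolicy: Always marks this init container as a sidecar:
    # it starts before the app container and keeps running alongside it.
    restartPolicy: Always
  containers:
  - name: app
    image: registry.example.com/my-app
```

The scheduler counts the sidecar's requests as part of the Pod's total, which is the calculation Volcano now supports.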
Related PR: (#3706, @Monokaix, @7h3-3mp7y-m4n)
Enhance Vcctl Command Line Tool
vcctl is a command line tool for operating Volcano's built-in CRD resources. It can be used to view, delete, pause, and resume vcjob resources, and to view, delete, open, close, and update queue resources. Volcano enhances vcctl in this release with the following features:
- Support for creating, deleting, viewing, and describing `jobflow` and `jobtemplate` resources
- Support for querying vcjobs in a specified queue
- Support for querying Pods filtered by queue and vcjob
For detailed guidance on vcctl, please refer to: vcctl Command Line Enhancement.
Related PRs: (#3584, #3543, #3530, #3524, #3508, @googs1025)
Ensure Compatibility with Kubernetes v1.30
Volcano closely follows the pace of Kubernetes community releases and supports every major version of Kubernetes. The latest supported version is v1.30, against which Volcano runs complete UT and E2E suites to ensure functionality and reliability.
If you want to participate in the development of Volcano adapting to the new version of Kubernetes, please refer to: adapt-k8s-todo for community contributions.
Related PR: (#3556, @guoqinwill, @wangyysde)
Strengthen Volcano Security Measures
Volcano has always attached great importance to the security of the open source software supply chain. It follows the specifications defined by OpenSSF in areas such as license compliance, security vulnerability disclosure and repair, repository branch protection, and CI checks. Volcano recently added a new GitHub Actions workflow that runs OpenSSF security checks when code is merged and updates the software security score in real time, continuously improving software security.
At the same time, Volcano has reduced the RBAC permissions of each component, retaining only the neces...
v1.9.0
What's New
Support elastic queue capacity scheduling
Volcano uses the `proportion` plugin for queue management by default. Users set the `guarantee`, `capability`, and other fields of a queue to configure its reserved resources and capacity limit, and set the queue's `weight` to share resources within the cluster: cluster resources are divided among queues in proportion to their weights. This management method has the following problems:
- The capacity of the resources divided by the queue is reflected by the weight, which is not intuitive enough.
- All resources in the queue are divided using the same ratio, and the capacity cannot be set separately for each dimension of the queue.
Based on the above considerations, Volcano implements a new elastic queue capacity management capability that supports:
- Allowing users to directly set the capacity of each resource dimension for a queue instead of setting a weight value.
- Elastic capacity scheduling based on `deserved` resources, so that a queue's resources can be shared and reclaimed.
For example, in AI large model training scenarios, a queue can set different resource capacities for different GPU models, such as A100 and V100, respectively. When cluster resources are idle, the queue can reuse the resources of other idle queues; when needed, the amount of resources configured for the queue, that is, its `deserved` amount, is reclaimed, realizing elastic capacity scheduling.
To use this feature, set the `deserved` field of the queue with the amount of deserved resources in each dimension, and in the scheduler configuration turn on the `capacity` plugin and turn off the `proportion` plugin.
Please refer to Capacity Scheduling Design for more detail.
Capacity scheduling example: How to use capacity plugin.
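In the scheduler configuration this amounts to swapping `proportion` for `capacity`; a sketch based on the default volcano-scheduler.conf layout (tier contents may differ in your deployment):

```yaml
actions: "enqueue, allocate, backfill"
tiers:
- plugins:
  - name: priority
  - name: gang
  - name: conformance
- plugins:
  - name: drf
  - name: predicates
  # capacity replaces the proportion plugin; the two are
  # incompatible and must not be enabled at the same time.
  - name: capacity
  - name: nodeorder
  - name: binpack
```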
Related PR: (#3277, #121, #3283, @Monokaix)
Support affinity scheduling between queues and nodes
Queues are usually associated with departments within a company, and different departments usually need different heterogeneous resource types. For example, a large model training team may need NVIDIA Tesla GPUs, while a recommendation team needs AMD GPUs. When users submit jobs to a queue, the jobs need to be automatically scheduled to nodes of the corresponding resource type according to the queue's attributes.
Volcano implements affinity scheduling between queues and nodes. Users only need to set the node labels that require affinity in the `affinity` field of the queue; Volcano then automatically schedules jobs submitted to that queue onto the associated nodes. Users do not need to set affinity for each job separately: affinity is set once on the queue, and jobs submitted to it are scheduled to the corresponding nodes accordingly.
This feature supports hard affinity, soft affinity, and anti-affinity scheduling at the same time. To use it, set a label with the key `volcano.sh/nodegroup-name` on the node, and then set the `affinity` field of the queue to specify the hard and soft affinity label values.
The scheduling plugin for this feature is called nodegroup, for a complete example of its use see: How to use nodegroup plugin.
For detailed design documentation, see The nodegroup design.
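As a sketch (the affinity field layout follows the nodegroup design doc; the node and group names are illustrative), label the nodes and reference the label values from the queue:

```yaml
# First: kubectl label node <node> volcano.sh/nodegroup-name=gpu-a100
apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
  name: training
spec:
  weight: 1
  affinity:
    nodeGroupAffinity:
      # Hard affinity: jobs in this queue may only run on these groups.
      requiredDuringSchedulingIgnoredDuringExecution:
      - gpu-a100
      # Soft affinity: preferred node groups.
      preferredDuringSchedulingIgnoredDuringExecution:
      - gpu-v100
```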
Related PR: (#3132, @qiankunli, @wuyueandrew)
GPU sharing feature supports node scoring scheduling
GPU Sharing is a GPU sharing and isolation solution introduced in Volcano v1.8, which provides GPU sharing and device memory control capabilities to enhance the GPU resource utilization in AI training and inference scenarios. v1.9 adds a new scoring strategy for GPU nodes on top of this feature, so that the optimal node can be selected during job assignment to further enhance resource utilization. Users can set different scoring strategies. Currently, the following two strategies are supported:
- Binpack: provides a binpack algorithm at GPU card granularity, preferring to fill up GPU cards that already have resources allocated in order to avoid resource fragmentation and waste.
- Spread: prioritizes idle GPU cards over shared cards that already have resources allocated.
For detailed usage documentation, please refer to: How to use gpu sharing.
Related PR: (#3471, @archlitchi)
Volcano support Kubernetes v1.29
Volcano follows the Kubernetes community release tempo and supports every base version of Kubernetes. The latest supported version is v1.29, against which full UT and E2E suites were run to ensure functionality and reliability. If you would like to participate in adapting Volcano to new versions of Kubernetes, please refer to #3459 to make community contributions.
Related PR: (#3295, @guoqinwill)
Enhance scheduler metrics
Volcano uses client-go to talk to Kubernetes. Although the client can set a QPS limit to avoid its requests being throttled, it is difficult to observe how much QPS the client actually uses. To observe request frequency in real time, Volcano adds client-go metrics, letting users see the number of GET, POST, and other requests per second, determine the QPS actually consumed, and decide whether the client's QPS setting needs adjustment. The client-go metrics also include client certificate rotation statistics, per-request response size statistics, and more.
Users can use curl http://$volcano_scheduler_pod_ip:8080/metrics to get all the detailed metrics of volcano scheduler.
Related PR: (#3274, @Monokaix)
Add license compliance check
To enhance the Volcano community's open source license compliance governance and avoid the potential risks of introducing copyleft ("infectious") licenses, the community has introduced a license compliance checking tool. An infectious license requires that derivative works produced by modifying, using, or copying the licensed software also be open sourced under the same license. If a third-party library introduced in a developer's PR carries an infectious license such as GPL or LGPL, CI access control will block it; the developer then needs to replace the library with one under a permissive license such as MIT, Apache 2.0, or BSD to pass the compliance check.
Related PR: (#3308, @Monokaix)
Improve scheduling stability
Volcano v1.9.0 has done more optimization in preemption, retry for scheduling failure, avoiding memory leaks, security enhancement, etc. The details include:
- Fix the problem of pods not being able to be scheduled due to frequent expansion and contraction of deployment in extreme cases, see PR for details: (#3376, @guoqinwill)
- Fix Pod preemption: see PR for details: (#3458, @LivingCcj)
- Optimize Pod scheduling failure retry mechanism: see PR for details: (#3435,@bibibox)
- Metrics optimization: (#3463, @Monokaix)
- Security enhancements: (#3449, @lekaf974)
Changes
- cherry-pick bugfixs (#3464 @Monokaix)
- fix nil pointer panic when evict (#3443 @bibibox)
- fix errTask channel memory leak (#3434 @bibibox)
- register nodegroup plugin to factory (#3402 @wuyueandrew)
- fix panic when the futureIdle resources are calculated to be negative (#3393 @Lily922)
- fix jobflow CRD metadata.annotations: Too long error (#3356 @guoqinwill)
- fix PodGroup being incorrectly deleted due to frequent creation and deletion of pods (#3376 @guoqinwill)
- fix rollback unthoroughly when allocate error (#3360 @bibibox)
- fix panic when the gpu is faulty (#3355 @guoqinwill)
- Support preempting BestEffort pods when the pods number of nodes reaches the upper limit (#3338 @Lily922)
- change private function 'unmarshalSchedulerConf' to public function 'UnmarshalSchedulerConf' (#3333 @Lily922)
- add pdb support feature gate (#3328 @bibibox)
- add mock cache for ut and performance test (#3269 @lowang-bh)
- support dump scheduler cache snapshot to json file (#3162 @lowang-bh)
- Volcano adapts to the k8s v1.29 (#3295 @guoqinwill)
- add lowang-bh as reviewer ([#32...
v1.8.2
Changes since v1.8.1
- fix wrong pods field format output of queue status (#3287 @Monokaix)
- add ignored csi provisioner when compute csi resources (#3286 @Monokaix)
- fix k8s.io/dynamic-resource-allocation go mod not found err (#3272 @Monokaix)
- fix: json marsh error for unsupport type: func() (#3282 @lowang-bh)
- fix job CRD metadata.annotations: Too long error (#3267 @Monokaix)
- fix queue update validation err when status.allocated empty ( #3266 @Monokaix)
- fix grafana dashboard format err (#3265 @Monokaix)
- update parameter BestEffort of taskInfo after changing parameter InitResreq (#3232 @Lily922)
- fix: allocated field in queue status is calcutated error (#3221 @shusley244)
- Avoid repeatedly creating links to obtain node metrics (#3229 @wangyang0616)
- skip 'pods' resource when checking if the Resource is empty (#3224 @Lily922)
- queue realcapability change to min dimension of queue capability and … (#3219 @Monokaix)
- support preemption when the number of pods of a node reaches the upper limit (#3202 @Lily922)
- Delete duplicate logs generated by the predicate_helper method (#3214 @guoqinwill)
- support preempting task with bound status (#3209 @Lily922)
- support preemption when the number of attachment volumes of a node reaches the upper limit (#3212 @Lily922)
- fix: task scheduling latancy metrics is not accurate (#3128 @lowang-bh)
- backfill add score process (#3164 @lowang-bh)
- Obtains the actual load data of a node from the custom metrics API (#3181 @wangyang0616)
- Update the default value of parameter worker-threads-for-podgroup to 5 (#3180 @Lily922)
- update volcano.sh/apis version (#3166 @Lily922)
v1.8.1
Changes since v1.8.0
- fix: the pod anti-affinity constraint fails (#3140 @wangyang0616)
- add podGroup status to session cache, fix the bug of repeatedly sending pordGroup update request when there is no condations field. (#3125 @Lily922)
- add reSync task callback (#3119 @Monokaix)
- successfully scheduled events will not be reported repeatedly for podGroup resource (#3117 @Lily922)
- add reSync task callback (#3114 @Monokaix)
- volcano adapt k8s v1.27 (#3101 @Mufengzhe)
- add featuregates for volcano capabilities (#3093 @Monokaix)
- msg information optimization; preemption logic optimization (#3082 @wangyang0616)
- fix nodelock issue when using gang-scheduling (#3060 @wangyang0616)
- pods are preferentially scheduled to machines that meet the current session resources (#3035 @wangyang0616)
- optimize the jobflow architecture design diagram (#3025 @wangyang0616)
- use one command of helm install to do smooth upgrade (#3017 @lowang-bh)
- remove node out of sync state (#3006 @Monokaix)
- fix: the task pipeline status is incompatible with cluster autoscaler (#3002 @wangyang0616)
- when Volcano is uninstalled, two resources will remain (#2992 @gj199575)
What's Changed
- [cherry-pick for release-1.8] msg information optimization; preemption logic optimization by @wangyang0616 in #3082
- [cherry-pick for release-1.8]Add featuregates for volcano capabilities by @Monokaix in #3093
- [cherry-pick for release 1.8]volcano adapt k8s v1.27 by @Mufengzhe in #3101
- [cherry-pick for release-1.8]successfully scheduled events will not be reported repeatedly for podGroup resource by @Lily922 in #3117
- [cherry-pick for release-1.8]Add reSync task callback by @Monokaix in #3119
- [cherry-pick for release-1.8]Add podGroup status to session cache, fix the bug of repeatedly sending pordGroup update request when there is no condations field. by @Lily922 in #3125
- Update image version for release v1.8.1 by @Mufengzhe in #3136
- [cherry-pick for release-1.8]fix: the pod anti-affinity constraint fails by @wangyang0616 in #3140
- [cherry-pick for release-1.8]:feat:add printing of MemStats in dumpall by @xiao-jay in #3098
Full Changelog: v1.8.0...v1.8.1
v1.8.0
What's New
Add JobFlow to support lightweight workflow orchestration
The workflow orchestration engine is widely used in high-performance computing, AI biomedicine, image processing, beauty, game AGI, scientific computing and other scenarios, helping users simplify the management of multiple parallel tasks and dependencies, and greatly improving the overall computing efficiency.
JobFlow is a lightweight task flow orchestration engine that focuses on Volcano job orchestration. It provides Volcano with job probes, job completion dependencies, job failure rate tolerance, and other diverse job dependency types, and supports complex process control primitives. The specific capabilities are as follows:
- Support large-scale job management and complex task flow orchestration.
- Support real-time query of the running status and task progress of all associated jobs.
- Support automatic job operation and scheduled starts to reduce labor costs.
- Various action policies can be set for different tasks; corresponding actions are triggered when a task meets certain conditions, such as timeout retry or node failure drift.
Refer to the links for more details. (JobFlow doc, @hwdef, @lowang-bh, @zhoumingcheng)
Support vGPU scheduling and isolation
Since the emergence of ChatGPT, research and development of large AI models has accelerated, and different types of large models have launched one after another. In production environments, users face pain points such as low resource utilization and inflexible GPU resource allocation, forcing them to purchase large amounts of redundant heterogeneous compute to meet business needs, while heterogeneous compute is itself expensive. This places a heavy burden on enterprises.
Starting from version 1.8, Volcano provides an abstract general framework for sharing devices (GPU, NPU, FPGA...), developers can customize multiple types of shared devices based on this framework. Currently Volcano has supported GPU device multiplexing, resource isolation based on this framework, details are as follows:
- GPU sharing: Each task can apply to use part of the resources of a GPU card, and the GPU card can be shared among multiple tasks.
- Device memory control: GPU can be allocated according to device memory (for example: 3000M) or allocated in proportion (for example: 50%) to realize GPU virtualization resource isolation capability.
Refer to the links for more details.
- How to use vGPU function (@archlitchi)
- How to add a new heterogeneous computing power sharing strategy (@archlitchi)
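A sketch of a Pod requesting a shared slice of a GPU; the `volcano.sh/vgpu-*` resource names are assumptions based on the vGPU docs and should be checked against your deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vgpu-demo
spec:
  schedulerName: volcano
  containers:
  - name: cuda
    image: nvidia/cuda:11.8.0-base-ubuntu22.04
    resources:
      limits:
        # Share one physical GPU card...
        volcano.sh/vgpu-number: 1
        # ...but cap this container at roughly 3000 MiB of device memory.
        volcano.sh/vgpu-memory: 3000
```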
Support the preemption capability for GPU and user-defined resources
Volcano supports preemption of CPU, memory, and other basic resources; GPU resources and user-managed resources such as NPUs and network resources were not yet supported.
In version 1.8, the predicate framework is refactored to return more detailed results, such as Unschedulable and UnschedulableAndUnresolvable, for different scenarios.
GPU preemption has been released on top of the optimized framework, and scheduling plugins developed on Volcano can be adapted and upgraded for specific business scenarios.
Refer to the link for more details. (#2916, @wangyang0616)
Support ElasticSearch monitoring systems in node load-aware scheduling and rescheduling
The status of a Kubernetes cluster changes in real time as tasks are created and terminated. In scenarios such as adding or deleting nodes, changing Pod and Node affinity, or dynamic changes in job lifecycles, problems such as unbalanced resource utilization and node performance bottlenecks can occur. Load-aware scheduling and rescheduling help users solve these problems.
Prior to Volcano v1.8, load-aware scheduling and rescheduling supported only Prometheus. Starting from v1.8, Volcano optimizes the monitoring metrics acquisition framework and adds support for the ElasticSearch monitoring system.
Refer to the links for more details.
Optimize Volcano's ability to schedule microservices
Add Kubernetes default scheduler plugin enable and disable switch
Volcano is a unified, integrated scheduling system that supports not only computing jobs such as AI and big data but also microservice workloads. It is compatible with the Kubernetes default scheduler's plugins such as PodTopologySpread, VolumeZone, VolumeLimits, NodeAffinity, and PodAffinity, and these default scheduling capabilities are enabled in Volcano by default.
Since Volcano 1.8, the Kubernetes default scheduling plugins can be freely turned on and off through the configuration file; all of them are on by default. To turn off some plugins, such as PodTopologySpread and VolumeZone, set their corresponding values to false in the predicates plugin arguments.
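A hedged sketch of what this looks like in the scheduler config; the exact argument key names are assumptions following the predicates plugin's per-predicate enable pattern and should be checked against its documentation:

```yaml
actions: "enqueue, allocate, backfill"
tiers:
- plugins:
  - name: predicates
    arguments:
      # Assumed key names; set false to disable the corresponding
      # Kubernetes default scheduler plugin.
      predicate.VolumeZoneEnable: false
      predicate.PodTopologySpreadEnable: false
```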
Refer to the links for more details. (#2748, @jiangkaihua)
Enhance scheduler to keep compatibility with ClusterAutoscaler
In the Kubernetes platform, Volcano is not only used as a scheduler for batch computing services, but also used as a scheduler for general services. Node horizontal scaling is one of the core functions of Kubernetes, which plays an important role in coping with the surge of user traffic and saving operating costs. Volcano optimizes job scheduling and other related logic, and enhances the compatibility and interaction with ClusterAutoscaler, mainly in the following two aspects:
- A Pod that enters the pipelined state during the scheduling phase triggers scale-up in time.
- Candidate nodes are scored in gradients to reduce the impact of terminating pods on scheduling load and to prevent pods from entering invalid pipelined states that would trigger cluster scale-up by mistake.
Refer to the links for more details. (#2782, #3000, @wangyang0616)
Provide tolerance for exception of device plugin
When a device plugin crashes or fails to report resources for some reason, so that the node's total resource amount becomes less than its allocated amount, Volcano considers the node's data inconsistent, marks the node OutOfSync, isolates it, and stops scheduling any new workload to it. This isolation mechanism has side effects on the cluster; for example, the device plugin itself has no chance to be scheduled back to the OutOfSync node. In Volcano v1.8, the mechanism is enhanced to tolerate device plugin exceptions: non-GPU workloads, including the device plugin, are still allowed to be scheduled to OutOfSync nodes.
Refer to the link for more details. (#2999, @Monokaix)
Add helm charts for Volcano
As Volcano is used in production and cloud environments by more and more users, simple and standard installation is crucial. Since v1.8, Volcano has optimized the publishing and archiving of its charts packages, standardized the installation process, and migrated historical versions v1.6 and v1.7 to the new Helm repository.
Refer to the link for more details. (Volcano helm-charts, @wangyang0616)
Other Notable Changes
- rework device sharing in volcano(#2643, @archlitchi)
- style(resource_info): replace 0, -1 with Zero,Infinity(#2650, @kingeasternsun)
- perf(preempt): remove used copy(#2652, @kingeasternsun)
- Add podGroup completed phase(#2667, @waiterQ)
- delete redundant import alias(#2675, @shoothzj)
- delete redundant type convetion(#2627, @shoothzj)
- Extract MetricsClient and NodeMetrics to support other metrics platform(#2678, @shoothzj)
- upgrade klog package version to latest (#2682, @waiterQ)
- Update how_to_use_gpu_sharing.md(#2686, @z2Zhang)
- Rename AddPrePredicateFn annotation(#2689, @zbbkeepgoing)
- Remove duplicate import in session.go(#2690, @zbbkeepgoing)
- Optimize e2e runtime: reduce pytorch-plugin image download time(#2691, @wangyang0616)
- Fix typo in tdm-plugin.md(#2692, @shoothzj)
- volcano metrics source support elasticsearch (#2694, @shoothzj)
- Skip stmt when tasks is empty (#2696, @zbbkeepgoing)
- Add rescheduling related location logs ([#2698](https://github.com/volcano-...
v1.7.0
What's New
Enhanced Plugin for PyTorch Jobs
As one of the most popular AI frameworks, PyTorch has been widely used in deep learning fields such as computer vision and natural language processing. More and more users turn to Kubernetes to run PyTorch in containers for higher resource utilization and parallel processing efficiency.
Volcano 1.7 enhanced the plugin for PyTorch Jobs, freeing you from the manual configuration of container ports, MASTER_ADDR, MASTER_PORT, WORLD_SIZE, and RANK environment variables.
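A sketch of a Volcano Job using the pytorch plugin; the plugin flags and images are illustrative, so check the plugin docs for the exact arguments:

```yaml
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: pytorch-job
spec:
  minAvailable: 2
  schedulerName: volcano
  plugins:
    # The plugin injects MASTER_ADDR, MASTER_PORT, WORLD_SIZE, and RANK,
    # and opens the container port automatically.
    pytorch: ["--master=master", "--worker=worker", "--port=23456"]
  tasks:
  - name: master
    replicas: 1
    template:
      spec:
        containers:
        - name: pytorch
          image: pytorch/pytorch
        restartPolicy: OnFailure
  - name: worker
    replicas: 1
    template:
      spec:
        containers:
        - name: pytorch
          image: pytorch/pytorch
        restartPolicy: OnFailure
```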
Other enhanced plugins include those for TensorFlow, MPI, and PyTorch Jobs. They are designed to help you run computing jobs on desired training frameworks with ease.
Volcano also provides an extended development framework for you to tailor Job plugins to your needs.
Refer to the links for more details. (#2313, @ccchenjiahuan)
Ray on Volcano
Ray is a unified framework for extending AI and Python applications. It can run on any machine, cluster, cloud, and Kubernetes cluster. Its community and ecosystem are growing steadily.
As machine learning workloads host computing jobs at a higher density than ever before, single-node environments fail to provide enough resources for training tasks. This is where Ray comes in: it seamlessly coordinates the resources of an entire cluster, instead of a single node, to run the same set of code. Ray is designed for common scenarios and any type of workload.
For users running multiple types of Jobs, Volcano partners with Ray to provide high-performance batch scheduling. Ray on Volcano has been released in KubeRay 0.4.
Refer to the links for more details. (#2601(#755) @tgaddair)
Enhance Scheduling for Kubernetes long-running services
This enhancement makes Volcano fully compatible with the Kubernetes default scheduler for long-running services. With this enhancement, users can use Volcano to uniformly schedule long-running services and batch workloads in a single cluster.
Refer to the links for more details:
- support multi scheduler name for scheduler and webhook(#2393, @jinzhejz)
- Add nodeVolumeLimits plugin (#2458, @jiangkaihua)
- Volcano support volumeZone plugin (#2480, @jiangkaihua)
- Add podTopologySpread plugin (#2487, @Monokaix)
- Add selector spread plugin (#2500, @elinx)
Support Kubernetes v1.25
This feature is designed to make Volcano compatible with Kubernetes 1.25.
Refer to the links for more details. (#2533, @wangyang0616)
Support multi-arch images for Volcano
This feature is designed to cross-compile volcano images of different architectures. For example, compile an image for the ARM64 architecture on an AMD64 machine.
Refer to the links for more details.(#2435, @ccchenjiahuan)
Optimize Queue Status Information
This feature is designed to enrich queue status information. With it, users can view the resource allocation of queues in real time, making it easier for administrators to plan resources dynamically.
Refer to the links for more details. (#2592, @jiangkaihua)
Other Notable Changes
- change enqueue to optional action(#2309, @wpeng102)
- Add documentation on ttlSecondsAfterFinished(#2314, @jsolbrig)
- remove redundant parentheses(#2316, @lucming)
- update go.mod to add queue.spec.Affinity(#2319, @qiankunli)
- Support JobReady for extender plugin(#2334, @xiaoxubeii)
- add jobflow design docs(#2339, @zhoumingcheng)
- deploy webhook by yaml(#2346, @hwdef)
- add details for nodegroup doc(#2347, @qiankunli)
- change e2e dependencies of makefile(#2350, @lucming)
- update go to 1.18(#2353, @hwdef)
- clean up the code(#2360, @lucming)
- add csiNode cache for plugin(#2371, @wpeng102)
- add rest config into ssn(#2378, @wpeng102)
- Update field comment(#2386, @zhoumingcheng)
- use patch to replace update pod operator(#2392, @wpeng102)
- get csinodes from ssn(#2399, @wpeng102)
- Consider initContainer GPUs quota in calculating(#2423, @kerthcet)
- Some cleanups in job_info.go(#2434, @kerthcet)
- Add initContainer GPU number when calculating GPUs(#2440, @kerthcet)
- Optimize the way to build images in makefile(#2445, @hwdef)
- add a flag to control whether inherit owner annotations when podgroup…(#2461, @elinx)
- Update CA insert method in webhooks(#2463, @jiangkaihua)
- chore: remove duplicate word in comments(#2470, @Abirdcfly)
- add plugin registration log(#2477, @Monokaix)
- Modify format verification by gofmt(#2499, @jiangkaihua)
- scheduler support ephemeral-storage resources(#2505, @WulixuanS)
- delete task qos limit in webhook(#2513, @waiterQ)
- enable https healthz listen(#2523, @waiterQ)
- Use RWMutex in framework(#2525, @kerthcet)
- Realias scheduling api version name in package imports(#2526, @kerthcet)
- Bump ginkgo version to v2.3.0(#2532, @kerthcet)
- upgrade golangci-lint to v1.50.0(#2537, @waiterQ)
- move prefilter out of predicates to improve performance(#2580, @elinx)
- Move spark e2e integration from self-hosted to github-hosted(#2590, @Yikun)
- Add node image information to the cache of the scheduler(#2593, @wangyang0616)
- By default, the preemption function of gang and drf is turned off(#2613, @wangyang0616)
- The referenced Volcano API version is updated to 1.7(#2618, @wangyang0616)
- update image to v1.7.0-beta.0(#2628, @william-wang)
- update image to v1.7.0(#2636, @wangyang0616)
Bug Fixes
- fix: proportion metrics accuracy(#2297, @LY-today)
- fix scheduler cache waitforcachesync(#2307, @xiaoanyunfei)
- To record the start and end time of job scheduling(#2318, @dontan001)
- fix convertQuanToPercent func(#2325, @autumn0207)
- fix defaultMetricsInternal variable(#2326, @autumn0207)
- filter the rescheduling strategies which contain victim functions(#2342, @Thor-wl)
- fix bug in task dependsOn(#2351, @hwdef)
- fix ci error about mpi plugin struct naming is not standardized(#2354, @hwdef)
- try to get old pg when new pg does not exist(#2400, @Akiqqqqqqq)
- fix scheduler panic when webhook is not ready(#2410, @hwdef)
- bugfix: panic if queue already exists(#2413, @elinx)
- fix nil pointer in jobCache.update(#2420, @Akiqqqqqqq)
- fix README.md clearly(#2427, @waiterQ)
- Fix calculating available gpu num error(#2441, @kerthcet)
- fix performance downgrade issue(#2443, @wpeng102)
- docs: fix error in how to confi...
v1.6.0
What's New
Support Dynamic Scheduling Based on Real Node Load
This feature aims to schedule pods based on real node load instead of requested resources, which optimizes node resource utilization. Currently, pods are scheduled based on requested resources and node allocatable resources rather than actual node usage. This leads to unbalanced resource usage across compute nodes: a pod may be scheduled to a node with higher real usage but a lower allocation rate, which is not what users expect. Users expect the usage of each node to be balanced. More details can be found at https://github.com/volcano-sh/volcano/blob/master/docs/design/usage-based-scheduling.md. (#2023, #2129 @william-wang )
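A minimal scheduler configuration sketch for load-aware scheduling is shown below. The `usage` plugin name comes from the design doc above; the threshold argument keys and values are illustrative assumptions, not a confirmed schema:

```yaml
# volcano-scheduler.conf sketch: enable load-aware scheduling via the
# usage plugin (plugin name per the usage-based-scheduling design doc;
# the "thresholds" argument keys below are assumptions for illustration).
actions: "enqueue, allocate, backfill"
tiers:
- plugins:
  - name: priority
  - name: gang
- plugins:
  - name: usage        # filter/score nodes by real utilization from metrics
    arguments:
      thresholds:
        cpu: 80        # assumed: avoid nodes above 80% real CPU usage
        mem: 80        # assumed: avoid nodes above 80% real memory usage
```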
Support Rescheduling Based on Real Node Load
This feature enables users to regularly rebalance node utilization based on real node resource usage, which is well suited to long-running workloads such as Deployments. All rescheduling policies and the check interval can be configured for custom scenarios. More details can be found at https://github.com/volcano-sh/volcano/blob/master/docs/design/rescheduling.md. (#2174, #2184 @Thor-wl )
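A configuration sketch for rescheduling follows. It pairs the new `shuffle` action with the `rescheduling` plugin as described in the design doc; the `interval` and `strategies` argument names are assumptions drawn from that doc and should be verified against your Volcano version:

```yaml
# Sketch of rescheduling configuration (argument and strategy names
# assumed from the rescheduling design doc; verify before use).
actions: "enqueue, allocate, backfill, shuffle"   # shuffle evicts pods selected for rescheduling
tiers:
- plugins:
  - name: rescheduling
    arguments:
      interval: 5m                  # how often to re-evaluate node load
      strategies:
      - name: lowNodeUtilization    # move load away from hot nodes toward underutilized ones
```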
Support Elastic Job Scheduling
This feature allows Volcano to schedule a Volcano Job based on its [min, max] configuration, which improves resource utilization and shortens the execution time of training jobs. More details can be found at https://github.com/volcano-sh/volcano/blob/master/docs/design/elastic-scheduler.md. (#2105, @qiankunli )
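In a Volcano Job, the [min, max] pair maps naturally onto `minAvailable` (min) and the task `replicas` count (max), as in this sketch (image name is a placeholder):

```yaml
# Sketch: an elastic Volcano Job. minAvailable is the "min"; the task
# replica count is the "max". The scheduler guarantees min pods and can
# scale toward max when idle resources exist.
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: elastic-training
spec:
  minAvailable: 2        # min: the job can start and keep running with 2 pods
  tasks:
  - name: worker
    replicas: 4          # max: up to 4 pods when the cluster has spare capacity
    template:
      spec:
        containers:
        - name: trainer
          image: training-image:latest   # placeholder image
        restartPolicy: Never
```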
Add MPI Job Plugin
This feature provides a new Volcano Job plugin: the MPI plugin. It makes it more convenient for MPI users to run Volcano Jobs without manually wiring up connections between hosts of different roles, registering required environment variables, and so on. More details can be found at https://github.com/volcano-sh/volcano/blob/master/docs/design/distributed-framework-plugins.md. (#2237, @hwdef )
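Enabling the plugin is a matter of listing it under `spec.plugins` on the Job, roughly as below. The plugin argument names (`--master`, `--worker`, `--port`) are assumptions based on the distributed-framework-plugins design doc, and the pod templates are elided:

```yaml
# Sketch: enabling the MPI job plugin on a Volcano Job. The plugin sets up
# SSH and host discovery between master and worker tasks; argument names
# are assumptions from the distributed-framework-plugins design doc.
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: mpi-demo
spec:
  minAvailable: 3
  plugins:
    mpi: ["--master=master", "--worker=worker", "--port=22"]
  tasks:
  - name: master
    replicas: 1
    template: {}   # pod template omitted for brevity
  - name: worker
    replicas: 2
    template: {}   # pod template omitted for brevity
```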
Other Notable Changes
- update helm version in install.sh(#2103, @hwdef )
- modify the way to install the controller-gen(#2104, @hwdef )
- add shuffle action(#2174, @Thor-wl )
- add e2e Spark integration test(#2113, @Yikun )
- skip scoring when there is only one candidate node(#2122, @wpeng102 )
- skip verify init container SecurityContext.Privileged(#2125, @zrss )
- add design doc for usage based scheduling(#2023, @william-wang )
- add usage based scheduling plugin(#2129, @william-wang )
- support elastic annotation in preempt/reclaim plugin(#2105, @qiankunli )
- add design doc for Enhance-Generate-PodGroup-OwnerReferences-for-Normal-Pod(#2151, @wpeng102 )
- allow no retry when task failed(#2154, @merryzhou )
- remove useless code in task-topology's manager.go(#2159, @HeGaoYuan )
- add user guidance for svc plugin(#2162, @Thor-wl )
- add user guidance of env plugin(#2153, @Thor-wl )
- add user guidance for ssh plugin(#2168, @Thor-wl )
- add user guidance about how to configure volcano scheduler(#2177, @Thor-wl )
- add user guidance about how to configure job and task policy(#2179, @Thor-wl )
- add overhead for pod request(#2170, @jiangxiaobin96 )
- rename ClusterRole from prometheus to prometheus-volcano(#2178, @SimonYang-CS )
- add image pull secret for volcano-admission-init job(#2185, @SimonYang-CS )
- add rescheduling plugin(#2184, @Thor-wl )
- feat(scheduler): support resource quota consideration during pod group enqueue procedure(#1345, @merryzhou )
- add priorityClassName for rescheduler(#2200, @jiangxiaobin96 )
- allow privilege containers to pass the admission webhook validation by default(#2222, @Thor-wl )
- clean up metrics of deleted objects(#2230, @xiaoanyunfei )
- sunset the reservation plugin and elect reserve actions(#2236, @william-wang )
- add more deploy switches on helm(#2267, @shinytang6 )
Bug Fixes
- fix dynamic provision ut case error(#2133, @wpeng102 )
- fix: add jobUID into job's podgroup name ensure podgroup's unique(#2140, @FengXingYuXin)
- fix: Add mirror for Spark Volcano IT(#2163, @Yikun )
- fix controller job cache not sync latest version issue(#2169, @wpeng102 )
- fix task MinAvailable issue(#2176, @merryzhou )
- fix calculate inqueue resource bug in opensession(#2214, @zbbkeepgoing )
- fix GPU device IDs never being deleted when the number of GPUs decreases(#2215, @WingkaiHo)
- fix numa divided by zero(#2216, @elinx)
- fix helm install(#2218, @zirain )
- fix api-server deny empty admission response with PatchType set(#2267, @elinx)
- feat exclude unhealthy devices(#2267, @YongjiaHe)
- fix unhealthy gpu data struct array(#2267, @YongjiaHe)
- fix high priority task cannot preempt low priority task when queue is overused(#2267, @wpeng102 )
- avoid panic for query prometheus no data(#2267, @waiterQ )
- modify prometheus.query.result judgment(#2267, @waiterQ )
- fix(scheduler): fix jobStarvingFn logic(#2271, @shinytang6 )
v1.5.1
Changes since v1.5.0
- bug fix: fix the driver pod can not be created due to unreasonable admit (#2081 @william-wang )
- bug fix: fix error message in TestValidateJobCreate ( #2077 @william-wang )
- bug fix: "Open" state queue can be deleted (#2077 @Yikun )
- bug fix: upgrade webhook from v1beta1 to v1 to make sure the Volcano webhook works on K8s 1.22+ (#2077 @william-wang )
- bug fix: fix the proportion plugin that ignore the inqueue resource in running jobs( #2057 @Thor-wl )
- bug fix: set the initial phase to be pending for podgroup ( #2057 @Thor-wl )
- bug fix: regenerate installer/volcano-development-arm64.yaml to fix arm64 deployment ( #2030 @hwdef )
- bug fix: fix queue allocated exceeds capability ( #2035 @aidaizyy @Thor-wl )
v1.5.0
Changes since v1.5.0-Beta
- bug fix: fix some concurrent map bugs in numa-aware(#1968, @huone1 @Jason-Liu-Dream )
- bug fix: fix the scheduler stuck after delete resourcequota for namespace(#1978, @william-wang )
- bug fix: add individual development yamls for volcano v1.5(#2004, @hwdef )
v1.4.1
Changes since v1.4.0
- bug fix: fix panic in setNodeState function when node is nil(#1970, @Thor-wl )
- bug fix: fix possible panic when 'SetNode' is called(#1952, @william-wang )
- bug fix: fix some concurrent map bugs in numa-aware(#1969, @huone1 @Jason-Liu-Dream )
- bug fix: all pods still exist when restart count exceeds max retry(#1997, @william-wang )
- bug fix: add individual development yamls for volcano v1.4(#2002, @hwdef )
- bug fix: optimize resource comparison functions for performance(#2026, @huone1 )