[SPARK-50417] Make number of FallbackStorage sub-directories configurable #48960
What changes were proposed in this pull request?

This adds the option spark.storage.decommission.fallbackStorage.subPaths, which controls the maximum number of sub-directories created by the FallbackStorage per shuffle.

Why are the changes needed?

The current implementation creates a directory for each shuffle file that is being copied. For instance, a 100,000-partition shuffle creates 100,000 directories, each containing a single file. While S3 handles this layout well, other filesystems might struggle unnecessarily. Some control over the upper limit of directories is therefore useful.
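As a rough sketch of the bucketing idea (not the actual patch; the helper, parameter names, and hashing scheme below are hypothetical), each shuffle file could be hashed into a bounded set of sub-directories:

```scala
import java.nio.file.{Path, Paths}

// Hypothetical sketch of the idea behind the option: hash each shuffle file
// into one of `subPaths` sub-directories so the directory count per shuffle
// stays bounded no matter how many partitions the shuffle has.
object SubPathBucketing {
  def targetPath(fallbackRoot: Path, shuffleId: Int, fileName: String, subPaths: Int): Path = {
    // Math.floorMod keeps the bucket index non-negative for any hashCode.
    val bucket = Math.floorMod(fileName.hashCode, subPaths)
    fallbackRoot.resolve(shuffleId.toString).resolve(bucket.toString).resolve(fileName)
  }

  def main(args: Array[String]): Unit = {
    val root = Paths.get("/fallback")
    // With subPaths = 16, the 100,000 files of a 100,000-partition shuffle
    // land in at most 16 sub-directories instead of 100,000.
    println(targetPath(root, shuffleId = 7, fileName = "shuffle_7_12345_0.data", subPaths = 16))
  }
}
```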
Does this PR introduce any user-facing change?

Adds the option spark.storage.decommission.fallbackStorage.subPaths.
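For example, assuming the option takes an integer upper bound (the value 64 below is illustrative), it can be set like any other Spark configuration alongside the existing decommission settings:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative usage: enable fallback storage during decommissioning and cap
// the number of sub-directories created per shuffle. The value 64 is a
// placeholder, not a recommended default.
val spark = SparkSession.builder()
  .appName("fallback-storage-subpaths")
  .config("spark.storage.decommission.enabled", "true")
  .config("spark.storage.decommission.fallbackStorage.path", "s3a://bucket/fallback/")
  .config("spark.storage.decommission.fallbackStorage.subPaths", "64")
  .getOrCreate()
```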
How was this patch tested?
Added unit tests.
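As an illustration of the kind of property such a test might assert (written against the hypothetical bucketing helper sketched above, not Spark's actual internals):

```scala
// Hypothetical sanity check: regardless of partition count, files map into
// at most `subPaths` distinct sub-directories.
object SubPathBucketingCheck {
  def main(args: Array[String]): Unit = {
    val subPaths = 16
    val buckets = (0 until 100000)
      .map(i => Math.floorMod(s"shuffle_7_${i}_0.data".hashCode, subPaths))
      .toSet
    assert(buckets.size <= subPaths,
      s"expected at most $subPaths sub-directories, got ${buckets.size}")
    println(s"OK: ${buckets.size} sub-directories for 100000 files")
  }
}
```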
Was this patch authored or co-authored using generative AI tooling?
No