docs: Add benchmarking guide #444

Merged (2 commits) on May 17, 2024
43 changes: 43 additions & 0 deletions docs/source/contributor-guide/benchmarking.md
@@ -0,0 +1,43 @@
# Comet Benchmarking Guide
Member commented:

license header?

Member Author replied:

Good catch. It looks like we have no CI check for that on docs changes; I will look into it.
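For reference, Markdown docs in ASF projects normally carry the standard Apache license header in an HTML comment at the top of the file; a sketch of the header that would be added here:

```md
<!---
  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements.  See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership.  The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing,
  software distributed under the License is distributed on an
  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
  KIND, either express or implied.  See the License for the
  specific language governing permissions and limitations
  under the License.
-->
```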


To track progress on performance, we regularly run benchmarks derived from TPC-H and TPC-DS. Benchmarking scripts are
available in the [DataFusion Benchmarks](https://github.com/apache/datafusion-benchmarks) GitHub repository.

Here is an example command for running the benchmarks. It will need to be adapted to your Spark
environment and the location of your data files.

This command assumes that `datafusion-benchmarks` is checked out in a parallel directory to `datafusion-comet`.
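
If the benchmarks repository is not already present, it can be cloned next to the Comet checkout; a sketch with placeholder paths:

```shell
# Clone the benchmark scripts next to the Comet checkout so that the relative
# ../datafusion-benchmarks paths used below resolve. The parent directory is arbitrary.
cd /path/to/workspace   # directory that already contains datafusion-comet/
git clone https://github.com/apache/datafusion-benchmarks.git
cd datafusion-comet

# SPARK_HOME and COMET_JAR are placeholders for your Spark installation and
# the Comet jar built from this repository.
export SPARK_HOME=/path/to/spark
export COMET_JAR=/path/to/comet.jar
```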

```shell
$SPARK_HOME/bin/spark-submit \
--master "local[*]" \
--conf spark.driver.memory=8G \
--conf spark.executor.memory=64G \
--conf spark.executor.cores=16 \
--conf spark.cores.max=16 \
--conf spark.eventLog.enabled=true \
--conf spark.sql.autoBroadcastJoinThreshold=-1 \
--jars $COMET_JAR \
--conf spark.driver.extraClassPath=$COMET_JAR \
--conf spark.executor.extraClassPath=$COMET_JAR \
--conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \
--conf spark.comet.enabled=true \
--conf spark.comet.exec.enabled=true \
--conf spark.comet.exec.all.enabled=true \
--conf spark.comet.cast.allowIncompatible=true \
--conf spark.comet.explainFallback.enabled=true \
--conf spark.comet.parquet.io.enabled=false \
--conf spark.comet.batchSize=8192 \
--conf spark.comet.columnar.shuffle.enabled=false \
--conf spark.comet.exec.shuffle.enabled=true \
--conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \
--conf spark.sql.adaptive.coalescePartitions.enabled=false \
--conf spark.comet.shuffle.enforceMode.enabled=true \
../datafusion-benchmarks/runners/datafusion-comet/tpcbench.py \
--benchmark tpch \
--data /mnt/bigdata/tpch/sf100-parquet/ \
--queries ../datafusion-benchmarks/tpch/queries
```

Comet performance can be compared to regular Spark performance by running the benchmark twice, once with
`spark.comet.enabled` set to `true` and once with it set to `false`.
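
For example, the baseline (Comet disabled) run might look like the sketch below; it keeps the same resources, data
path, and queries as the command above and drops the Comet-specific shuffle and tuning settings:

```shell
# Baseline run with Comet disabled so that plain Spark executes the queries.
# Resource settings and paths are the same illustrative values as above.
$SPARK_HOME/bin/spark-submit \
--master "local[*]" \
--conf spark.driver.memory=8G \
--conf spark.executor.memory=64G \
--conf spark.executor.cores=16 \
--conf spark.cores.max=16 \
--conf spark.eventLog.enabled=true \
--conf spark.sql.autoBroadcastJoinThreshold=-1 \
--jars $COMET_JAR \
--conf spark.driver.extraClassPath=$COMET_JAR \
--conf spark.executor.extraClassPath=$COMET_JAR \
--conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \
--conf spark.comet.enabled=false \
../datafusion-benchmarks/runners/datafusion-comet/tpcbench.py \
--benchmark tpch \
--data /mnt/bigdata/tpch/sf100-parquet/ \
--queries ../datafusion-benchmarks/tpch/queries
```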
1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -57,6 +57,7 @@ as a native runtime to achieve improvement in terms of query efficiency and quer
Comet Plugin Overview <contributor-guide/plugin_overview>
Development Guide <contributor-guide/development>
Debugging Guide <contributor-guide/debugging>
Benchmarking Guide <contributor-guide/benchmarking>
Profiling Native Code <contributor-guide/profiling_native_code>
Github and Issue Tracker <https://github.com/apache/datafusion-comet>
