From f71287b8476ad524be279b8b04bd2122f1a561c1 Mon Sep 17 00:00:00 2001
From: Andy Grove
Date: Mon, 29 Apr 2024 08:27:14 -0600
Subject: [PATCH] Add more content to the user guide

---
 README.md                                     |  84 +-----------
 .../source/_static/images}/comet-overview.png | Bin
 .../source/_static/images}/comet-plan.png     | Bin
 .../_static/images}/comet-system-diagram.png  | Bin
 docs/source/index.rst                         |   6 +-
 docs/source/user-guide/datatypes.md           |  41 ++++++
 docs/source/user-guide/installation.md        | 125 ++++++++++++++++++
 docs/source/user-guide/operators.md           |  33 +++++
 docs/source/user-guide/overview.md            |  55 ++++++++
 9 files changed, 262 insertions(+), 82 deletions(-)
 rename {doc => docs/source/_static/images}/comet-overview.png (100%)
 rename {doc => docs/source/_static/images}/comet-plan.png (100%)
 rename {doc => docs/source/_static/images}/comet-system-diagram.png (100%)
 create mode 100644 docs/source/user-guide/datatypes.md
 create mode 100644 docs/source/user-guide/installation.md
 create mode 100644 docs/source/user-guide/operators.md
 create mode 100644 docs/source/user-guide/overview.md

diff --git a/README.md b/README.md
index 7d020eb4f..b793a7904 100644
--- a/README.md
+++ b/README.md
@@ -25,7 +25,7 @@ as native runtime to achieve improvement in terms of query efficiency and query
 Comet runs Spark SQL queries using the native DataFusion runtime, which is
 typically faster and more resource efficient than JVM based runtimes.
 
-<img src="doc/comet-overview.png" />
+<img src="docs/source/_static/images/comet-overview.png" />
 
 Comet aims to support:
@@ -39,7 +39,7 @@ Comet aims to support:
 
 The following diagram illustrates the architecture of Comet:
 
-<img src="doc/comet-system-diagram.png" />
+<img src="docs/source/_static/images/comet-system-diagram.png" />
 
 ## Current Status
@@ -69,82 +69,4 @@ Linux, Apple OSX (Intel and M1)
 
 ## Getting started
 
-Make sure the requirements above are met and software installed on your machine
-
-### Clone repo
-
-```commandline
-git clone https://github.com/apache/datafusion-comet.git
-```
-
-### Specify the Spark version and build the Comet
-
-Spark 3.4 used for the example.
-
-```
-cd datafusion-comet
-make release PROFILES="-Pspark-3.4"
-```
-
-### Run Spark with Comet enabled
-
-Make sure `SPARK_HOME` points to the same Spark version as Comet has built for.
-
-```
-$SPARK_HOME/bin/spark-shell --jars spark/target/comet-spark-spark3.4_2.12-0.1.0-SNAPSHOT.jar \
---conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \
---conf spark.comet.enabled=true \
---conf spark.comet.exec.enabled=true \
---conf spark.comet.exec.all.enabled=true
-```
-
-### Verify Comet enabled for Spark SQL query
-
-Create a test Parquet source
-
-```scala
-scala> (0 until 10).toDF("a").write.mode("overwrite").parquet("/tmp/test")
-```
-
-Query the data from the test source and check:
-
-- INFO message shows the native Comet library has been initialized.
-- The query plan reflects Comet operators being used for this query instead of Spark ones
-
-```scala
-scala> spark.read.parquet("/tmp/test").createOrReplaceTempView("t1")
-scala> spark.sql("select * from t1 where a > 5").explain
-INFO src/lib.rs: Comet native library initialized
-== Physical Plan ==
- *(1) ColumnarToRow
- +- CometFilter [a#14], (isnotnull(a#14) AND (a#14 > 5))
-    +- CometScan parquet [a#14] Batched: true, DataFilters: [isnotnull(a#14), (a#14 > 5)],
-       Format: CometParquet, Location: InMemoryFileIndex(1 paths)[file:/tmp/test], PartitionFilters: [],
-       PushedFilters: [IsNotNull(a), GreaterThan(a,5)], ReadSchema: struct<a:int>
-```
-
-### Enable Comet shuffle
-
-Comet shuffle feature is disabled by default. To enable it, please add related configs:
-
-```
---conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager
---conf spark.comet.exec.shuffle.enabled=true
-```
-
-Above configs enable Comet native shuffle which only supports hash partition and single partition.
-Comet native shuffle doesn't support complex types yet.
-
-Comet doesn't have official release yet so currently the only way to test it is to build jar and include it in your Spark application. Depending on your deployment mode you may also need to set the driver & executor class path(s) to explicitly contain Comet otherwise Spark may use a different class-loader for the Comet components than its internal components which will then fail at runtime. For example:
-
-```
---driver-class-path spark/target/comet-spark-spark3.4_2.12-0.1.0-SNAPSHOT.jar
-```
-
-Some cluster managers may require additional configuration, see https://spark.apache.org/docs/latest/cluster-overview.html
-
-To enable columnar shuffle which supports all partitioning and basic complex types, one more config is required:
-
-```
---conf spark.comet.columnar.shuffle.enabled=true
-```
+See the [DataFusion Comet User Guide](https://datafusion.apache.org/comet/user-guide/) for installation instructions.
diff --git a/doc/comet-overview.png b/docs/source/_static/images/comet-overview.png
similarity index 100%
rename from doc/comet-overview.png
rename to docs/source/_static/images/comet-overview.png
diff --git a/doc/comet-plan.png b/docs/source/_static/images/comet-plan.png
similarity index 100%
rename from doc/comet-plan.png
rename to docs/source/_static/images/comet-plan.png
diff --git a/doc/comet-system-diagram.png b/docs/source/_static/images/comet-system-diagram.png
similarity index 100%
rename from doc/comet-system-diagram.png
rename to docs/source/_static/images/comet-system-diagram.png
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 4462a8d87..a19f64233 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -40,8 +40,12 @@ as a native runtime to achieve improvement in terms of query efficiency and quer
    :maxdepth: 1
    :caption: User Guide
 
+   Comet Overview <user-guide/overview>
+   Installing Comet <user-guide/installation>
    Supported Expressions <user-guide/expressions>
-   user-guide/compatibility
+   Supported Operators <user-guide/operators>
+   Supported Data Types <user-guide/datatypes>
+   Compatibility Guide <user-guide/compatibility>
diff --git a/docs/source/user-guide/datatypes.md b/docs/source/user-guide/datatypes.md
new file mode 100644
index 000000000..02e968a4e
--- /dev/null
+++ b/docs/source/user-guide/datatypes.md
@@ -0,0 +1,41 @@
+# Supported Spark Data Types
+
+The following Spark data types are currently available:
+
+- Primitives
+  - Boolean
+  - Byte
+  - Short
+  - Integer
+  - Long
+  - Float
+  - Double
+- String
+- Binary
+- Decimal
+- Temporal
+  - Date
+  - Timestamp
+  - TimestampNTZ
+- Null
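+
+As a rough sanity check, the hypothetical snippet below (paths and column
+names are illustrative) writes a Parquet file that exercises several of these
+types and then inspects the query plan; reads of supported types should show
+up inside Comet operators such as `CometScan`:
+
+```scala
+// Build a small DataFrame covering several supported types via casts,
+// round-trip it through Parquet, and inspect the physical plan.
+val df = spark.range(10).selectExpr(
+  "id",                               // Long
+  "cast(id as int) as i",             // Integer
+  "cast(id as double) as d",          // Double
+  "cast(id as string) as s",          // String
+  "cast(id as decimal(10,2)) as dec", // Decimal
+  "current_date() as dt",             // Date
+  "current_timestamp() as ts")        // Timestamp
+df.write.mode("overwrite").parquet("/tmp/types_test")
+spark.read.parquet("/tmp/types_test").filter("i > 5").explain()
+```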
diff --git a/docs/source/user-guide/installation.md b/docs/source/user-guide/installation.md
new file mode 100644
index 000000000..b948d50cb
--- /dev/null
+++ b/docs/source/user-guide/installation.md
@@ -0,0 +1,125 @@
+# Installing DataFusion Comet
+
+Make sure the following requirements are met and the necessary software is installed on your machine.
+
+## Supported Platforms
+
+- Linux
+- Apple OSX (Intel and Apple Silicon)
+
+## Requirements
+
+- Apache Spark 3.2, 3.3, or 3.4
+- JDK 8 or later
+- GLIBC 2.17 (CentOS 7) or later
+
+## Using a Published Release
+
+There are no public releases available yet, so it is necessary to build from source as described in the next section.
+
+## Building From Source
+
+Clone the repository:
+
+```commandline
+git clone https://github.com/apache/datafusion-comet.git
+```
+
+Build Comet for a specific Spark version:
+
+```commandline
+cd datafusion-comet
+make release PROFILES="-Pspark-3.4"
+```
+
+Note that the project builds for Scala 2.12 by default, but can be built for Scala 2.13 using an additional profile:
+
+```commandline
+make release PROFILES="-Pspark-3.4 -Pscala-2.13"
+```
+
+## Run Spark with Comet enabled
+
+Make sure `SPARK_HOME` points to the same Spark version as Comet was built for.
+
+```commandline
+$SPARK_HOME/bin/spark-shell \
+  --jars spark/target/comet-spark-spark3.4_2.12-0.1.0-SNAPSHOT.jar \
+  --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \
+  --conf spark.comet.enabled=true \
+  --conf spark.comet.exec.enabled=true \
+  --conf spark.comet.exec.all.enabled=true
+```
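+
+If you prefer to configure the session programmatically, the same settings can
+be applied through the `SparkSession` builder. This is a minimal sketch, and it
+assumes the Comet jar is already on the driver and executor class paths:
+
+```scala
+import org.apache.spark.sql.SparkSession
+
+// The extension must be registered before the session is created.
+val spark = SparkSession.builder()
+  .config("spark.sql.extensions", "org.apache.comet.CometSparkSessionExtensions")
+  .config("spark.comet.enabled", "true")
+  .config("spark.comet.exec.enabled", "true")
+  .config("spark.comet.exec.all.enabled", "true")
+  .getOrCreate()
+```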
+
+### Verify that Comet is enabled for Spark SQL queries
+
+Create a test Parquet source:
+
+```scala
+scala> (0 until 10).toDF("a").write.mode("overwrite").parquet("/tmp/test")
+```
+
+Query the data from the test source and check:
+
+- An INFO message shows that the native Comet library has been initialized.
+- The query plan reflects Comet operators being used for this query instead of Spark operators.
+
+```scala
+scala> spark.read.parquet("/tmp/test").createOrReplaceTempView("t1")
+scala> spark.sql("select * from t1 where a > 5").explain
+INFO src/lib.rs: Comet native library initialized
+== Physical Plan ==
+ *(1) ColumnarToRow
+ +- CometFilter [a#14], (isnotnull(a#14) AND (a#14 > 5))
+    +- CometScan parquet [a#14] Batched: true, DataFilters: [isnotnull(a#14), (a#14 > 5)],
+       Format: CometParquet, Location: InMemoryFileIndex(1 paths)[file:/tmp/test], PartitionFilters: [],
+       PushedFilters: [IsNotNull(a), GreaterThan(a,5)], ReadSchema: struct<a:int>
+```
+
+### Enable Comet shuffle
+
+The Comet shuffle feature is disabled by default. To enable it, add the following configurations:
+
+```
+--conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager
+--conf spark.comet.exec.shuffle.enabled=true
+```
+
+The configurations above enable Comet native shuffle, which only supports hash partitioning and single partitioning.
+Comet native shuffle does not support complex types yet.
+
+Comet does not have an official release yet, so currently the only way to test it is to build the jar and include it in
+your Spark application. Depending on your deployment mode, you may also need to set the driver and executor class
+paths to explicitly contain Comet; otherwise, Spark may use a different class loader for the Comet components than for
+its internal components, which will then fail at runtime. For example:
+
+```
+--driver-class-path spark/target/comet-spark-spark3.4_2.12-0.1.0-SNAPSHOT.jar
+```
+
+Some cluster managers may require additional configuration; see https://spark.apache.org/docs/latest/cluster-overview.html.
+
+To enable columnar shuffle, which supports all partitioning schemes and basic complex types, one more configuration is required:
+
+```
+--conf spark.comet.columnar.shuffle.enabled=true
+```
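+
+To confirm that shuffle is handled natively, re-run an aggregation over the
+test data created in the verification step above and inspect the plan. This is
+a sketch; the exact Comet shuffle operator names shown by `explain` may vary
+between versions:
+
+```scala
+// An aggregation forces an exchange, so with Comet shuffle enabled a Comet
+// shuffle operator should replace Spark's Exchange in the physical plan.
+spark.read.parquet("/tmp/test")
+  .groupBy("a")
+  .count()
+  .explain()
+```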
diff --git a/docs/source/user-guide/operators.md b/docs/source/user-guide/operators.md
new file mode 100644
index 000000000..ec82e9f69
--- /dev/null
+++ b/docs/source/user-guide/operators.md
@@ -0,0 +1,33 @@
+# Supported Spark Operators
+
+The following Spark operators are currently available:
+
+- FileSourceScanExec/BatchScanExec for Parquet
+- Projection
+- Filter
+- Sort
+- Hash Aggregate
+- Limit
+- Sort-merge Join
+- Hash Join
+- Shuffle
+- Expand
diff --git a/docs/source/user-guide/overview.md b/docs/source/user-guide/overview.md
new file mode 100644
index 000000000..ff73176d8
--- /dev/null
+++ b/docs/source/user-guide/overview.md
@@ -0,0 +1,55 @@
+# Comet Overview
+
+Comet runs Spark SQL queries using the native Apache DataFusion runtime, which is
+typically faster and more resource efficient than JVM based runtimes.
+
+![Comet Overview](../_static/images/comet-overview.png)
+
+Comet aims to support:
+
+- a native Parquet implementation, including both reader and writer
+- full implementation of Spark operators, including
+  Filter/Project/Aggregation/Join/Exchange, etc.
+- full implementation of Spark built-in expressions
+- a UDF framework for users to migrate their existing UDFs to native execution
+
+## Architecture
+
+The following diagram illustrates the architecture of Comet:
+
+![Comet System Diagram](../_static/images/comet-system-diagram.png)
+
+## Current Status
+
+The project is currently integrated into Apache Spark 3.2, 3.3, and 3.4.
+
+## Feature Parity with Apache Spark
+
+The project strives to keep feature parity with Apache Spark; that is,
+users should expect the same behavior (w.r.t. features, configurations,
+query results, etc.) with Comet turned on or off in their Spark
+jobs. In addition, the Comet extension should automatically detect unsupported
+features and fall back to the Spark engine.
+
+To achieve this, besides unit tests within Comet itself, we also re-use
+Spark SQL tests and make sure they all pass with the Comet extension
+enabled.
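+
+As an illustration of what parity means in practice, the hypothetical snippet
+below (assuming `spark.comet.enabled` can be toggled at runtime, and reusing
+the `t1` view from the installation guide) runs the same query with Comet on
+and off and compares the results:
+
+```scala
+// Identical results are expected; only the physical plan should differ.
+spark.conf.set("spark.comet.enabled", "true")
+val withComet = spark.sql("select count(*) from t1 where a > 5").collect()
+
+spark.conf.set("spark.comet.enabled", "false")
+val withoutComet = spark.sql("select count(*) from t1 where a > 5").collect()
+
+assert(withComet.sameElements(withoutComet))
+```
+
+If the outputs ever diverge, that would indicate a compatibility gap worth reporting.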