diff --git a/EXPRESSIONS.md b/EXPRESSIONS.md deleted file mode 100644 index f0a2f6955..000000000 --- a/EXPRESSIONS.md +++ /dev/null @@ -1,109 +0,0 @@ - - -# Expressions Supported by Comet - -The following Spark expressions are currently available: - -+ Literals -+ Arithmetic Operators - + UnaryMinus - + Add/Minus/Multiply/Divide/Remainder -+ Conditional functions - + Case When - + If -+ Cast -+ Coalesce -+ BloomFilterMightContain -+ Boolean functions - + And - + Or - + Not - + EqualTo - + EqualNullSafe - + GreaterThan - + GreaterThanOrEqual - + LessThan - + LessThanOrEqual - + IsNull - + IsNotNull - + In -+ String functions - + Substring - + Coalesce - + StringSpace - + Like - + Contains - + Startswith - + Endswith - + Ascii - + Bit_length - + Octet_length - + Upper - + Lower - + Chr - + Initcap - + Trim/Btrim/Ltrim/Rtrim - + Concat_ws - + Repeat - + Length - + Reverse - + Instr - + Replace - + Translate -+ Bitwise functions - + Shiftright/Shiftleft -+ Date/Time functions - + Year/Hour/Minute/Second -+ Math functions - + Abs - + Acos - + Asin - + Atan - + Atan2 - + Cos - + Exp - + Ln - + Log10 - + Log2 - + Pow - + Round - + Signum - + Sin - + Sqrt - + Tan - + Ceil - + Floor -+ Aggregate functions - + Count - + Sum - + Max - + Min - + Avg - + First - + Last - + BitAnd - + BitOr - + BitXor - + BoolAnd - + BoolOr - + CovPopulation - + CovSample - + VariancePop - + VarianceSamp diff --git a/docs/source/contributing.md b/docs/source/contributor-guide/contributing.md similarity index 97% rename from docs/source/contributing.md rename to docs/source/contributor-guide/contributing.md index 22626925e..d33d42ba3 100644 --- a/docs/source/contributing.md +++ b/docs/source/contributor-guide/contributing.md @@ -32,6 +32,8 @@ Here are some areas where you can help: We maintain a list of good first issues in GitHub [here](https://github.com/apache/datafusion-comet/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22). 
+The list of currently supported Spark expressions can be found at [Supported Spark Expressions](../user-guide/expressions.md).
+
 ## Reporting issues
 
 We use [GitHub issues](https://github.com/apache/datafusion-comet/issues) for bug reports and feature requests.
diff --git a/DEBUGGING.md b/docs/source/contributor-guide/debugging.md
similarity index 87%
rename from DEBUGGING.md
rename to docs/source/contributor-guide/debugging.md
index 754316ad5..3b20ed0b2 100644
--- a/DEBUGGING.md
+++ b/docs/source/contributor-guide/debugging.md
@@ -20,12 +20,13 @@ under the License.
 # Comet Debugging Guide
 
 This HOWTO describes how to debug JVM code and Native code concurrently. The guide assumes you have:
+
 1. Intellij as the Java IDE
 2. CLion as the Native IDE. For Rust code, the CLion Rust language plugin is required. Note that the
-Intellij Rust plugin is not sufficient.
+   Intellij Rust plugin is not sufficient.
 3. CLion/LLDB as the native debugger. CLion ships with a bundled LLDB and the Rust community has
-its own packaging of LLDB (`lldb-rust`). Both provide a better display of Rust symbols than plain
-LLDB or the LLDB that is bundled with XCode. We will use the LLDB packaged with CLion for this guide.
+   its own packaging of LLDB (`lldb-rust`). Both provide a better display of Rust symbols than plain
+   LLDB or the LLDB that is bundled with XCode. We will use the LLDB packaged with CLion for this guide.
 4. We will use a Comet _unit_ test as the canonical use case.
 
 _Caveat: The steps here have only been tested with JDK 11_ on Mac (M1)
@@ -42,21 +43,24 @@ use advanced `lldb` debugging.
 
 1. Add a Debug Configuration for the unit test
 
 1. In the Debug Configuration for that unit test add `-Xint` as a JVM parameter. This option is
-undocumented *magic*. Without this, the LLDB debugger hits a EXC_BAD_ACCESS (or EXC_BAD_INSTRUCTION) from
-which one cannot recover.
+   undocumented _magic_. Without this, the LLDB debugger hits an EXC_BAD_ACCESS (or EXC_BAD_INSTRUCTION) from
+   which one cannot recover.
+
+1. Add a println to the unit test to print the PID of the JVM process. (jps can also be used but this is less error prone if you have multiple jvm processes running)
+
+   ```JDK8
+   println(s"Waiting for Debugger: PID - ${ManagementFactory.getRuntimeMXBean().getName()}")
+   ```
+
+   This will print something like: `PID@your_machine_name`.
-1. Add a println to the unit test to print the PID of the JVM process. (jps can also be used but this is less error prone if you have multiple jvm processes running)
-   ``` JDK8
-   println("Waiting for Debugger: PID - ", ManagementFactory.getRuntimeMXBean().getName())
-   ```
-   This will print something like : `PID@your_machine_name`.
+
+   For JDK9 and newer
-   For JDK9 and newer
-   ```JDK9
-   println("Waiting for Debugger: PID - ", ProcessHandle.current.pid)
-   ```
+
+   ```JDK9
+   println(s"Waiting for Debugger: PID - ${ProcessHandle.current.pid}")
+   ```
-   ==> Note the PID
+
+   ==> Note the PID
 
 1. Debug-run the test in Intellij and wait for the breakpoint to be hit
 
@@ -96,7 +100,8 @@ Detecting the debugger
 https://stackoverflow.com/questions/5393403/can-a-java-application-detect-that-a-debugger-is-attached#:~:text=No.,to%20let%20your%20app%20continue.&text=I%20know%20that%20those%20are,meant%20with%20my%20first%20phrase).
 
 # Verbose debug
-By default, Comet outputs the exception details specific for Comet.
+
+By default, Comet outputs the exception details specific to Comet.
 
 ```scala
 scala> spark.sql("my_failing_query").show(false)
 24/03/05 17:00:49 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
 org.apache.comet.CometNativeException: Internal error: MIN/MAX is not expected to receive scalars of incompatible types (Date32("NULL"), Int32(15901)).
 This was likely caused by a bug in DataFusion's code and we would welcome that you file an bug report in our issue tracker
 ```
 
 There is a verbose exception option by leveraging DataFusion [backtraces](https://arrow.apache.org/datafusion/user-guide/example-usage.html#enable-backtraces)
-This option allows to append native DataFusion stacktrace to the original error message.
+This option allows appending the native DataFusion stacktrace to the original error message.
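Conceptually, the appended stacktrace relies on the Rust standard library's `std::backtrace` machinery. The following is a minimal standalone sketch of that mechanism only (it is not Comet or DataFusion code, and the function and message names are invented for illustration):

```rust
use std::backtrace::Backtrace;

// Sketch of the mechanism the DataFusion `backtrace` feature builds on:
// capture a backtrace at the point where an error is constructed and
// carry it along in the error message.
fn failing_operation() -> Result<(), String> {
    // `force_capture` always captures a backtrace; `Backtrace::capture()`
    // would only do so when RUST_BACKTRACE (or RUST_LIB_BACKTRACE) is set.
    let bt = Backtrace::force_capture();
    Err(format!(
        "Internal error: something went wrong\n\nbacktrace:\n{bt}"
    ))
}

fn main() {
    // The error message now carries the native stack trace, similar in
    // spirit to the expanded exception shown later in this guide.
    let msg = failing_operation().unwrap_err();
    println!("{msg}");
}
```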
 To enable this option with Comet it is needed to include `backtrace` feature in [Cargo.toml](https://github.com/apache/arrow-datafusion-comet/blob/main/core/Cargo.toml) for DataFusion dependencies
 
 ```
@@ -129,15 +134,16 @@
 RUST_BACKTRACE=1 $SPARK_HOME/spark-shell --jars spark/target/comet-spark-spark3.
 ```
 
 Get the expanded exception details
+
 ```scala
 scala> spark.sql("my_failing_query").show(false)
 24/03/05 17:00:49 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
 org.apache.comet.CometNativeException: Internal error: MIN/MAX is not expected to receive scalars of incompatible types (Date32("NULL"), Int32(15901))
 
-backtrace: 
+backtrace:
 0: std::backtrace::Backtrace::create
 1: datafusion_physical_expr::aggregate::min_max::min
- 2: ::update_batch
+ 2: ::update_batch
 3: as futures_core::stream::Stream>::poll_next
 4: comet::execution::jni_api::Java_org_apache_comet_Native_executePlan::{{closure}}
 5: _Java_org_apache_comet_Native_executePlan
 
 at org.apache.comet.CometExecIterator.hasNext(CometExecIterator.scala:126)
 (reduced)
 ```
 
+Note:
+
 - The backtrace coverage in DataFusion is still improving. So there is a chance the error still not covered, if so feel free to file a [ticket](https://github.com/apache/arrow-datafusion/issues)
 - The backtrace evaluation comes with performance cost and intended mostly for debugging purposes
 
diff --git a/DEVELOPMENT.md b/docs/source/contributor-guide/development.md
similarity index 92%
rename from DEVELOPMENT.md
rename to docs/source/contributor-guide/development.md
index 6dc0f1f23..63146c191 100644
--- a/DEVELOPMENT.md
+++ b/docs/source/contributor-guide/development.md
@@ -49,25 +49,29 @@ A few common commands are specified in project's `Makefile`:
 - `make clean`: clean up the workspace
 - `bin/comet-spark-shell -d . -o spark/target/` run Comet spark shell for V1 datasources
 - `bin/comet-spark-shell -d . -o spark/target/ --conf spark.sql.sources.useV1SourceList=""` run Comet spark shell for V2 datasources
-
+
 ## Development Environment
+
 Comet is a multi-language project with native code written in Rust and JVM code written in Java and Scala.
-For Rust code, the CLion IDE is recommended. For JVM code, IntelliJ IDEA is recommended.
+For Rust code, the CLion IDE is recommended. For JVM code, IntelliJ IDEA is recommended.
 Before opening the project in an IDE, make sure to run `make` first to generate the necessary files for the IDEs. Currently, it's mostly about
 generating protobuf message classes for the JVM side. It's only required to run `make` once after cloning the repo.
 
 ### IntelliJ IDEA
+
-First make sure to install the Scala plugin in IntelliJ IDEA.
+First make sure to install the Scala plugin in IntelliJ IDEA.
 After that, you can open the project in IntelliJ IDEA. The IDE should automatically detect the project structure and import as a Maven project.
 
 ### CLion
+
 First make sure to install the Rust plugin in CLion or you can use the dedicated Rust IDE: RustRover.
 After that you can open the project in CLion. The IDE should automatically detect the project structure and import as a Cargo project.
 
 ### Running Tests in IDEA
+
 Like other Maven projects, you can run tests in IntelliJ IDEA by right-clicking on the test class or test method and selecting "Run" or "Debug".
-However if the tests is related to the native side. Please make sure to run `make core` or `cd core && cargo build` before running the tests in IDEA.
+However, if the test is related to the native side, please make sure to run `make core` or `cd core && cargo build` before running the tests in IDEA.
 
 ## Benchmark
 
@@ -82,9 +86,11 @@
 To run TPC-H or TPC-DS micro benchmarks, please follow the instructions
 in the respective source code, e.g., `CometTPCHQueryBenchmark`.
 
 ## Debugging
+
 Comet is a multi-language project with native code written in Rust and JVM code written in Java and Scala.
-It is possible to debug both native and JVM code concurrently as described in the [DEBUGGING guide](DEBUGGING.md)
+It is possible to debug both native and JVM code concurrently as described in the [DEBUGGING guide](debugging)
 
 ## Submitting a Pull Request
-Comet uses `cargo fmt`, [Scalafix](https://github.com/scalacenter/scalafix) and [Spotless](https://github.com/diffplug/spotless/tree/main/plugin-maven) to
-automatically format the code. Before submitting a pull request, you can simply run `make format` to format the code.
\ No newline at end of file
+
+Comet uses `cargo fmt`, [Scalafix](https://github.com/scalacenter/scalafix) and [Spotless](https://github.com/diffplug/spotless/tree/main/plugin-maven) to
+automatically format the code. Before submitting a pull request, you can simply run `make format` to format the code.
diff --git a/docs/source/index.rst b/docs/source/index.rst
index 6691bcd26..4462a8d87 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -35,25 +35,23 @@ Apache DataFusion Comet
 Apache DataFusion Comet is an Apache Spark plugin that uses Apache DataFusion as a native runtime to achieve
 improvement in terms of query efficiency and query runtime.
 
-This documentation site is currently being developed. The most up-to-date documentation can be found in the
-GitHub repository at https://github.com/apache/datafusion-comet.
-
-If you would like to contribute to Comet, please see our contributing guide.
-
 .. _toc.links:
 .. toctree::
    :maxdepth: 1
-   :caption: Contributor Guide
+   :caption: User Guide
 
-   Contributing
-   Github and Issue Tracker
+   Supported Expressions
+   user-guide/compatibility
 
 .. _toc.links:
 .. toctree::
    :maxdepth: 1
-   :caption: User Guide
+   :caption: Contributor Guide
 
-   compatibility
+   Getting Started
+   Github and Issue Tracker
+   contributor-guide/development
+   contributor-guide/debugging
 
 .. _toc.asf-links:
 .. 
toctree:: diff --git a/docs/source/compatibility.md b/docs/source/user-guide/compatibility.md similarity index 97% rename from docs/source/compatibility.md rename to docs/source/user-guide/compatibility.md index 6e69f84c5..d817ba5b6 100644 --- a/docs/source/compatibility.md +++ b/docs/source/user-guide/compatibility.md @@ -34,7 +34,7 @@ There is an [epic](https://github.com/apache/datafusion-comet/issues/313) where ## Cast -Comet currently delegates to Apache DataFusion for most cast operations, and this means that the behavior is not +Comet currently delegates to Apache DataFusion for most cast operations, and this means that the behavior is not guaranteed to be consistent with Spark. There is an [epic](https://github.com/apache/datafusion-comet/issues/286) where we are tracking the work to implement Spark-compatible cast expressions. diff --git a/docs/source/user-guide/expressions.md b/docs/source/user-guide/expressions.md new file mode 100644 index 000000000..f67a4eada --- /dev/null +++ b/docs/source/user-guide/expressions.md @@ -0,0 +1,109 @@ + + +# Supported Spark Expressions + +The following Spark expressions are currently available: + +- Literals +- Arithmetic Operators + - UnaryMinus + - Add/Minus/Multiply/Divide/Remainder +- Conditional functions + - Case When + - If +- Cast +- Coalesce +- BloomFilterMightContain +- Boolean functions + - And + - Or + - Not + - EqualTo + - EqualNullSafe + - GreaterThan + - GreaterThanOrEqual + - LessThan + - LessThanOrEqual + - IsNull + - IsNotNull + - In +- String functions + - Substring + - Coalesce + - StringSpace + - Like + - Contains + - Startswith + - Endswith + - Ascii + - Bit_length + - Octet_length + - Upper + - Lower + - Chr + - Initcap + - Trim/Btrim/Ltrim/Rtrim + - Concat_ws + - Repeat + - Length + - Reverse + - Instr + - Replace + - Translate +- Bitwise functions + - Shiftright/Shiftleft +- Date/Time functions + - Year/Hour/Minute/Second +- Math functions + - Abs + - Acos + - Asin + - Atan + - Atan2 + - Cos + - 
Exp + - Ln + - Log10 + - Log2 + - Pow + - Round + - Signum + - Sin + - Sqrt + - Tan + - Ceil + - Floor +- Aggregate functions + - Count + - Sum + - Max + - Min + - Avg + - First + - Last + - BitAnd + - BitOr + - BitXor + - BoolAnd + - BoolOr + - CovPopulation + - CovSample + - VariancePop + - VarianceSamp
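
A practical way to check whether an expression from the list above actually runs natively is to start a Comet-enabled Spark shell and inspect the query plan. The sketch below follows the spark-shell invocation shown in the debugging guide; the jar file name is illustrative and depends on your build, Spark version, and Scala version:

```shell
# Start a Spark shell with Comet enabled (adapt the jar path to your build).
$SPARK_HOME/bin/spark-shell \
  --jars spark/target/comet-spark-spark3.4_2.12-0.1.0-SNAPSHOT.jar \
  --conf spark.plugins=org.apache.spark.CometPlugin \
  --conf spark.comet.enabled=true \
  --conf spark.comet.exec.enabled=true

# Inside the shell, run `spark.sql("...").explain()` on a query using the
# expression of interest: supported expressions show up inside operators
# with Comet-prefixed names (e.g. CometProject, CometFilter), while
# unsupported ones fall back to the regular Spark operators.
```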