From 30c37583df2b9b7ad4d15477a06ed7c08399ef77 Mon Sep 17 00:00:00 2001
From: Trent Hauck
Date: Thu, 16 May 2024 20:56:14 -0700
Subject: [PATCH] docs: fix header level

---
 docs/source/contributor-guide/adding_a_new_expression.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/contributor-guide/adding_a_new_expression.md b/docs/source/contributor-guide/adding_a_new_expression.md
index 8cc4348099..1112568069 100644
--- a/docs/source/contributor-guide/adding_a_new_expression.md
+++ b/docs/source/contributor-guide/adding_a_new_expression.md
@@ -165,14 +165,14 @@ pub(super) fn spark_unhex(args: &[ColumnarValue]) -> Result<ColumnarValue> {
 
 > **_NOTE:_** If you call the `make_comet_scalar_udf` macro with the data type, the function signature will include the data type as a second argument.
 
-## API Differences Between Spark Versions
+### API Differences Between Spark Versions
 
 If the expression you're adding has different behavior across different Spark versions, you'll need to account for that in your implementation. There are two tools at your disposal to help with this:
 
 1. Shims that exist in `spark/src/main/spark-$SPARK_VERSION/org/apache/comet/shims/CometExprShim.scala` for each Spark version. These shims are used to provide compatibility between different Spark versions.
 2. Variables that correspond to the Spark version, such as `isSpark32`, which can be used to conditionally execute code based on the Spark version.
 
-## Shimming
+## Shimming to Support Different Spark Versions
 
 By adding shims for each Spark version, you can provide a consistent interface for the expression across different Spark versions. For example, `unhex` added a new optional parameter in Spark 3.4, for whether it should `failOnError` or not. So for versions 3.2 and 3.3, the shim is:
 
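The quoted hunk ends just before the doc's actual shim snippet. For readers without the full file, here is a minimal sketch of what the Spark 3.2/3.3 `CometExprShim` could look like; the helper name `unhexSerde` and the exact return shape are illustrative assumptions, not quoted from the repository:

```scala
// Hypothetical sketch of spark/src/main/spark-3.x/.../shims/CometExprShim.scala
// for Spark 3.2/3.3. Unhex has no failOnError flag in these versions, so the
// shim pins the second element to a constant `false` literal.
import org.apache.spark.sql.catalyst.expressions.{Expression, Literal, Unhex}

trait CometExprShim {
  /** Returns unhex's child expression plus a literal failOnError flag. */
  protected def unhexSerde(unhex: Unhex): (Expression, Expression) =
    (unhex.child, Literal(false))
}
```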
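Under the same assumptions, the Spark 3.4 shim could forward the expression's real flag instead, so the serialization code calls `unhexSerde` identically on every version:

```scala
// Hypothetical Spark 3.4 counterpart: Unhex now carries failOnError,
// so the shim forwards it rather than hard-coding false.
import org.apache.spark.sql.catalyst.expressions.{Expression, Literal, Unhex}

trait CometExprShim {
  protected def unhexSerde(unhex: Unhex): (Expression, Expression) =
    (unhex.child, Literal(unhex.failOnError))
}
```

Returning a `(Expression, Expression)` pair in both cases keeps the caller's signature identical across Spark versions, which is the point of the shim.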