From 36687598f0b908d071029855ddfaa0be8d72f44e Mon Sep 17 00:00:00 2001
From: Trent Hauck
Date: Sat, 18 May 2024 13:36:14 -0700
Subject: [PATCH] docs: clarify language about `isSpark32`

---
 docs/source/contributor-guide/adding_a_new_expression.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/contributor-guide/adding_a_new_expression.md b/docs/source/contributor-guide/adding_a_new_expression.md
index 30b4c376f4..6cf10c7586 100644
--- a/docs/source/contributor-guide/adding_a_new_expression.md
+++ b/docs/source/contributor-guide/adding_a_new_expression.md
@@ -59,7 +59,7 @@ case e: Unhex if !isSpark32 =>
 
 A few things to note here:
 
-* The `isSpark32` check is used to fall back to Spark's implementation of `unhex` in Spark 3.2, as only versions after that have the `failOnError` parameter.
+* The `isSpark32` check is used to fall back to Spark's implementation of `unhex` in Spark 3.2. This is somewhat context specific, because in this case, due to a bug in Spark 3.2 for `unhex`, we want to use the Spark implementation and not a Comet implementation that would behave differently if correct.
 * The function is recursively called on child expressions, so you'll need to make sure that the child expressions are also converted to protobuf.
 * `scalarExprToProtoWithReturnType` is for scalar functions that need return type information. Your expression may use a different method depending on the type of expression.
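
The version-gated fallback pattern this patch documents can be sketched in isolation. This is a minimal, self-contained illustration, not Comet's real code: `Unhex`, `isSpark32`, and the string "serialization" here are stand-ins (Comet's actual conversion produces protobuf and takes real Catalyst expressions). Returning `None` models signaling "let Spark handle this expression".

```scala
// Hypothetical sketch of the `isSpark32`-guarded match described in the patch.
// On Spark 3.2 we deliberately return None so the engine falls back to Spark's
// own `unhex`, because a correct Comet implementation would diverge from
// 3.2's buggy behavior.
object ExprConversionSketch {
  // Illustrative stand-in for the expression; the child is assumed to have
  // already been converted recursively (None means conversion failed).
  final case class Unhex(convertedChild: Option[String])

  // Stand-in for Comet's version check.
  def isSpark32(sparkVersion: String): Boolean =
    sparkVersion.startsWith("3.2")

  // Some(serialized) when this sketch "supports" the expression,
  // None to signal a fallback to Spark's implementation.
  def exprToProto(e: Unhex, sparkVersion: String): Option[String] =
    e match {
      case Unhex(Some(child)) if !isSpark32(sparkVersion) =>
        Some(s"unhex($child)")
      case _ =>
        None // Spark 3.2, or the child itself could not be converted
    }
}
```

The key design point mirrored here is that the guard sits on the `case` pattern itself, so unsupported versions simply fail to match and flow into the fallback arm rather than needing a separate error path.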