feat: Implement Spark unhex #342

Merged · 20 commits · May 9, 2024 · Changes from 11 commits

3 changes: 2 additions & 1 deletion common/pom.xml
@@ -179,7 +179,8 @@ under the License.
            </goals>
            <configuration>
              <sources>
-               <source>src/main/${shims.source}</source>
+               <source>src/main/${shims.majorSource}</source>
+               <source>src/main/${shims.minorSource}</source>

Member: Thanks for adding the minor version shims. These are going to help me with some of my work around supporting cast.


              </sources>
            </configuration>
          </execution>

14 changes: 10 additions & 4 deletions core/src/execution/datafusion/expressions/scalar_funcs.rs
@@ -52,6 +52,9 @@ use num::{
};
use unicode_segmentation::UnicodeSegmentation;

+mod unhex;
+use unhex::spark_unhex;

macro_rules! make_comet_scalar_udf {
    ($name:expr, $func:ident, $data_type:ident) => {{
        let scalar_func = CometScalarFunction::new(
@@ -105,6 +108,10 @@ pub fn create_comet_physical_fun(
"make_decimal" => {
make_comet_scalar_udf!("make_decimal", spark_make_decimal, data_type)
}
"unhex" => {
let func = Arc::new(spark_unhex);
make_comet_scalar_udf!("unhex", func, without data_type)
}
"decimal_div" => {
make_comet_scalar_udf!("decimal_div", spark_decimal_div, data_type)
}
@@ -123,11 +130,10 @@
            make_comet_scalar_udf!(spark_func_name, wrapped_func, without data_type)
        }
        _ => {
-           let fun = BuiltinScalarFunction::from_str(fun_name);
-           if fun.is_err() {
-               Ok(ScalarFunctionDefinition::UDF(registry.udf(fun_name)?))
+           if let Ok(fun) = BuiltinScalarFunction::from_str(fun_name) {

Contributor Author: Unrelated, but more idiomatic IMO.

+               Ok(ScalarFunctionDefinition::BuiltIn(fun))
            } else {
-               Ok(ScalarFunctionDefinition::BuiltIn(fun?))
+               Ok(ScalarFunctionDefinition::UDF(registry.udf(fun_name)?))
            }
        }
    }
149 changes: 149 additions & 0 deletions core/src/execution/datafusion/expressions/scalar_funcs/unhex.rs
@@ -0,0 +1,149 @@
// Licensed to the Apache Software Foundation (ASF) under one
// or more contributor license agreements. See the NOTICE file
// distributed with this work for additional information
// regarding copyright ownership. The ASF licenses this file
// to you under the Apache License, Version 2.0 (the
// "License"); you may not use this file except in compliance
// with the License. You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing,
// software distributed under the License is distributed on an
// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
// KIND, either express or implied. See the License for the
// specific language governing permissions and limitations
// under the License.

use std::sync::Arc;

use arrow_array::{Array, OffsetSizeTrait};
use arrow_schema::DataType;
use datafusion::logical_expr::ColumnarValue;
use datafusion_common::{cast::as_generic_string_array, exec_err, DataFusionError, ScalarValue};

fn unhex(string: &str, result: &mut Vec<u8>) -> Result<(), DataFusionError> {
    if string.is_empty() {
        return Ok(());
    }

    // Adjust the string if it has an odd length, and prepare to add a padding byte if needed.
    let needs_padding = string.len() % 2 != 0;
    let adjusted_string = if needs_padding { &string[1..] } else { string };

Contributor: If I understand this correctly, string[0] is discarded when the length is odd. Is that intentional?

Member: Here is the logic in Spark 3.4.2 for handling the first char when the input is padded, for reference. It looks like there is some validation of the first digit that we do not have in this PR, and the unhexed first digit is stored in the output and used in the return value when the input string has length 1. It would be good to make sure that we have tests covering this case.

    if ((bytes.length & 0x01) != 0) {
      // padding with '0'
      if (bytes(0) < 0) {
        return null
      }
      val v = Hex.unhexDigits(bytes(0))
      if (v == -1) {
        return null
      }
      out(0) = v
      i += 1
      oddShift = 1
    }
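
For comparison, a Spark-consistent treatment would keep that leading digit as its own zero-padded output byte instead of dropping it. A minimal sketch of that approach (a hypothetical helper for illustration, not code from this PR, and omitting Spark's negative-byte validation):

    // Sketch only: Spark-style unhex for odd-length input. "F1A" decodes as if
    // it were "0F1A", i.e. [0x0F, 0x1A]; the leading digit is kept, not discarded.
    fn unhex_spark_style(s: &str, out: &mut Vec<u8>) -> Result<(), String> {
        // Decode every character to a nibble up front so invalid digits fail early.
        let nibbles = s
            .chars()
            .map(|c| {
                c.to_digit(16)
                    .map(|d| d as u8)
                    .ok_or_else(|| format!("invalid hex character: {c}"))
            })
            .collect::<Result<Vec<u8>, String>>()?;

        let mut rest = nibbles.as_slice();
        // Odd digit count: emit the first digit as its own zero-padded byte,
        // mirroring the `oddShift` handling quoted above.
        if rest.len() % 2 != 0 {
            out.push(rest[0]);
            rest = &rest[1..];
        }
        // The remaining digits pair up as high/low nibbles.
        for pair in rest.chunks_exact(2) {
            out.push((pair[0] << 4) | pair[1]);
        }
        Ok(())
    }

Under these assumptions, unhex_spark_style("123", &mut v) yields [0x01, 0x23], where the code below yields [0x23, 0x00].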


    let mut iter = adjusted_string.chars().peekable();
    while let (Some(high_char), Some(low_char)) = (iter.next(), iter.next()) {
        let high = high_char
            .to_digit(16)
            .ok_or_else(|| DataFusionError::Internal("Invalid hex character".to_string()))?;
        let low = low_char
            .to_digit(16)
            .ok_or_else(|| DataFusionError::Internal("Invalid hex character".to_string()))?;

        result.push((high << 4 | low) as u8);
    }

    if needs_padding {
        result.push(0);
    }

    Ok(())
}
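
As a concrete instance of the nibble arithmetic: for the pair "4D", '4'.to_digit(16) gives 4 and 'D'.to_digit(16) gives 13, so (high << 4 | low) is 0x4D, the byte for 'M'.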

fn spark_unhex_inner<T: OffsetSizeTrait>(
    array: &ColumnarValue,
    fail_on_error: bool,
) -> Result<ColumnarValue, DataFusionError> {
    match array {
        ColumnarValue::Array(array) => {
            let string_array = as_generic_string_array::<T>(array)?;

            let mut builder = arrow::array::BinaryBuilder::new();
            let mut encoded = Vec::new();

            for i in 0..string_array.len() {
                let string = string_array.value(i);

                if unhex(string, &mut encoded).is_ok() {
                    builder.append_value(encoded.as_slice());
                    encoded.clear();
                } else if fail_on_error {
                    return exec_err!("Input to unhex is not a valid hex string: {string}");
                } else {
                    builder.append_null();
                }
            }
            Ok(ColumnarValue::Array(Arc::new(builder.finish())))
        }
        ColumnarValue::Scalar(ScalarValue::Utf8(Some(string))) => {
            let mut encoded = Vec::new();

            if unhex(string, &mut encoded).is_ok() {
                Ok(ColumnarValue::Scalar(ScalarValue::Binary(Some(encoded))))
            } else if fail_on_error {
                exec_err!("Input to unhex is not a valid hex string: {string}")
            } else {
                Ok(ColumnarValue::Scalar(ScalarValue::Binary(None)))
            }
        }
        _ => {
            exec_err!(
                "The first argument must be a string scalar or array, but got: {:?}",
                array
            )
        }
    }
}

pub(super) fn spark_unhex(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {
    if args.len() > 2 {
        return exec_err!("unhex takes at most 2 arguments, but got: {}", args.len());
    }

    let val_to_unhex = &args[0];
    let fail_on_error = if args.len() == 2 {
        match &args[1] {
            ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error))) => *fail_on_error,
            _ => {
                return exec_err!(
                    "The second argument must be boolean scalar, but got: {:?}",
                    args[1]
                );
            }
        }
    } else {
        false
    };

    match val_to_unhex.data_type() {
        DataType::Utf8 => spark_unhex_inner::<i32>(val_to_unhex, fail_on_error),
        DataType::LargeUtf8 => spark_unhex_inner::<i64>(val_to_unhex, fail_on_error),
        other => exec_err!(
            "The first argument must be a string scalar or array, but got: {:?}",
            other
        ),
    }
}
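
For orientation, a test-style sketch of calling this entry point directly — not part of the PR, with hypothetical names (usage_sketch, scalar_with_flag) — showing the argument shapes: the string to decode as the first ColumnarValue, and Spark's failOnError flag as an optional boolean scalar second argument.

    // Sketch only: exercising spark_unhex with a scalar input and an explicit
    // failOnError flag, using the same imports as this file.
    #[cfg(test)]
    mod usage_sketch {
        use datafusion::logical_expr::ColumnarValue;
        use datafusion_common::{DataFusionError, ScalarValue};

        use super::spark_unhex;

        #[test]
        fn scalar_with_flag() -> Result<(), DataFusionError> {
            let args = [
                // "537061726B" unhexes to the bytes of "Spark".
                ColumnarValue::Scalar(ScalarValue::Utf8(Some("537061726B".to_string()))),
                // failOnError = false: invalid input becomes Binary(None), not an error.
                ColumnarValue::Scalar(ScalarValue::Boolean(Some(false))),
            ];
            match spark_unhex(&args)? {
                ColumnarValue::Scalar(ScalarValue::Binary(Some(bytes))) => {
                    assert_eq!(bytes, b"Spark")
                }
                other => panic!("unexpected result: {other:?}"),
            }
            Ok(())
        }
    }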

#[cfg(test)]
mod test {
    use super::unhex;

    #[test]
    fn test_unhex() -> Result<(), Box<dyn std::error::Error>> {
        let mut result = Vec::new();

        unhex("537061726B2053514C", &mut result)?;

Member: Could we also have a test for the case where the input is padded?

        let result_str = std::str::from_utf8(&result)?;
        assert_eq!(result_str, "Spark SQL");
        result.clear();

        assert!(unhex("hello", &mut result).is_err());
        result.clear();

        unhex("", &mut result)?;
        assert!(result.is_empty());

        Ok(())
    }
}

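
Picking up the review ask above, a sketch of what a padded-input test might look like, asserting Spark's semantics ("123" decodes as if it were "0123"). This is illustrative only, not a test from the PR; the implementation as written in this revision would not pass it.

    // Sketch only: the odd-length ("padded") input case requested in review,
    // asserting Spark's behavior of zero-padding the leading digit.
    #[test]
    fn test_unhex_odd_length() -> Result<(), Box<dyn std::error::Error>> {
        let mut result = Vec::new();

        unhex("123", &mut result)?;
        assert_eq!(result, &[0x01, 0x23]);

        Ok(())
    }
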
10 changes: 6 additions & 4 deletions core/src/execution/datafusion/planner.rs
@@ -1301,24 +1301,26 @@ impl PhysicalPlanner {
            .iter()
            .map(|x| x.data_type(input_schema.as_ref()))
            .collect::<Result<Vec<_>, _>>()?;

        let data_type = match expr.return_type.as_ref().map(to_arrow_datatype) {
            Some(t) => t,
            None => {
                // If no data type is provided from Spark, we'll use DF's return type from the
                // scalar function
                // Note this assumes the `fun_name` is a defined function in DF. Otherwise, it'll
                // throw error.
-               let fun = BuiltinScalarFunction::from_str(fun_name);
-               if fun.is_err() {
+               if let Ok(fun) = BuiltinScalarFunction::from_str(fun_name) {

Contributor Author: Unrelated, but more idiomatic IMO.

+                   fun.return_type(&input_expr_types)?
                } else {
                    self.session_ctx
                        .udf(fun_name)?
                        .inner()
                        .return_type(&input_expr_types)?
-               } else {
-                   fun?.return_type(&input_expr_types)?
                }
            }
        };

        let fun_expr =
            create_comet_physical_fun(fun_name, data_type.clone(), &self.session_ctx.state())?;

6 changes: 5 additions & 1 deletion pom.xml
@@ -88,7 +88,8 @@ under the License.
    <argLine>-ea -Xmx4g -Xss4m ${extraJavaTestArgs}</argLine>
    <additional.3_3.test.source>spark-3.3-plus</additional.3_3.test.source>
    <additional.3_4.test.source>spark-3.4</additional.3_4.test.source>
-   <shims.source>spark-3.x</shims.source>
+   <shims.majorSource>spark-3.x</shims.majorSource>
+   <shims.minorSource>spark-3.4</shims.minorSource>
</properties>
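
Together with the Spark profiles further down, this means a build always compiles the shared spark-3.x shim sources plus exactly one minor-version shim tree, since each profile overrides shims.minorSource (spark-3.2, spark-3.3, or spark-3.4).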

<dependencyManagement>
@@ -500,6 +501,7 @@ under the License.
    <!-- we don't add special test suits for spark-3.2, so a not existed dir is specified-->
    <additional.3_3.test.source>not-needed-yet</additional.3_3.test.source>
    <additional.3_4.test.source>not-needed-yet</additional.3_4.test.source>
+   <shims.minorSource>spark-3.2</shims.minorSource>
</properties>
</profile>

@@ -512,6 +514,7 @@ under the License.
    <parquet.version>1.12.0</parquet.version>
    <additional.3_3.test.source>spark-3.3-plus</additional.3_3.test.source>
    <additional.3_4.test.source>not-needed-yet</additional.3_4.test.source>
+   <shims.minorSource>spark-3.3</shims.minorSource>
</properties>
</profile>

@@ -523,6 +526,7 @@ under the License.
    <parquet.version>1.13.1</parquet.version>
    <additional.3_3.test.source>spark-3.3-plus</additional.3_3.test.source>
    <additional.3_4.test.source>spark-3.4</additional.3_4.test.source>
+   <shims.minorSource>spark-3.4</shims.minorSource>
</properties>
</profile>

3 changes: 2 additions & 1 deletion spark/pom.xml
@@ -258,7 +258,8 @@ under the License.
            </goals>
            <configuration>
              <sources>
-               <source>src/main/${shims.source}</source>
+               <source>src/main/${shims.majorSource}</source>
+               <source>src/main/${shims.minorSource}</source>
              </sources>
            </configuration>
          </execution>
13 changes: 12 additions & 1 deletion spark/src/main/scala/org/apache/comet/serde/QueryPlanSerde.scala
@@ -45,12 +45,13 @@ import org.apache.comet.CometSparkSessionExtensions.{isCometOperatorEnabled, isC
import org.apache.comet.serde.ExprOuterClass.{AggExpr, DataType => ProtoDataType, Expr, ScalarFunc}
import org.apache.comet.serde.ExprOuterClass.DataType.{DataTypeInfo, DecimalInfo, ListInfo, MapInfo, StructInfo}
import org.apache.comet.serde.OperatorOuterClass.{AggregateMode => CometAggregateMode, JoinType, Operator}
+import org.apache.comet.shims.ShimCometExpr
import org.apache.comet.shims.ShimQueryPlanSerde

/**
 * An utility object for query plan and expression serialization.
 */
-object QueryPlanSerde extends Logging with ShimQueryPlanSerde {
+object QueryPlanSerde extends Logging with ShimQueryPlanSerde with ShimCometExpr {
  def emitWarning(reason: String): Unit = {
    logWarning(s"Comet native execution is disabled due to: $reason")
  }
@@ -1396,6 +1397,16 @@ object QueryPlanSerde extends Logging with ShimQueryPlanSerde {
        val optExpr = scalarExprToProto("atan2", leftExpr, rightExpr)
        optExprWithInfo(optExpr, expr, left, right)

+     case e: Unhex =>
+       val unHex = unhexSerde(e)
+
+       val childExpr = exprToProtoInternal(unHex._1, inputs)
+       val failOnErrorExpr = exprToProtoInternal(unHex._2, inputs)
+
+       val optExpr =
+         scalarExprToProtoWithReturnType("unhex", e.dataType, childExpr, failOnErrorExpr)
+       optExprWithInfo(optExpr, expr, unHex._1)

      case e @ Ceil(child) =>
        val childExpr = exprToProtoInternal(child, inputs)
        child.dataType match {
@@ -0,0 +1,30 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.comet.shims

import org.apache.spark.sql.catalyst.expressions._

/**
 * `ShimCometExpr` parses the `Unhex` expression assuming that the catalyst version is 3.2.x.
 */
trait ShimCometExpr {
  def unhexSerde(unhex: Unhex): (Expression, Expression) = {
    (unhex.child, Literal(false))
  }
}

@@ -0,0 +1,30 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.comet.shims

import org.apache.spark.sql.catalyst.expressions._

/**
 * `ShimCometExpr` parses the `Unhex` expression assuming that the catalyst version is 3.3.x.
 */
trait ShimCometExpr {
  def unhexSerde(unhex: Unhex): (Expression, Expression) = {
    (unhex.child, Literal(false))
  }
}

@@ -0,0 +1,30 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.comet.shims

import org.apache.spark.sql.catalyst.expressions._

/**
 * `ShimCometExpr` parses the `Unhex` expression assuming that the catalyst version is 3.4.x.
 */
trait ShimCometExpr {
  def unhexSerde(unhex: Unhex): (Expression, Expression) = {
    (unhex.child, Literal(unhex.failOnError))
  }
}
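
The three ShimCometExpr variants differ only in the second element of the returned tuple: the spark-3.2 and spark-3.3 shims hard-code Literal(false), presumably because Unhex carries no failOnError field before Spark 3.4, while the spark-3.4 shim forwards the expression's own flag.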