From ebb8bcdd6dd78ba3f5a5324a72d44eb23d6bbf37 Mon Sep 17 00:00:00 2001 From: Andy Grove Date: Mon, 29 Apr 2024 10:10:33 -0600 Subject: [PATCH 1/7] initial config doc --- docs/source/index.rst | 1 + docs/source/user-guide/configs.md | 58 +++++++++++++++++++++++++++++++ 2 files changed, 59 insertions(+) create mode 100644 docs/source/user-guide/configs.md diff --git a/docs/source/index.rst b/docs/source/index.rst index 4462a8d87..c428a7497 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -41,6 +41,7 @@ as a native runtime to achieve improvement in terms of query efficiency and quer :caption: User Guide Supported Expressions + user-guide/configs user-guide/compatibility .. _toc.links: diff --git a/docs/source/user-guide/configs.md b/docs/source/user-guide/configs.md new file mode 100644 index 000000000..f330e2aed --- /dev/null +++ b/docs/source/user-guide/configs.md @@ -0,0 +1,58 @@ + + +# Comet Configuration Guide + + +| Config | Description | Default Value | +|--------|-------------|---------------| +| spark.comet.ansi.enabled | Comet does not respect ANSI mode in most cases and by default will not accelerate queries when ansi mode is enabled. Enable this setting to test Comet's experimental support for ANSI mode. This should not be used in production. | false | +| spark.comet.batchSize | The columnar batch size, i.e., the maximum number of rows that a batch can contain. | 8192 | +| spark.comet.cast.stringToTimestamp | Comet is not currently fully compatible with Spark when casting from String to Timestamp. | false | +| spark.comet.columnar.shuffle.async.enabled | Whether to enable asynchronous shuffle for Arrow-based shuffle. By default, this config is false. | false | +| spark.comet.columnar.shuffle.async.max.thread.num | Maximum number of threads on an executor used for Comet async columnar shuffle. By default, this config is 100. This is the upper bound of total number of shuffle threads per executor. In other words, if the number of cores * the number of shuffle threads per task `spark.comet.columnar.shuffle.async.thread.num` is larger than this config. Comet will use this config as the number of shuffle threads per executor instead. | 100 | +| spark.comet.columnar.shuffle.async.thread.num | Number of threads used for Comet async columnar shuffle per shuffle task. By default, this config is 3. Note that more threads means more memory requirement to buffer shuffle data before flushing to disk. Also, more threads may not always improve performance, and should be set based on the number of cores available. | 3 | +| spark.comet.columnar.shuffle.batch.size | Batch size when writing out sorted spill files on the native side. Note that this should not be larger than batch size (i.e., `spark.comet.batchSize`). Otherwise it will produce larger batches than expected in the native operator after shuffle. | 8192 | +| spark.comet.columnar.shuffle.enabled | Force Comet to only use columnar shuffle for CometScan and Spark regular operators. If this is enabled, Comet native shuffle will not be enabled but only Arrow shuffle. By default, this config is false. | false | +| spark.comet.columnar.shuffle.memory.factor | Fraction of Comet memory to be allocated per executor process for Comet shuffle. Comet memory size is specified by `spark.comet.memoryOverhead` or calculated by `spark.comet.memory.overhead.factor` * `spark.executor.memory`. By default, this config is 1.0. | 1.0 | +| spark.comet.columnar.shuffle.spill.threshold | Number of rows to be spilled used for Comet columnar shuffle. 
For every configured number of rows, a new spill file will be created. Higher value means more memory requirement to buffer shuffle data before flushing to disk. As Comet uses columnar shuffle which is columnar format, higher value usually helps to improve shuffle data compression ratio. This is internal config for testing purpose or advanced tuning. By default, this config is Int.Max. | 2147483647 | +| spark.comet.debug.enabled | Whether to enable debug mode for Comet. By default, this config is false. When enabled, Comet will do additional checks for debugging purpose. For example, validating array when importing arrays from JVM at native side. Note that these checks may be expensive in performance and should only be enabled for debugging purpose. | false | +| spark.comet.enabled | Whether to enable Comet extension for Spark. When this is turned on, Spark will use Comet to read Parquet data source. Note that to enable native vectorized execution, both this config and 'spark.comet.exec.enabled' need to be enabled. By default, this config is the value of the env var `ENABLE_COMET` if set, or true otherwise. | true | +| spark.comet.exceptionOnDatetimeRebase | Whether to throw exception when seeing dates/timestamps from the legacy hybrid (Julian + Gregorian) calendar. Since Spark 3, dates/timestamps were written according to the Proleptic Gregorian calendar. When this is true, Comet will throw exceptions when seeing these dates/timestamps that were written by Spark version before 3.0. If this is false, these dates/timestamps will be read as if they were written to the Proleptic Gregorian calendar and will not be rebased. | false | +| spark.comet.exec.all.enabled | Whether to enable all Comet operators. By default, this config is false. Note that this config precedes all separate config 'spark.comet.exec..enabled'. That being said, if this config is enabled, separate configs are ignored. | false | +| spark.comet.exec.all.expr.enabled | Whether to enable all Comet exprs. By default, this config is false. Note that this config precedes all separate config 'spark.comet.exec..enabled'. That being said, if this config is enabled, separate configs are ignored. | false | +| spark.comet.exec.broadcast.enabled | Whether to force enabling broadcasting for Comet native operators. By default, this config is false. Comet broadcast feature will be enabled automatically by Comet extension. But for unit tests, we need this feature to force enabling it for invalid cases. So this config is only used for unit test. | false | +| spark.comet.exec.enabled | Whether to enable Comet native vectorized execution for Spark. This controls whether Spark should convert operators into their Comet counterparts and execute them in native space. Note: each operator is associated with a separate config in the format of 'spark.comet.exec..enabled' at the moment, and both the config and this need to be turned on, in order for the operator to be executed in native. By default, this config is false. | false | +| spark.comet.exec.memoryFraction | The fraction of memory from Comet memory overhead that the native memory manager can use for execution. The purpose of this config is to set aside memory for untracked data structures, as well as imprecise size estimation during memory acquisition. Default value is 0.7. | 0.7 | +| spark.comet.exec.shuffle.codec | The codec of Comet native shuffle used to compress shuffle data. Only zstd is supported. | zstd | +| spark.comet.exec.shuffle.enabled | Whether to enable Comet native shuffle. 
By default, this config is false. Note that this requires setting 'spark.shuffle.manager' to 'org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager'. 'spark.shuffle.manager' must be set before starting the Spark application and cannot be changed during the application. | false | +| spark.comet.memory.overhead.factor | Fraction of executor memory to be allocated as additional non-heap memory per executor process for Comet. Default value is 0.2. | 0.2 | +| spark.comet.memory.overhead.min | Minimum amount of additional memory to be allocated per executor process for Comet, in MiB. | 402653184b | +| spark.comet.nativeLoadRequired | Whether to require Comet native library to load successfully when Comet is enabled. If not, Comet will silently fallback to Spark when it fails to load the native lib. Otherwise, an error will be thrown and the Spark job will be aborted. | false | +| spark.comet.parquet.enable.directBuffer | Whether to use Java direct byte buffer when reading Parquet. By default, this is false | false | +| spark.comet.rowToColumnar.enabled | Whether to enable row to columnar conversion in Comet. When this is turned on, Comet will convert row-based operators in `spark.comet.rowToColumnar.supportedOperatorList` into columnar based before processing. | false | +| spark.comet.rowToColumnar.supportedOperatorList | A comma-separated list of row-based operators that will be converted to columnar format when 'spark.comet.rowToColumnar.enabled' is true | Range,InMemoryTableScan | +| spark.comet.scan.enabled | Whether to enable Comet scan. When this is turned on, Spark will use Comet to read Parquet data source. Note that to enable native vectorized execution, both this config and 'spark.comet.exec.enabled' need to be enabled. By default, this config is true. | true | +| spark.comet.scan.preFetch.enabled | Whether to enable pre-fetching feature of CometScan. By default is disabled. | false | +| spark.comet.scan.preFetch.threadNum | The number of threads running pre-fetching for CometScan. Effective if spark.comet.scan.preFetch.enabled is enabled. By default it is 2. Note that more pre-fetching threads means more memory requirement to store pre-fetched row groups. | 2 | +| spark.comet.schemaEvolution.enabled | Whether to enable schema evolution in Comet. For instance, promoting a integer column to a long column, a float column to a double column, etc. This is automaticallyenabled when reading from Iceberg tables. | false | +| spark.comet.shuffle.preferDictionary.ratio | The ratio of total values to distinct values in a string column to decide whether to prefer dictionary encoding when shuffling the column. If the ratio is higher than this config, dictionary encoding will be used on shuffling string column. This config is effective if it is higher than 1.0. By default, this config is 10.0. Note that this config is only used when 'spark.comet.columnar.shuffle.enabled' is true. | 10.0 | +| spark.comet.use.decimal128 | If true, Comet will always use 128 bits to represent a decimal value, regardless of its precision. If false, Comet will use 32, 64 and 128 bits respectively depending on the precision. N.B. this is NOT a user-facing config but should be inferred and set by Comet itself. | false | +| spark.comet.use.lazyMaterialization | Whether to enable lazy materialization for Comet. When this is turned on, Comet will read Parquet data source lazily for string and binary columns. For filter operations, lazy materialization will improve read performance by skipping unused pages. 
| true | + \ No newline at end of file From 9f510247c85ed423e13c444d99572cdf7cacee2b Mon Sep 17 00:00:00 2001 From: Andy Grove Date: Mon, 29 Apr 2024 10:20:57 -0600 Subject: [PATCH 2/7] Generate configuration guide as part of mvn package --- .../scala/org/apache/comet/CometConf.scala | 53 +++++++++++++++++-- docs/source/user-guide/configs-template.md | 24 +++++++++ docs/source/user-guide/configs.md | 36 ++++++------- spark/pom.xml | 22 ++++++++ 4 files changed, 112 insertions(+), 23 deletions(-) create mode 100644 docs/source/user-guide/configs-template.md diff --git a/common/src/main/scala/org/apache/comet/CometConf.scala b/common/src/main/scala/org/apache/comet/CometConf.scala index b24595313..539c154b9 100644 --- a/common/src/main/scala/org/apache/comet/CometConf.scala +++ b/common/src/main/scala/org/apache/comet/CometConf.scala @@ -19,8 +19,12 @@ package org.apache.comet +import java.io.{BufferedOutputStream, FileOutputStream} import java.util.concurrent.TimeUnit +import scala.collection.mutable.ListBuffer +import scala.io.Source + import org.apache.spark.network.util.ByteUnit import org.apache.spark.network.util.JavaUtils import org.apache.spark.sql.comet.util.Utils @@ -39,6 +43,14 @@ import org.apache.spark.sql.internal.SQLConf * can also explicitly pass a [[SQLConf]] object to the `get` method. */ object CometConf { + + /** List of all configs that is used for generating documentation */ + val allConfs = new ListBuffer[ConfigEntry[_]] + + def register(conf: ConfigEntryWithDefault[_]): Unit = { + allConfs.append(conf) + } + def conf(key: String): ConfigBuilder = ConfigBuilder(key) val COMET_EXEC_CONFIG_PREFIX = "spark.comet.exec"; @@ -341,10 +353,9 @@ object CometConf { val COMET_ROW_TO_COLUMNAR_ENABLED: ConfigEntry[Boolean] = conf("spark.comet.rowToColumnar.enabled") .internal() - .doc(""" - |Whether to enable row to columnar conversion in Comet. When this is turned on, Comet will - |convert row-based operators in `spark.comet.rowToColumnar.supportedOperatorList` into - |columnar based before processing.""".stripMargin) + .doc("Whether to enable row to columnar conversion in Comet. When this is turned on, " + + "Comet will convert row-based operators in " + + "`spark.comet.rowToColumnar.supportedOperatorList` into columnar based before processing.") .booleanConf .createWithDefault(false) @@ -475,7 +486,7 @@ private class TypedConfigBuilder[T]( /** Creates a [[ConfigEntry]] that has a default value. */ def createWithDefault(default: T): ConfigEntry[T] = { val transformedDefault = converter(stringConverter(default)) - new ConfigEntryWithDefault[T]( + val conf = new ConfigEntryWithDefault[T]( parent.key, transformedDefault, converter, @@ -483,6 +494,8 @@ private class TypedConfigBuilder[T]( parent._doc, parent._public, parent._version) + CometConf.register(conf) + conf } } @@ -612,3 +625,33 @@ private[comet] case class ConfigBuilder(key: String) { private object ConfigEntry { val UNDEFINED = "" } + +/** + * Utility for generating markdown documentation from the configs. 
+ */ +object CometConfGenerateDocs { + def main(args: Array[String]): Unit = { + if (args.length != 2) { + // scalastyle:off println + println("Missing arguments for template file and output file") + // scalastyle:on println + sys.exit(-1) + } + val templateFilename = args.head + val outputFilename = args(1) + val w = new BufferedOutputStream(new FileOutputStream(outputFilename)) + for (line <- Source.fromFile(templateFilename).getLines()) { + if (line.trim == "") { + val confs = CometConf.allConfs.sortBy(_.key) + w.write(s"| Config | Description | Default Value |\n".getBytes) + w.write(s"|--------|-------------|---------------|\n".getBytes) + for (conf <- confs) { + w.write(s"| ${conf.key} | ${conf.doc.trim} | ${conf.defaultValueString} |\n".getBytes) + } + } else { + w.write(s"${line.trim}\n".getBytes) + } + } + w.close() + } +} diff --git a/docs/source/user-guide/configs-template.md b/docs/source/user-guide/configs-template.md new file mode 100644 index 000000000..f5c15b696 --- /dev/null +++ b/docs/source/user-guide/configs-template.md @@ -0,0 +1,24 @@ + + +# Comet Configuration Settings + +Comet provides the following configuration settings. + + diff --git a/docs/source/user-guide/configs.md b/docs/source/user-guide/configs.md index f330e2aed..bb62db5bc 100644 --- a/docs/source/user-guide/configs.md +++ b/docs/source/user-guide/configs.md @@ -1,25 +1,26 @@ -# Comet Configuration Guide +# Comet Configuration Settings + +Comet provides the following configuration settings. - | Config | Description | Default Value | |--------|-------------|---------------| | spark.comet.ansi.enabled | Comet does not respect ANSI mode in most cases and by default will not accelerate queries when ansi mode is enabled. Enable this setting to test Comet's experimental support for ANSI mode. This should not be used in production. | false | @@ -46,7 +47,7 @@ | spark.comet.memory.overhead.min | Minimum amount of additional memory to be allocated per executor process for Comet, in MiB. | 402653184b | | spark.comet.nativeLoadRequired | Whether to require Comet native library to load successfully when Comet is enabled. If not, Comet will silently fallback to Spark when it fails to load the native lib. Otherwise, an error will be thrown and the Spark job will be aborted. | false | | spark.comet.parquet.enable.directBuffer | Whether to use Java direct byte buffer when reading Parquet. By default, this is false | false | -| spark.comet.rowToColumnar.enabled | Whether to enable row to columnar conversion in Comet. When this is turned on, Comet will convert row-based operators in `spark.comet.rowToColumnar.supportedOperatorList` into columnar based before processing. | false | +| spark.comet.rowToColumnar.enabled | Whether to enable row to columnar conversion in Comet. When this is turned on, Comet will convert row-based operators in `spark.comet.rowToColumnar.supportedOperatorList` into columnar based before processing. | false | | spark.comet.rowToColumnar.supportedOperatorList | A comma-separated list of row-based operators that will be converted to columnar format when 'spark.comet.rowToColumnar.enabled' is true | Range,InMemoryTableScan | | spark.comet.scan.enabled | Whether to enable Comet scan. When this is turned on, Spark will use Comet to read Parquet data source. Note that to enable native vectorized execution, both this config and 'spark.comet.exec.enabled' need to be enabled. By default, this config is true. | true | | spark.comet.scan.preFetch.enabled | Whether to enable pre-fetching feature of CometScan. 
By default is disabled. | false | @@ -55,4 +56,3 @@ | spark.comet.shuffle.preferDictionary.ratio | The ratio of total values to distinct values in a string column to decide whether to prefer dictionary encoding when shuffling the column. If the ratio is higher than this config, dictionary encoding will be used on shuffling string column. This config is effective if it is higher than 1.0. By default, this config is 10.0. Note that this config is only used when 'spark.comet.columnar.shuffle.enabled' is true. | 10.0 | | spark.comet.use.decimal128 | If true, Comet will always use 128 bits to represent a decimal value, regardless of its precision. If false, Comet will use 32, 64 and 128 bits respectively depending on the precision. N.B. this is NOT a user-facing config but should be inferred and set by Comet itself. | false | | spark.comet.use.lazyMaterialization | Whether to enable lazy materialization for Comet. When this is turned on, Comet will read Parquet data source lazily for string and binary columns. For filter operations, lazy materialization will improve read performance by skipping unused pages. | true | - \ No newline at end of file diff --git a/spark/pom.xml b/spark/pom.xml index 66ff82909..9392b7fe9 100644 --- a/spark/pom.xml +++ b/spark/pom.xml @@ -264,6 +264,28 @@ under the License. + + org.codehaus.mojo + exec-maven-plugin + 3.2.0 + + + generate-config-docs + package + + java + + + org.apache.comet.CometConfGenerateDocs + + docs/source/user-guide/configs-template.md + docs/source/user-guide/configs.md + + compile + + + + From 57511962e3d49dbba2b23a17d1032e178539fd3d Mon Sep 17 00:00:00 2001 From: Andy Grove Date: Mon, 29 Apr 2024 10:28:58 -0600 Subject: [PATCH 3/7] formatting --- docs/source/index.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/index.rst b/docs/source/index.rst index c428a7497..3cb6bdfba 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -41,7 +41,7 @@ as a native runtime to achieve improvement in terms of query efficiency and quer :caption: User Guide Supported Expressions - user-guide/configs + Configuration Settings user-guide/compatibility .. 
_toc.links: From 93e5fb55f86c66ce67a55d717e32aefd7733e396 Mon Sep 17 00:00:00 2001 From: Andy Grove Date: Mon, 29 Apr 2024 12:18:34 -0600 Subject: [PATCH 4/7] scalafix --- common/src/main/scala/org/apache/comet/CometConf.scala | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/common/src/main/scala/org/apache/comet/CometConf.scala b/common/src/main/scala/org/apache/comet/CometConf.scala index 539c154b9..a7bb9a812 100644 --- a/common/src/main/scala/org/apache/comet/CometConf.scala +++ b/common/src/main/scala/org/apache/comet/CometConf.scala @@ -643,8 +643,8 @@ object CometConfGenerateDocs { for (line <- Source.fromFile(templateFilename).getLines()) { if (line.trim == "") { val confs = CometConf.allConfs.sortBy(_.key) - w.write(s"| Config | Description | Default Value |\n".getBytes) - w.write(s"|--------|-------------|---------------|\n".getBytes) + w.write("| Config | Description | Default Value |\n".getBytes) + w.write("|--------|-------------|---------------|\n".getBytes) for (conf <- confs) { w.write(s"| ${conf.key} | ${conf.doc.trim} | ${conf.defaultValueString} |\n".getBytes) } From a2a32a5a61e7fdc212a2ec80ea879c5d8c7c4f82 Mon Sep 17 00:00:00 2001 From: Andy Grove Date: Mon, 29 Apr 2024 13:59:22 -0600 Subject: [PATCH 5/7] add maven usage to comment --- common/src/main/scala/org/apache/comet/CometConf.scala | 2 ++ 1 file changed, 2 insertions(+) diff --git a/common/src/main/scala/org/apache/comet/CometConf.scala b/common/src/main/scala/org/apache/comet/CometConf.scala index a7bb9a812..cc5ee9b4f 100644 --- a/common/src/main/scala/org/apache/comet/CometConf.scala +++ b/common/src/main/scala/org/apache/comet/CometConf.scala @@ -628,6 +628,8 @@ private object ConfigEntry { /** * Utility for generating markdown documentation from the configs. + * + * This is invoked when running `mvn clean package -DskipTests`. */ object CometConfGenerateDocs { def main(args: Array[String]): Unit = { From ad4eca3af6004d3bc5b6674450a92621a97c1312 Mon Sep 17 00:00:00 2001 From: Andy Grove Date: Mon, 29 Apr 2024 16:56:45 -0600 Subject: [PATCH 6/7] do not publish internal configs --- common/src/main/scala/org/apache/comet/CometConf.scala | 4 +++- docs/source/user-guide/configs.md | 6 ------ 2 files changed, 3 insertions(+), 7 deletions(-) diff --git a/common/src/main/scala/org/apache/comet/CometConf.scala b/common/src/main/scala/org/apache/comet/CometConf.scala index cc5ee9b4f..3c146a990 100644 --- a/common/src/main/scala/org/apache/comet/CometConf.scala +++ b/common/src/main/scala/org/apache/comet/CometConf.scala @@ -48,7 +48,9 @@ object CometConf { val allConfs = new ListBuffer[ConfigEntry[_]] def register(conf: ConfigEntryWithDefault[_]): Unit = { - allConfs.append(conf) + if (conf.isPublic) { + allConfs.append(conf) + } } def conf(key: String): ConfigBuilder = ConfigBuilder(key) diff --git a/docs/source/user-guide/configs.md b/docs/source/user-guide/configs.md index bb62db5bc..3a16cd47d 100644 --- a/docs/source/user-guide/configs.md +++ b/docs/source/user-guide/configs.md @@ -29,10 +29,8 @@ Comet provides the following configuration settings. | spark.comet.columnar.shuffle.async.enabled | Whether to enable asynchronous shuffle for Arrow-based shuffle. By default, this config is false. | false | | spark.comet.columnar.shuffle.async.max.thread.num | Maximum number of threads on an executor used for Comet async columnar shuffle. By default, this config is 100. This is the upper bound of total number of shuffle threads per executor. 
In other words, if the number of cores * the number of shuffle threads per task `spark.comet.columnar.shuffle.async.thread.num` is larger than this config. Comet will use this config as the number of shuffle threads per executor instead. | 100 | | spark.comet.columnar.shuffle.async.thread.num | Number of threads used for Comet async columnar shuffle per shuffle task. By default, this config is 3. Note that more threads means more memory requirement to buffer shuffle data before flushing to disk. Also, more threads may not always improve performance, and should be set based on the number of cores available. | 3 | -| spark.comet.columnar.shuffle.batch.size | Batch size when writing out sorted spill files on the native side. Note that this should not be larger than batch size (i.e., `spark.comet.batchSize`). Otherwise it will produce larger batches than expected in the native operator after shuffle. | 8192 | | spark.comet.columnar.shuffle.enabled | Force Comet to only use columnar shuffle for CometScan and Spark regular operators. If this is enabled, Comet native shuffle will not be enabled but only Arrow shuffle. By default, this config is false. | false | | spark.comet.columnar.shuffle.memory.factor | Fraction of Comet memory to be allocated per executor process for Comet shuffle. Comet memory size is specified by `spark.comet.memoryOverhead` or calculated by `spark.comet.memory.overhead.factor` * `spark.executor.memory`. By default, this config is 1.0. | 1.0 | -| spark.comet.columnar.shuffle.spill.threshold | Number of rows to be spilled used for Comet columnar shuffle. For every configured number of rows, a new spill file will be created. Higher value means more memory requirement to buffer shuffle data before flushing to disk. As Comet uses columnar shuffle which is columnar format, higher value usually helps to improve shuffle data compression ratio. This is internal config for testing purpose or advanced tuning. By default, this config is Int.Max. | 2147483647 | | spark.comet.debug.enabled | Whether to enable debug mode for Comet. By default, this config is false. When enabled, Comet will do additional checks for debugging purpose. For example, validating array when importing arrays from JVM at native side. Note that these checks may be expensive in performance and should only be enabled for debugging purpose. | false | | spark.comet.enabled | Whether to enable Comet extension for Spark. When this is turned on, Spark will use Comet to read Parquet data source. Note that to enable native vectorized execution, both this config and 'spark.comet.exec.enabled' need to be enabled. By default, this config is the value of the env var `ENABLE_COMET` if set, or true otherwise. | true | | spark.comet.exceptionOnDatetimeRebase | Whether to throw exception when seeing dates/timestamps from the legacy hybrid (Julian + Gregorian) calendar. Since Spark 3, dates/timestamps were written according to the Proleptic Gregorian calendar. When this is true, Comet will throw exceptions when seeing these dates/timestamps that were written by Spark version before 3.0. If this is false, these dates/timestamps will be read as if they were written to the Proleptic Gregorian calendar and will not be rebased. | false | @@ -47,12 +45,8 @@ Comet provides the following configuration settings. | spark.comet.memory.overhead.min | Minimum amount of additional memory to be allocated per executor process for Comet, in MiB. 
| 402653184b | | spark.comet.nativeLoadRequired | Whether to require Comet native library to load successfully when Comet is enabled. If not, Comet will silently fallback to Spark when it fails to load the native lib. Otherwise, an error will be thrown and the Spark job will be aborted. | false | | spark.comet.parquet.enable.directBuffer | Whether to use Java direct byte buffer when reading Parquet. By default, this is false | false | -| spark.comet.rowToColumnar.enabled | Whether to enable row to columnar conversion in Comet. When this is turned on, Comet will convert row-based operators in `spark.comet.rowToColumnar.supportedOperatorList` into columnar based before processing. | false | | spark.comet.rowToColumnar.supportedOperatorList | A comma-separated list of row-based operators that will be converted to columnar format when 'spark.comet.rowToColumnar.enabled' is true | Range,InMemoryTableScan | | spark.comet.scan.enabled | Whether to enable Comet scan. When this is turned on, Spark will use Comet to read Parquet data source. Note that to enable native vectorized execution, both this config and 'spark.comet.exec.enabled' need to be enabled. By default, this config is true. | true | | spark.comet.scan.preFetch.enabled | Whether to enable pre-fetching feature of CometScan. By default is disabled. | false | | spark.comet.scan.preFetch.threadNum | The number of threads running pre-fetching for CometScan. Effective if spark.comet.scan.preFetch.enabled is enabled. By default it is 2. Note that more pre-fetching threads means more memory requirement to store pre-fetched row groups. | 2 | -| spark.comet.schemaEvolution.enabled | Whether to enable schema evolution in Comet. For instance, promoting a integer column to a long column, a float column to a double column, etc. This is automaticallyenabled when reading from Iceberg tables. | false | | spark.comet.shuffle.preferDictionary.ratio | The ratio of total values to distinct values in a string column to decide whether to prefer dictionary encoding when shuffling the column. If the ratio is higher than this config, dictionary encoding will be used on shuffling string column. This config is effective if it is higher than 1.0. By default, this config is 10.0. Note that this config is only used when 'spark.comet.columnar.shuffle.enabled' is true. | 10.0 | -| spark.comet.use.decimal128 | If true, Comet will always use 128 bits to represent a decimal value, regardless of its precision. If false, Comet will use 32, 64 and 128 bits respectively depending on the precision. N.B. this is NOT a user-facing config but should be inferred and set by Comet itself. | false | -| spark.comet.use.lazyMaterialization | Whether to enable lazy materialization for Comet. When this is turned on, Comet will read Parquet data source lazily for string and binary columns. For filter operations, lazy materialization will improve read performance by skipping unused pages. 
| true | From 64bab63b75ad05aec8d5c101f1a92eb34ded3db3 Mon Sep 17 00:00:00 2001 From: Andy Grove Date: Mon, 29 Apr 2024 16:58:38 -0600 Subject: [PATCH 7/7] improve check for public configs --- common/src/main/scala/org/apache/comet/CometConf.scala | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/common/src/main/scala/org/apache/comet/CometConf.scala b/common/src/main/scala/org/apache/comet/CometConf.scala index 3c146a990..ca4bf4709 100644 --- a/common/src/main/scala/org/apache/comet/CometConf.scala +++ b/common/src/main/scala/org/apache/comet/CometConf.scala @@ -48,9 +48,7 @@ object CometConf { val allConfs = new ListBuffer[ConfigEntry[_]] def register(conf: ConfigEntryWithDefault[_]): Unit = { - if (conf.isPublic) { - allConfs.append(conf) - } + allConfs.append(conf) } def conf(key: String): ConfigBuilder = ConfigBuilder(key) @@ -646,7 +644,8 @@ object CometConfGenerateDocs { val w = new BufferedOutputStream(new FileOutputStream(outputFilename)) for (line <- Source.fromFile(templateFilename).getLines()) { if (line.trim == "") { - val confs = CometConf.allConfs.sortBy(_.key) + val publicConfigs = CometConf.allConfs.filter(_.isPublic) + val confs = publicConfigs.sortBy(_.key) w.write("| Config | Description | Default Value |\n".getBytes) w.write("|--------|-------------|---------------|\n".getBytes) for (conf <- confs) {
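
The patches above only define and document the settings; as a usage sketch (not taken from these patches), the documented keys would typically be supplied when building a Spark session. The example below uses only config keys and class names that appear in the generated `configs.md`, except for the `spark.sql.extensions` entry point, which is an assumption about how Comet is wired into Spark and should be checked against the Comet version in use.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of enabling Comet scan, native execution, and native shuffle
// using the settings documented in configs.md. The extension class name below
// is an assumption, not something stated in these patches.
object CometConfigExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .appName("comet-config-example")
      // assumed Comet entry point; verify against the installed Comet release
      .config("spark.sql.extensions", "org.apache.comet.CometSparkSessionExtensions")
      .config("spark.comet.enabled", "true")
      .config("spark.comet.exec.enabled", "true")
      .config("spark.comet.exec.all.enabled", "true")
      // native shuffle requires the Comet shuffle manager, which must be set
      // before the application starts and cannot be changed afterwards
      .config("spark.comet.exec.shuffle.enabled", "true")
      .config(
        "spark.shuffle.manager",
        "org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager")
      .getOrCreate()

    // spark.comet.scan.enabled defaults to true, so Parquet reads go through Comet
    spark.read.parquet("/path/to/data.parquet").show()
    spark.stop()
  }
}
```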