build: Run Spark SQL tests for 3.4 #166
New file (+64 lines): .github/actions/setup-spark-builder (composite action)
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

name: Setup Spark Builder
description: 'Setup Apache Spark to run SQL tests'
inputs:
  spark-short-version:
    description: 'The Apache Spark short version (e.g., 3.4) to build'
    required: true
    default: '3.4'
  spark-version:
    description: 'The Apache Spark version (e.g., 3.4.2) to build'
    required: true
    default: '3.4.2'
  comet-version:
    description: 'The Comet version to use for Spark'
    required: true
    default: '0.1.0-SNAPSHOT'
runs:
  using: "composite"
  steps:
    - name: Clone Spark repo
      uses: actions/checkout@v4
      with:
        repository: apache/spark
        path: apache-spark
        ref: v${{inputs.spark-version}}
        fetch-depth: 1

    - name: Setup Spark for Comet
      shell: bash
      run: |
        cd apache-spark
        git apply ../dev/diffs/${{inputs.spark-version}}.diff
        ../mvnw -nsu -q versions:set-property -Dproperty=comet.version -DnewVersion=${{inputs.comet-version}} -DgenerateBackupPoms=false

    - name: Cache Maven dependencies
      uses: actions/cache@v4
      with:
        path: |
          ~/.m2/repository
          /root/.m2/repository
        key: ${{ runner.os }}-java-maven-${{ hashFiles('**/pom.xml') }}
        restore-keys: |
          ${{ runner.os }}-java-maven-

    - name: Build Comet
      shell: bash
      run: |
        PROFILES="-Pspark-${{inputs.spark-short-version}}" make release
New file (+214 lines): Spark SQL Tests workflow
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

name: Spark SQL Tests

concurrency:
  group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }}
  cancel-in-progress: true

on:
  push:
    paths-ignore:
      - "doc/**"
      - "**.md"
  pull_request:
    paths-ignore:
      - "doc/**"
      - "**.md"
  # manual trigger
  # https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow
  workflow_dispatch:

env:
  RUST_VERSION: nightly

jobs:
  spark-sql-catalyst:
    strategy:
      matrix:
        os: [ubuntu-latest]
        java-version: [11]
        spark-version: [{short: '3.4', full: '3.4.2'}]
      fail-fast: false
    name: spark-sql-catalyst/${{ matrix.os }}/spark-${{ matrix.spark-version.full }}/java-${{ matrix.java-version }}
    runs-on: ${{ matrix.os }}
    container:
      image: amd64/rust
    steps:
      - uses: actions/checkout@v4
      - name: Setup Rust & Java toolchain
        uses: ./.github/actions/setup-builder
        with:
          rust-version: ${{env.RUST_VERSION}}
          jdk-version: ${{ matrix.java-version }}
      - name: Setup Spark
        uses: ./.github/actions/setup-spark-builder
        with:
          spark-version: ${{ matrix.spark-version.full }}
          spark-short-version: ${{ matrix.spark-version.short }}
          comet-version: '0.1.0-SNAPSHOT' # TODO: get this from pom.xml
      - name: Run Spark sql/catalyst tests
        run: |
          cd apache-spark
          ENABLE_COMET=true build/sbt catalyst/test
Review comment:
Looks like all the jobs are similar. I think we can define a new dimension in the matrix, such as:

matrix:
  os: [ubuntu-latest]
  java-version: [11]
  spark-version: [{short: '3.4', full: '3.4.2'}]
  spark-test-modules:
    - {name: "catalyst", sbt-options: "catalyst/test"}
    - ...

For the name part, I think we can remove the spark-sql prefix so the job names can be short.

Reply:
Let me give it a try.
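For illustration, a minimal sketch of what such a consolidated job could look like, assuming the spark-test-modules dimension proposed above; the job id, the second module entry, and the splicing of sbt-options into the run step are illustrative assumptions rather than anything in this PR:

jobs:
  spark-sql:
    strategy:
      matrix:
        os: [ubuntu-latest]
        java-version: [11]
        spark-version: [{short: '3.4', full: '3.4.2'}]
        spark-test-modules:
          # name feeds the job name; sbt-options feeds the test command
          - {name: "catalyst", sbt-options: "catalyst/test"}
          - {name: "sql-core-1", sbt-options: "sql/test -Dtest.exclude.tags=org.apache.spark.tags.ExtendedSQLTest,org.apache.spark.tags.SlowSQLTest"}
      fail-fast: false
    name: ${{ matrix.spark-test-modules.name }}/${{ matrix.os }}/spark-${{ matrix.spark-version.full }}/java-${{ matrix.java-version }}
    runs-on: ${{ matrix.os }}
    container:
      image: amd64/rust
    steps:
      - uses: actions/checkout@v4
      # ... same setup-builder and setup-spark-builder steps as in the jobs below ...
      - name: Run Spark ${{ matrix.spark-test-modules.name }} tests
        run: |
          cd apache-spark
          ENABLE_COMET=true build/sbt ${{ matrix.spark-test-modules.sbt-options }}

Each entry in spark-test-modules would then expand into one parallel job, keeping the per-job steps identical and the job names short, as the reviewer suggests.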

  spark-sql-core-1:
    strategy:
      matrix:
        os: [ubuntu-latest]
        java-version: [11]
        spark-version: [{short: '3.4', full: '3.4.2'}]
      fail-fast: false
    name: spark-sql-core-1/${{ matrix.os }}/spark-${{ matrix.spark-version.full }}/java-${{ matrix.java-version }}
    runs-on: ${{ matrix.os }}
    container:
      image: amd64/rust
    steps:
      - uses: actions/checkout@v4
      - name: Setup Rust & Java toolchain
        uses: ./.github/actions/setup-builder
        with:
          rust-version: ${{env.RUST_VERSION}}
          jdk-version: ${{ matrix.java-version }}
      - name: Setup Spark
        uses: ./.github/actions/setup-spark-builder
        with:
          spark-version: ${{ matrix.spark-version.full }}
          spark-short-version: ${{ matrix.spark-version.short }}
          comet-version: '0.1.0-SNAPSHOT' # TODO: get this from pom.xml
      - name: Run Spark sql/core-1 tests
        run: |
          cd apache-spark
          ENABLE_COMET=true build/sbt sql/test -Dtest.exclude.tags=org.apache.spark.tags.ExtendedSQLTest,org.apache.spark.tags.SlowSQLTest

  spark-sql-core-2:
    strategy:
      matrix:
        os: [ubuntu-latest]
        java-version: [11]
        spark-version: [{short: '3.4', full: '3.4.2'}]
      fail-fast: false
    name: spark-sql-core-2/${{ matrix.os }}/spark-${{ matrix.spark-version.full }}/java-${{ matrix.java-version }}
    runs-on: ${{ matrix.os }}
    container:
      image: amd64/rust
    steps:
      - uses: actions/checkout@v4
      - name: Setup Rust & Java toolchain
        uses: ./.github/actions/setup-builder
        with:
          rust-version: ${{env.RUST_VERSION}}
          jdk-version: ${{ matrix.java-version }}
      - name: Setup Spark
        uses: ./.github/actions/setup-spark-builder
        with:
          spark-version: ${{ matrix.spark-version.full }}
          spark-short-version: ${{ matrix.spark-version.short }}
          comet-version: '0.1.0-SNAPSHOT' # TODO: get this from pom.xml
      - name: Run Spark sql/core-2 tests
        run: |
          cd apache-spark
          ENABLE_COMET=true build/sbt "sql/testOnly *.SQLQueryTestSuite *.ExpressionsSchemaSuite *.ParquetV1FilterSuite *.ParquetV2FilterSuite *.ParquetV1SchemaPruningSuite *.ParquetV2SchemaPruningSuite org.apache.spark.sql.TPCDSQuery*"

  spark-sql-core-3:
    strategy:
      matrix:
        os: [ubuntu-latest]
        java-version: [11]
        spark-version: [{short: '3.4', full: '3.4.2'}]
      fail-fast: false
    name: spark-sql-core-3/${{ matrix.os }}/spark-${{ matrix.spark-version.full }}/java-${{ matrix.java-version }}
    runs-on: ${{ matrix.os }}
    container:
      image: amd64/rust
    steps:
      - uses: actions/checkout@v4
      - name: Setup Rust & Java toolchain
        uses: ./.github/actions/setup-builder
        with:
          rust-version: ${{env.RUST_VERSION}}
          jdk-version: ${{ matrix.java-version }}
      - name: Setup Spark
        uses: ./.github/actions/setup-spark-builder
        with:
          spark-version: ${{ matrix.spark-version.full }}
          spark-short-version: ${{ matrix.spark-version.short }}
          comet-version: '0.1.0-SNAPSHOT' # TODO: get this from pom.xml
      - name: Run Spark sql/core-3 tests
        run: |
          cd apache-spark
          ENABLE_COMET=true build/sbt "sql/testOnly * -- -n org.apache.spark.tags.SlowSQLTest"

  spark-sql-hive-1:
    strategy:
      matrix:
        os: [ubuntu-latest]
        java-version: [11]
        spark-version: [{short: '3.4', full: '3.4.2'}]
      fail-fast: false
    name: spark-sql-hive-1/${{ matrix.os }}/spark-${{ matrix.spark-version.full }}/java-${{ matrix.java-version }}
    runs-on: ${{ matrix.os }}
    container:
      image: amd64/rust
    steps:
      - uses: actions/checkout@v4
      - name: Setup Rust & Java toolchain
        uses: ./.github/actions/setup-builder
        with:
          rust-version: ${{env.RUST_VERSION}}
          jdk-version: ${{ matrix.java-version }}
      - name: Setup Spark
        uses: ./.github/actions/setup-spark-builder
        with:
          spark-version: ${{ matrix.spark-version.full }}
          spark-short-version: ${{ matrix.spark-version.short }}
          comet-version: '0.1.0-SNAPSHOT' # TODO: get this from pom.xml
      - name: Run Spark sql/hive-1 tests
        run: |
          cd apache-spark
          ENABLE_COMET=true build/sbt hive/test -Dtest.exclude.tags=org.apache.spark.tags.ExtendedHiveTest
Review comment:
hive-1 and hive-2's test times are unbalanced. I think we should also exclude tests with the org.apache.spark.tags.SlowHiveTest tag. I'm not sure how to balance the tests in sql-core-{1,2,3}, though.

Reply:
Ah, wasn't aware there is SlowHiveTest.
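If that suggestion were adopted, the hive-1 test step might look like the following sketch; it assumes the SlowHiveTest tag referenced above and is the reviewer's proposal, not a change in this PR:

cd apache-spark
ENABLE_COMET=true build/sbt hive/test \
  -Dtest.exclude.tags=org.apache.spark.tags.ExtendedHiveTest,org.apache.spark.tags.SlowHiveTest

The excluded slow tests could then be run in the hive-2 job to even out the two jobs' running times.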

  spark-sql-hive-2:
    strategy:
      matrix:
        os: [ubuntu-latest]
        java-version: [11]
        spark-version: [{short: '3.4', full: '3.4.2'}]
      fail-fast: false
    name: spark-sql-hive-2/${{ matrix.os }}/spark-${{ matrix.spark-version.full }}/java-${{ matrix.java-version }}
    runs-on: ${{ matrix.os }}
    container:
      image: amd64/rust
    steps:
      - uses: actions/checkout@v4
      - name: Setup Rust & Java toolchain
        uses: ./.github/actions/setup-builder
        with:
          rust-version: ${{env.RUST_VERSION}}
          jdk-version: ${{ matrix.java-version }}
      - name: Setup Spark
        uses: ./.github/actions/setup-spark-builder
        with:
          spark-version: ${{ matrix.spark-version.full }}
          spark-short-version: ${{ matrix.spark-version.short }}
          comet-version: '0.1.0-SNAPSHOT' # TODO: get this from pom.xml
      - name: Run Spark sql/hive-2 tests
        run: |
          cd apache-spark
          ENABLE_COMET=true build/sbt "hive/testOnly *.HiveSparkSubmitSuite *.VersionsSuite *.HiveDDLSuite *.HiveCatalogedDDLSuite *.HiveSerDeSuite *.HiveQuerySuite *.SQLQuerySuite"
Review comment:
Don't we need to add …?

Reply:
It seems not necessary. I verified that the pipeline does execute the Hive tests.

Reply:
Yea, I also looked at the pipeline and verified it locally.
Review comment:
I can help review this once it's ready.
On the surface, I am concerned about maintaining a 1k+ line patch to make this work; that would be problematic. Is there any successful example of a similar setup?
Is it possible to add the spark-sql test jar to the project and run tests directly against the test jar, with Comet enabled and some incompatible tests excluded? That setup simulates how end users use Comet.
Reply:
It seems difficult to use the test jar approach. Even if we were able to enable Comet for the Spark tests, we'd need to modify many of them, as shown in the diff.
On the other hand, the diff is tied to a particular Spark version like 3.4.x and rarely needs to be updated (from our experience). We only need to create a new diff for each new Spark release, which typically happens every 6 months to 1 year.
Reply:
I see. But once a modification is needed, it would be problematic to update the patches directly. If we are going to go with this approach, I'd like to propose some improvements to refine the maintenance process: host the patched Spark code in a dedicated branch (maybe sunchao/spark to start?) and generate the diffs from it with the git command. By hosting patches in a dedicated branch, I think we can track all the modifications in history.
Of course, we should include a README in dev/diffs about how the diffs are generated.
Reply:
Yes, I think it'd be useful to have a forked repo tracking the Comet changes to Spark. Maybe we can just use branches in this repo? We could also run tests through GitHub CI to validate the changes.
Alternatively, we could use my personal Spark fork, but it just doesn't seem like the ideal place (for instance, who should be able to update the repo?).
cc @viirya @kazuyukitanimura @alamb for more inputs.
Reply:
For updating the diff, I think we can just create a doc explaining how, or a small script to automate it. Basically, what we need to cover is …
Reply:
Yes, something like branches where spark-3.4.2 and spark-3.5.1 are Spark forks with the diff applied. We will need to keep the branches updated, since sometimes Comet will introduce breaking changes that require Spark-side changes. Compared to a personal repo, it is easier to maintain.
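As a rough sketch of how a versioned diff could be regenerated from such a branch (the remote and branch names here are assumptions; only the dev/diffs/<spark-version>.diff layout comes from the setup-spark-builder action above):

# Run inside a Comet checkout so that ../dev/diffs resolves to this repo.
git clone https://github.com/apache/spark.git apache-spark
cd apache-spark
# Hypothetical fork carrying the patched spark-3.4.2 branch; <owner> is a placeholder.
git remote add comet-fork https://github.com/<owner>/spark.git
git fetch comet-fork spark-3.4.2
# Diff the released tag against the patched branch into the expected layout.
git diff v3.4.2 comet-fork/spark-3.4.2 > ../dev/diffs/3.4.2.diff

A small script or README along these lines in dev/diffs would make the regeneration process reproducible, as suggested above.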
Reply:
If we only test with released Spark versions, I think the diff basically never needs to change at all (as the Spark code is not changing), unless something in Comet requires updating the diff. It makes me wonder if we need the whole Spark codebase just for the diff. 🤔
Reply:
They may need to be updated when Comet introduces changes (for instance, an extra parameter for CometBatchScanExec) that require Spark-side changes. One advantage of having the branches is that we are able to track the history of all these changes. IMO it is something good to have, but not essential.
Reply:
If it's allowed, then it would be ideal.
Reply:
I believe having a fork of the Spark code (rather than a diff that is applied to a local checkout) would be easier to understand and maintain over the long run.
I think the key would be to make sure that what is going on with the branches is well documented (especially the rationale).