
ci: add additional verification and tweak concurrent users in performance testing #2425

7 changes: 3 additions & 4 deletions .github/actions/performance-tests/art.yaml
@@ -1,8 +1,8 @@
 config:
-  target: "http://localhost:9545"
+  target: "-=http_request_protocol=-://-=endpoint_to_test=-"
   phases:
-    - duration: 60
-      arrivalRate: 10
+    - duration: 300
+      arrivalRate: 400
   defaults:
     headers:
       content-type: "application/json"
@@ -19,7 +19,6 @@ config:
     - type: "html"
       filename: "artillery_report.html"
-  logLevel: debug
 
 scenarios:
   - name: web3_clientVersion
     flow:
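The two placeholders introduced above are rewritten by the workflow before Artillery runs. A minimal standalone sketch of that substitution, where `https://evm.example.com` is a hypothetical endpoint used only for illustration:

```bash
#!/usr/bin/env bash
# Sketch of the placeholder substitution the CONFIGURE:PERFORMANCE:TEMPLATE step performs.
endpoint="https://evm.example.com"   # hypothetical endpoint, not a real test target

# Split the endpoint into protocol and host, mirroring what the workflow derives.
http_request_protocol="${endpoint%%://*}"   # -> https
endpoint_to_test="${endpoint#*://}"         # -> evm.example.com

# Fill the placeholders, yielding: target: "https://evm.example.com"
sed -i "s/-=endpoint_to_test=-/${endpoint_to_test}/g" .github/actions/performance-tests/art.yaml
sed -i "s/-=http_request_protocol=-/${http_request_protocol}/g" .github/actions/performance-tests/art.yaml
```

The workflow itself reaches the same result with an explicit https://-vs-http:// check and per-branch protocol stripping; the parameter expansions above just condense that logic.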
77 changes: 75 additions & 2 deletions .github/workflows/ci-nightly-performance-testing.yaml
@@ -2,13 +2,20 @@ name: "NIGHTLY:EVM:PERFORMANCE:TESTING"

 on:
   workflow_dispatch:
+    inputs:
+      endpoint:
+        description: 'The EVM endpoint you want to test.'
+        required: false
+        default: 'http://localhost:9545'
   schedule:
     - cron: '0 0 * * *' # Runs every day at midnight UTC
 
 jobs:
   nightly_evm_api_performance_test:
     name: "NIGHTLY:EVM:API:PERFORMANCE:TEST"
     runs-on: "buildjet-16vcpu-ubuntu-2204"
+    env:
+      endpoint_to_test: "http://localhost:9545"
     steps:
       - uses: actions/checkout@v4
 
@@ -25,16 +32,62 @@ jobs:
         run: |
           npm install -g artillery@latest
 
+      - name: "CONFIGURE:PERFORMANCE:TEMPLATE"
+        run: |
+          # Check if the inputs.endpoint starts with https:// or http://
+          if [[ "${{inputs.endpoint}}" =~ ^https:// ]]; then
+            # Set the http_request_protocol to https if the URL starts with https://
+            export http_request_protocol=https
+          elif [[ "${{inputs.endpoint}}" =~ ^http:// ]]; then
+            # Set the http_request_protocol to http if the URL starts with http://
+            export http_request_protocol=http
+          else
+            # If the URL doesn't start with either http:// or https://, print an error and exit
+            echo "Please input a url with either https:// or http://" && exit 1
+          fi
+
+          # Check if inputs.endpoint is different from the default "http://localhost:9545"
+          if [ "${{inputs.endpoint}}" != "http://localhost:9545" ]; then
+            # Check if inputs.endpoint is not empty
+            if [ -n "${{inputs.endpoint}}" ]; then
+              # If inputs.endpoint differs from the default and is not empty, use inputs.endpoint
+              export endpoint_to_test=${{inputs.endpoint}}
+              # Remove the protocol (http:// or https://) from endpoint_to_test
+              endpoint_to_test=${endpoint_to_test#http://}
+              endpoint_to_test=${endpoint_to_test#https://}
+            else
+              # If inputs.endpoint is empty, strip the protocol from endpoint_to_test
+              endpoint_to_test=${endpoint_to_test#http://}
+              endpoint_to_test=${endpoint_to_test#https://}
+            fi
+          else
+            # If inputs.endpoint is "http://localhost:9545", strip the protocol from endpoint_to_test
+            endpoint_to_test=${endpoint_to_test#http://}
+            endpoint_to_test=${endpoint_to_test#https://}
+          fi
+
+          # Replace -=endpoint_to_test=- placeholder in the art.yaml file with the endpoint_to_test value
+          sed -i "s/-=endpoint_to_test=-/${endpoint_to_test}/g" .github/actions/performance-tests/art.yaml
+          # Replace -=http_request_protocol=- placeholder in the art.yaml file with the http_request_protocol value
+          sed -i "s/-=http_request_protocol=-/${http_request_protocol}/g" .github/actions/performance-tests/art.yaml
+
       - name: "EXECUTE:PERFORMANCE:TESTS"
         run: |
-          artillery run .github/actions/performance-tests/art.yaml --record --key ${{ secrets.ARTILLERY_KEY }} --output ./report.json
+          # Execute Artillery to run performance tests using the specified configuration file
+          artillery run .github/actions/performance-tests/art.yaml \
+            --record \
+            --key ${{ secrets.ARTILLERY_KEY }} \
+            --output ./report.json

       - name: "CHECK:TEST:RESULTS"
         run: |
           # Check Artillery exit status
           if [ $? -ne 0 ]; then
             echo "Artillery command failed to execute."
             exit 1
           fi
 
-          # Parse the results.json file to check for failed vusers and http response codes
+          # Parse the results.json file to check for failed vusers and HTTP response codes
           failed_vusers=$(jq '.aggregate.counters["vusers.failed"] // 0' ./report.json)
           http_codes_200=$(jq '.aggregate.counters["http.codes.200"] // 0' ./report.json)
           http_responses=$(jq '.aggregate.counters["http.responses"] // 0' ./report.json)
@@ -45,6 +98,26 @@
           else
             echo "EVM Performance Testing Successful"
           fi
+
+          # Read the JSON file and extract the required metrics
+          p99_values=$(jq -r '.aggregate.summaries["http.response_time"].p99, .aggregate.summaries["plugins.metrics-by-endpoint.response_time./"].p99, .aggregate.summaries["vusers.session_length"].p99' ./report.json)
+          p50_values=$(jq -r '.aggregate.summaries["http.response_time"].p50, .aggregate.summaries["plugins.metrics-by-endpoint.response_time./"].p50, .aggregate.summaries["vusers.session_length"].p50' ./report.json)
+
+          # Check P99 values
+          for p99 in $p99_values; do
+            if (( $(echo "$p99 > 2000" | bc -l) )); then
+              echo "P99 value $p99 exceeds 2000ms"
+              exit 1
+            fi
+          done
+
+          # Check P50 values
+          for p50 in $p50_values; do
+            if (( $(echo "$p50 > 40" | bc -l) )); then
+              echo "P50 value $p50 exceeds 40ms"
+              exit 1
+            fi
+          done

       - name: "GENERATE:REPORT"
         if: always()
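With the new `endpoint` input, the nightly job can also be kicked off on demand against an arbitrary EVM endpoint. A possible invocation using the GitHub CLI (assuming `gh` is installed and authenticated; the endpoint shown is hypothetical):

```bash
# Trigger the workflow manually against a custom endpoint (hypothetical URL).
gh workflow run ci-nightly-performance-testing.yaml -f endpoint=https://evm.example.com

# Omitting -f endpoint applies the input's declared default, http://localhost:9545.
gh workflow run ci-nightly-performance-testing.yaml
```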
1 change: 1 addition & 0 deletions changelog.md
@@ -102,6 +102,7 @@
 * [2335](https://github.com/zeta-chain/node/pull/2335) - ci: updated the artillery report to publish to artillery cloud
 * [2377](https://github.com/zeta-chain/node/pull/2377) - ci: adjusted sast-linters.yml to not scan itself, nor alert on removal of nosec.
 * [2400](https://github.com/zeta-chain/node/pull/2400) - ci: adjusted the performance test to pass or fail pipeline based on test results, alert slack, and launch network with state. Fixed connection issues as well.
+* [2425](https://github.com/zeta-chain/node/pull/2425) - Added verification to the performance testing pipeline to ensure p99 latencies stay at or below 2000ms and p50 latencies at or below 40ms; tweaked the config to 400 virtual users per second (425 is the current maximum before failures start).

### Documentation

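The latency gates recorded in the changelog entry above (p99 at or below 2000ms, p50 at or below 40ms) can also be re-checked locally against a saved report. A minimal sketch, assuming `./report.json` was produced by `artillery run --output ./report.json` and exposes two of the summary keys the workflow queries:

```bash
#!/usr/bin/env bash
# Local re-check of the p99/p50 gates applied by the CHECK:TEST:RESULTS step.
set -euo pipefail

report="./report.json"

# Pull the same style of summary metrics the workflow inspects.
p99_values=$(jq -r '.aggregate.summaries["http.response_time"].p99, .aggregate.summaries["vusers.session_length"].p99' "$report")
p50_values=$(jq -r '.aggregate.summaries["http.response_time"].p50, .aggregate.summaries["vusers.session_length"].p50' "$report")

# bc -l prints 1 when the comparison holds, 0 otherwise.
for p99 in $p99_values; do
  if (( $(echo "$p99 > 2000" | bc -l) )); then
    echo "P99 value $p99 exceeds 2000ms" && exit 1
  fi
done

for p50 in $p50_values; do
  if (( $(echo "$p50 > 40" | bc -l) )); then
    echo "P50 value $p50 exceeds 40ms" && exit 1
  fi
done

echo "Latency thresholds satisfied"
```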