feat(KONFLUX-2084): refactor-load-test-not-to-conflict-on-entity-names #1075
Conversation
force-pushed from 4457eee to b71fce2
/test ?
@naftalysh: The following commands are available to trigger required jobs:
Use
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/test load-test-ci-10u-10t
force-pushed from b71fce2 to 5c30cd7
/test load-test-ci-10u-10t
Looks good besides these two comments on duplicating functions.
force-pushed from 8be47be to 5c30cd7
force-pushed from 85f1d94 to c65d0e7
/test ?
@naftalysh: The following commands are available to trigger required jobs:
Use
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/test load-test-ci-10u-10t
/retest
1 similar comment
/retest
force-pushed from c65d0e7 to 085c0f8
/test load-test-ci-10u-10t
force-pushed from 085c0f8 to f65b436
/test load-test-ci-10u-10t
/retest ?
@naftalysh: The
Use
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest
/test load-test-ci-10u-10t-java
/retest
/test load-test-ci-10u-10t
1 similar comment
/test load-test-ci-10u-10t
force-pushed from a834d18 to 9469767
/test ?
@naftalysh: The following commands are available to trigger required jobs:
Use
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/test load-test-ci-10u-10t
force-pushed from fab1c0a to 21ea1e6
force-pushed from 8ea27a0 to 21ea1e6
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
randomized applicationName, itsName, ComponentDetectionQueryName and componentName
Signed-off-by: Naftaly Shprai <[email protected]>
force-pushed from b6e6244 to 98cce9a
Quality Gate passed
🚨 This is a CI system failure, please consult with the QE team. Click to view logs:
INFO[2024-04-10T13:53:08Z] ci-operator version v20240410-f3dd7606f
INFO[2024-04-10T13:53:08Z] Loading configuration from https://config.ci.openshift.org for redhat-appstudio/e2e-tests@main
INFO[2024-04-10T13:53:08Z] Resolved source https://github.com/redhat-appstudio/e2e-tests to main@f58bb7d1, merging: #1075 0ab48ae3 @naftalysh
WARN[2024-04-10T13:53:08Z] skipped directory "..2024_04_10_13_53_01.1949805208" when creating secret from directory "/secrets/ci-pull-credentials"
WARN[2024-04-10T13:53:08Z] skipped directory "..2024_04_10_13_53_01.3833452866" when creating secret from directory "/usr/local/load-test-ci-10u-10t-go-cluster-profile"
INFO[2024-04-10T13:53:08Z] Requesting 4.13 from https://api.integration.openshift.com/api/upgrades_info/graph?arch=amd64&channel=fast-4.13
INFO[2024-04-10T13:53:08Z] Resolved release latest to quay.io/openshift-release-dev/ocp-release@sha256:dd58c982a2166dcac5ce8f390f8b26b36df27ac765c4e012a670a9c0bac909df
INFO[2024-04-10T13:53:08Z] Using namespace https://console-openshift-console.apps.build05.l9oh.p1.openshiftapps.com/k8s/cluster/projects/ci-op-k95y89v7
INFO[2024-04-10T13:53:08Z] Running [input:root], [input:origin-centos-8], [input:ocp-4.12-upi-installer], [input:ocp-4.14-upi-installer], [release:latest], src, e2e-test-runner, [output:stable:e2e-test-runner], [images], load-test-ci-10u-10t-go
INFO[2024-04-10T13:53:09Z] Tagging ocp/builder:rhel-9-golang-1.20-openshift-4.14 into pipeline:root.
INFO[2024-04-10T13:53:09Z] Tagging ocp/4.14:upi-installer into pipeline:ocp-4.14-upi-installer.
INFO[2024-04-10T13:53:09Z] Tagging ocp/4.12:upi-installer into pipeline:ocp-4.12-upi-installer.
INFO[2024-04-10T13:53:09Z] Tagging origin/centos:8 into pipeline:origin-centos-8.
INFO[2024-04-10T13:53:09Z] Building src
INFO[2024-04-10T13:53:09Z] Found existing build "src-amd64"
INFO[2024-04-10T13:53:09Z] Build src-amd64 succeeded after 2m7s
WARN[2024-04-10T13:53:09Z] Failed gathering successful build src-amd64 logs into artifacts. error=error: Unable to retrieve logs for build src-amd64: pod "src-amd64-build" not found
INFO[2024-04-10T13:53:09Z] Importing release image latest.
INFO[2024-04-10T13:53:09Z] Requesting 4.13 from https://api.integration.openshift.com/api/upgrades_info/graph?arch=amd64&channel=fast-4.13
INFO[2024-04-10T13:53:09Z] Resolved release latest to quay.io/openshift-release-dev/ocp-release@sha256:dd58c982a2166dcac5ce8f390f8b26b36df27ac765c4e012a670a9c0bac909df
INFO[2024-04-10T13:53:09Z] Image ci-op-k95y89v7/pipeline:src created  for-build=src
INFO[2024-04-10T13:53:09Z] Building e2e-test-runner
INFO[2024-04-10T13:53:09Z] Found existing build "e2e-test-runner-amd64"
INFO[2024-04-10T13:53:09Z] Build e2e-test-runner-amd64 succeeded after 1m29s
INFO[2024-04-10T13:53:09Z] Image ci-op-k95y89v7/pipeline:e2e-test-runner created  for-build=e2e-test-runner
INFO[2024-04-10T13:53:31Z] Imported release 4.13.39 created at 2024-04-04 20:45:44 +0000 UTC with 183 images to tag release:latest
INFO[2024-04-10T13:53:31Z] Tagging e2e-test-runner into stable
INFO[2024-04-10T13:53:31Z] Acquiring leases for test load-test-ci-10u-10t-go: [aws-rhtap-performance-quota-slice]
INFO[2024-04-10T13:53:32Z] Acquired 1 lease(s) for aws-rhtap-performance-quota-slice: [eu-west-1--aws-rhtap-performance-quota-slice-5]
INFO[2024-04-10T13:53:32Z] Running multi-stage test load-test-ci-10u-10t-go
INFO[2024-04-10T13:53:32Z] Running multi-stage phase pre
INFO[2024-04-10T13:53:32Z] Running step load-test-ci-10u-10t-go-ipi-conf.
INFO[2024-04-10T13:53:40Z] Step load-test-ci-10u-10t-go-ipi-conf succeeded after 7s.
INFO[2024-04-10T13:53:40Z] Running step load-test-ci-10u-10t-go-ipi-conf-telemetry.
INFO[2024-04-10T13:53:48Z] Step load-test-ci-10u-10t-go-ipi-conf-telemetry succeeded after 8s.
INFO[2024-04-10T13:53:48Z] Running step load-test-ci-10u-10t-go-ipi-conf-aws.
INFO[2024-04-10T13:54:00Z] Step load-test-ci-10u-10t-go-ipi-conf-aws succeeded after 11s.
INFO[2024-04-10T13:54:00Z] Running step load-test-ci-10u-10t-go-ipi-install-monitoringpvc.
INFO[2024-04-10T13:54:08Z] Step load-test-ci-10u-10t-go-ipi-install-monitoringpvc succeeded after 8s.
INFO[2024-04-10T13:54:08Z] Running step load-test-ci-10u-10t-go-ipi-install-rbac.
INFO[2024-04-10T13:54:17Z] Step load-test-ci-10u-10t-go-ipi-install-rbac succeeded after 8s.
INFO[2024-04-10T13:54:17Z] Running step load-test-ci-10u-10t-go-openshift-cluster-bot-rbac.
INFO[2024-04-10T13:54:25Z] Step load-test-ci-10u-10t-go-openshift-cluster-bot-rbac succeeded after 8s.
INFO[2024-04-10T13:54:25Z] Running step load-test-ci-10u-10t-go-ipi-install-hosted-loki.
INFO[2024-04-10T13:54:34Z] Step load-test-ci-10u-10t-go-ipi-install-hosted-loki succeeded after 9s.
INFO[2024-04-10T13:54:34Z] Running step load-test-ci-10u-10t-go-ipi-install-install.
INFO[2024-04-10T13:54:55Z] Logs for container test in pod load-test-ci-10u-10t-go-ipi-install-install:
INFO[2024-04-10T13:54:55Z] Installing from release registry.build05.ci.openshift.org/ci-op-k95y89v7/release@sha256:dd58c982a2166dcac5ce8f390f8b26b36df27ac765c4e012a670a9c0bac909df
install-config.yaml
-------------------
apiVersion: v1
metadata:
  name: ci-op-k95y89v7-27aba
Container test exited with code 3, reason Error
quayio-pull-through-cache-us-east-1-ci.apps.ci.l2s4.p1.openshiftapps.com
name: ci-op-k95y89v7-27aba
level=error msg= status code: 403, request id: 21b479f9-959f-436f-9031-52f59106d2aa, compute[0].platform.aws: Internal error: error listing instance types: fetching instance types: UnauthorizedOperation: You are not authorized to perform this operation. User: arn:aws:iam::992382442726:user/prow-service-account is not authorized to perform: ec2:DescribeInstanceTypes with an explicit deny in a service control policy
level=error msg= status code: 403, request id: 4d3cc0da-5454-4456-8504-d0851fdeb545]
{"component":"entrypoint","error":"wrapped process failed: exit status 3","file":"k8s.io/test-infra/prow/entrypoint/run.go:84","func":"k8s.io/test-infra/prow/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-04-10T13:54:55Z"}
Link to job on registry info site: https://steps.ci.openshift.org/job?org=redhat-appstudio&repo=e2e-tests&branch=main&test=load-test-ci-10u-10t-go, "load-test-ci-10u-10t-go" post steps failed: "load-test-ci-10u-10t-go" pod "load-test-ci-10u-10t-go-redhat-appstudio-gather" failed: could not watch pod: the pod ci-op-k95y89v7/load-test-ci-10u-10t-go-redhat-appstudio-gather failed after 7s (failed containers: test): ContainerFailed one or more containers exited
found: /tmp/kubeconfig-2691735810
W0410 13:55:02.627098 897 loader.go:222] Config not found: /tmp/kubeconfig-2691735810
W0410 13:55:02.628723 875 loader.go:222] Config not found: /tmp/kubeconfig-2691735810
W0410 13:55:02.643106 910 loader.go:222] Config not found: /tmp/kubeconfig-2691735810
W0410 13:55:02.643249 917 loader.go:222] Config not found: /tmp/kubeconfig-2691735810
W0410 13:55:02.646649 941 loader.go:222] Config not found: /tmp/kubeconfig-2691735810
W0410 13:55:02.652578 945 loader.go:222] Config not found: /tmp/kubeconfig-2691735810
W0410 13:55:02.660122 971 loader.go:222] Config not found: /tmp/kubeconfig-2691735810
W0410 13:55:02.665259 961 loader.go:222] Config not found: /tmp/kubeconfig-2691735810
W0410 13:55:02.666979 981 loader.go:222] Config not found: /tmp/kubeconfig-2691735810
W0410 13:55:02.671361 1014 loader.go:222] Config not found: /tmp/kubeconfig-2691735810
W0410 13:55:02.672288 995 loader.go:222] Config not found: /tmp/kubeconfig-2691735810
W0410 13:55:02.674772 1008 loader.go:222] Config not found: /tmp/kubeconfig-2691735810
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"k8s.io/test-infra/prow/entrypoint/run.go:84","func":"k8s.io/test-infra/prow/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-04-10T13:55:02Z"}
Link to job on registry info site: https://steps.ci.openshift.org/job?org=redhat-appstudio&repo=e2e-tests&branch=main&test=load-test-ci-10u-10t-go]
INFO[2024-04-10T13:55:43Z] Reporting job state 'failed' with reason 'executing_graph:step_failed:utilizing_lease:executing_test:executing_multi_stage_test'
apiVersion: v1
name: ci-op-lshnkqqm-0d849
quayio-pull-through-cache-us-east-1-ci.apps.ci.l2s4.p1.openshiftapps.com
source: quay.io
level=info msg=Credentials loaded from the "default" profile in file "/var/run/secrets/ci.openshift.io/cluster-profile/.awscred"
level=error msg=failed to fetch Master Machines: failed to load asset "Install Config": failed to create install config: [controlPlane.platform.aws: Internal error: error listing instance types: fetching instance types: UnauthorizedOperation: You are not authorized to perform this operation. User: arn:aws:iam::992382442726:user/prow-service-account is not authorized to perform: ec2:DescribeInstanceTypes with an explicit deny in a service control policy
level=error msg= status code: 403, request id: 1d4793f8-8020-472c-9145-f9f557537faf, compute[0].platform.aws: Internal error: error listing instance types: fetching instance types: UnauthorizedOperation: You are not authorized to perform this operation. User: arn:aws:iam::992382442726:user/prow-service-account is not authorized to perform: ec2:DescribeInstanceTypes with an explicit deny in a service control policy
level=error msg= status code: 403, request id: 266668b9-e60d-4c41-b73d-35fb5fa9e965]
Create manifests exit code: 3
Tear down the background process of copying kube config
Setup phase finished, prepare env for next steps
Copying log bundle...
Removing REDACTED info from log...
Unsupported cluster type 'aws' to collect machine IDs
Copying required artifacts to shared dir
cp: cannot stat '/tmp/installer/auth/kubeconfig': No such file or directory
cp: cannot stat '/tmp/installer/auth/kubeadmin-password': No such file or directory
cp: cannot stat '/tmp/installer/metadata.json': No such file or directory
{"component":"entrypoint","error":"wrapped process failed: exit status 3","file":"k8s.io/test-infra/prow/entrypoint/run.go:84","func":"k8s.io/test-infra/prow/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-04-10T10:37:39Z"}
error: failed to execute wrapped command: exit status 3
Link to step on registry info site: https://steps.ci.openshift.org/reference/ipi-install-install
Link to job on registry info site: https://steps.ci.openshift.org/job?org=redhat-appstudio&repo=e2e-tests&branch=main&test=load-test-ci-10u-10t, "load-test-ci-10u-10t" post steps failed: "load-test-ci-10u-10t" pod "load-test-ci-10u-10t-redhat-appstudio-gather" failed: could not watch pod: the pod ci-op-lshnkqqm/load-test-ci-10u-10t-redhat-appstudio-gather failed after 7s (failed containers: test): ContainerFailed one or more containers exited
found: /tmp/kubeconfig-3197396966
W0410 10:37:46.887598 829 loader.go:222] Config not found: /tmp/kubeconfig-3197396966
W0410 10:37:46.904073 907 loader.go:222] Config not found: /tmp/kubeconfig-3197396966
W0410 10:37:46.906222 892 loader.go:222] Config not found: /tmp/kubeconfig-3197396966
W0410 10:37:46.923886 931 loader.go:222] Config not found: /tmp/kubeconfig-3197396966
W0410 10:37:46.925150 912 loader.go:222] Config not found: /tmp/kubeconfig-3197396966
W0410 10:37:46.927113 886 loader.go:222] Config not found: /tmp/kubeconfig-3197396966
W0410 10:37:46.936050 966 loader.go:222] Config not found: /tmp/kubeconfig-3197396966
W0410 10:37:46.936432 944 loader.go:222] Config not found: /tmp/kubeconfig-3197396966
W0410 10:37:46.945861 952 loader.go:222] Config not found: /tmp/kubeconfig-3197396966
W0410 10:37:46.951535 991 loader.go:222] Config not found: /tmp/kubeconfig-3197396966
W0410 10:37:46.953808 987 loader.go:222] Config not found: /tmp/kubeconfig-3197396966
W0410 10:37:46.960241 1003 loader.go:222] Config not found: /tmp/kubeconfig-3197396966
error: default cluster has no server defined
{"component":"entrypoint","error":"wrapped process failed: exit status 1","file":"k8s.io/test-infra/prow/entrypoint/run.go:84","func":"k8s.io/test-infra/prow/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-04-10T10:37:46Z"}
error: failed to execute wrapped command: exit status 1
Link to step on registry info site: https://steps.ci.openshift.org/reference/redhat-appstudio-gather
Link to job on registry info site: https://steps.ci.openshift.org/job?org=redhat-appstudio&repo=e2e-tests&branch=main&test=load-test-ci-10u-10t]
INFO[2024-04-10T10:38:29Z] Reporting job state 'failed' with reason 'executing_graph:step_failed:utilizing_lease:executing_test:executing_multi_stage_test'
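Both install attempts above fail on the same AWS permission: the Prow service account is explicitly denied ec2:DescribeInstanceTypes by a service control policy. A hedged sketch of how that denial could be confirmed outside the installer — the credentials path comes from the log above, while the region (from the acquired lease) and the env-var approach are assumptions:

```bash
# Sketch: reproduce the denied API call with the job's AWS profile.
export AWS_SHARED_CREDENTIALS_FILE=/var/run/secrets/ci.openshift.io/cluster-profile/.awscred
aws ec2 describe-instance-types --region eu-west-1 --max-items 1
# Expected here: UnauthorizedOperation (403) naming
# arn:aws:iam::992382442726:user/prow-service-account and the SCP explicit deny.
```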
metadata:
  name: subscription-toolchain-host-operator-latest
  namespace: toolchain-host-operator
spec:
  name: toolchain-host-operator
  source: source-toolchain-host-operator-latest
  sourceNamespace: toolchain-host-operator'
operatorgroup.operators.coreos.com/og-toolchain-host-operator created
subscription.operators.coreos.com/subscription-toolchain-host-operator-latest created
+ PARAMS='-crd toolchainconfigs.toolchain.dev.openshift.com -cs source-toolchain-host-operator-latest -n toolchain-host-operator -s subscription-toolchain-host-operator-latest'
+ /tmp/wait-until-is-installed.sh -crd toolchainconfigs.toolchain.dev.openshift.com -cs source-toolchain-host-operator-latest -n toolchain-host-operator -s subscription-toolchain-host-operator-latest
Waiting for CRD toolchainconfigs.toolchain.dev.openshift.com to be available in the cluster...
0. attempt (out of 200) of waiting for CRD toolchainconfigs.toolchain.dev.openshift.com to be available in the cluster and the InstallPlan to be complete
1. attempt (out of 200) of waiting for CRD toolchainconfigs.toolchain.dev.openshift.com to be available in the cluster and the InstallPlan to be complete
2. attempt (out of 200) of waiting for CRD toolchainconfigs.toolchain.dev.openshift.com to be available in the cluster and the InstallPlan to be complete
3. attempt (out of 200) of waiting for CRD toolchainconfigs.toolchain.dev.openshift.com to be available in the cluster and the InstallPlan to be complete
4. attempt (out of 200) of waiting for CRD toolchainconfigs.toolchain.dev.openshift.com to be available in the cluster and the InstallPlan to be complete
5. attempt (out of 200) of waiting for CRD toolchainconfigs.toolchain.dev.openshift.com to be available in the cluster and the InstallPlan to be complete
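The numbered attempt lines are emitted by wait-until-is-installed.sh while it polls for the CRD and the InstallPlan. A minimal sketch of that polling pattern (assumed shape, not the script's exact code):

```bash
# Assumed sketch of the wait loop: poll until the CRD exists, up to 200 attempts.
CRD=toolchainconfigs.toolchain.dev.openshift.com
for ATTEMPT in $(seq 0 200); do
    echo "${ATTEMPT}. attempt (out of 200) of waiting for CRD ${CRD}"
    if oc get crd "${CRD}" >/dev/null 2>&1; then
        break    # the real script also checks that the InstallPlan is complete
    fi
    sleep 1
done
```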
make run-cicd-script SCRIPT_PATH=scripts/add-cluster.sh SCRIPT_PARAMS="-t member -mn toolchain-member-operator -hn toolchain-host-operator "
curl -sSL https://raw.githubusercontent.com/codeready-toolchain/toolchain-cicd/master/scripts/add-cluster.sh > /tmp/add-cluster.sh && chmod +x /tmp/add-cluster.sh && OWNER_AND_BRANCH_LOCATION=codeready-toolchain/toolchain-cicd/master /tmp/add-cluster.sh -t member -mn toolchain-member-operator -hn toolchain-host-operator
serviceaccount/toolchaincluster-member created
role.rbac.authorization.k8s.io/toolchaincluster-member created
clusterrole.rbac.authorization.k8s.io/toolchaincluster-member-toolchain-member-operator-toolchaincluster created
clusterrolebinding.rbac.authorization.k8s.io/toolchaincluster-member-toolchain-member-operator-toolchaincluster created
rolebinding.rbac.authorization.k8s.io/toolchaincluster-member created
Getting member SA token
Creating member secret
secret/toolchaincluster-member-ci-op-6l8r6zsw-0d849.rhtap-perfscale.devcluster.openshift.com created
Creating ToolchainCluster representation of member in host:
toolchaincluster.toolchain.dev.openshift.com/member-ci-op-6l8r6zsw-0d849.rhtap-perfscale.devcluster.openshi1 created
if [[ false == true ]]; then make run-cicd-script SCRIPT_PATH=scripts/add-cluster.sh SCRIPT_PARAMS="-t member -mn -hn toolchain-host-operator -mm 2 "; fi
make run-cicd-script SCRIPT_PATH=scripts/add-cluster.sh SCRIPT_PARAMS="-t host -mn toolchain-member-operator -hn toolchain-host-operator "
curl -sSL https://raw.githubusercontent.com/codeready-toolchain/toolchain-cicd/master/scripts/add-cluster.sh > /tmp/add-cluster.sh && chmod +x /tmp/add-cluster.sh && OWNER_AND_BRANCH_LOCATION=codeready-toolchain/toolchain-cicd/master /tmp/add-cluster.sh -t host -mn toolchain-member-operator -hn toolchain-host-operator
toolchain-host-operator
toolchain-member-operator
serviceaccount/toolchaincluster-host created
role.rbac.authorization.k8s.io/toolchaincluster-host created
rolebinding.rbac.authorization.k8s.io/toolchaincluster-host created
Getting host SA token
SA token retrieved
Using standard OpenShift certificate
Fetching information about the clusters
API endpoint retrieved: https://api.ci-op-6l8r6zsw-0d849.rhtap-perfscale.devcluster.openshift.com:6443
Joining cluster name: ci-op-6l8r6zsw-0d849.rhtap-perfscale.devcluster.openshift.com
API endpoint of the cluster it is joining to: https://api.ci-op-6l8r6zsw-0d849.rhtap-perfscale.devcluster.openshift.com:6443
The cluster name it is joining to: ci-op-6l8r6zsw-0d849.rhtap-perfscale.devcluster.openshift.com
Creating host secret
secret/toolchaincluster-host-ci-op-6l8r6zsw-0d849.rhtap-perfscale.devcluster.openshift.com created
Creating ToolchainCluster representation of host in member:
toolchaincluster.toolchain.dev.openshift.com/host-ci-op-6l8r6zsw-0d849.rhtap-perfscale.devcluster.openshift1 created
if [[ false == true ]]; then make run-cicd-script SCRIPT_PATH=scripts/add-cluster.sh SCRIPT_PARAMS="-t host -mn -hn toolchain-host-operator -mm 2 "; fi
echo "Restart host operator pods so it can get the ToolchainCluster CRs while it's starting up".
Restart host operator pods so it can get the ToolchainCluster CRs while it's starting up.
pod "host-operator-controller-manager-69c688cb9-mqm6t" deleted
for MEMBER_NAME in $(oc get toolchaincluster -n toolchain-host-operator --no-headers -o custom-columns=":metadata.name"); do
  oc process -p TOOLCHAINCLUSTER_NAME=${MEMBER_NAME} -p SPACEPROVISIONERCONFIG_NAME=${MEMBER_NAME} -p SPACEPROVISIONERCONFIG_NS=toolchain-host-operator -f deploy/host-operator/default-spaceprovisionerconfig.yaml | oc apply -f -;
done
spaceprovisionerconfig.toolchain.dev.openshift.com/member-ci-op-6l8r6zsw-0d849.rhtap-perfscale.devcluster.openshi1 created
ignore if these resources already exist (nstemplatetiers may have already been created by operator)
oc create -f deploy/host-operator/appstudio-dev/ -n toolchain-host-operator
secret/host-operator-secret created
toolchainconfig.toolchain.dev.openshift.com/config created
patch toolchainconfig to prevent webhook deploy for 2nd member, a 2nd webhook deploy causes the webhook verification in e2e tests to fail
since e2e environment has 2 member operators running in the same cluster
for details on how the TOOLCHAINCLUSTER_NAME is composed see https://github.com/codeready-toolchain/toolchain-cicd/blob/master/scripts/add-cluster.sh
if [[ false == true ]]; then
TOOLCHAIN_CLUSTER_NAME=$(oc get toolchaincluster -n toolchain-host-operator --no-headers -o custom-columns=":metadata.name" | grep "2$");
if [[ -z ${TOOLCHAIN_CLUSTER_NAME} ]]; then
echo "ERROR: no ToolchainCluster for member 2 found";
exit 1;
fi;
echo "TOOLCHAIN_CLUSTER_NAME ${TOOLCHAIN_CLUSTER_NAME}";
echo "ENVIRONMENT appstudio-dev";
PATCH_FILE=/tmp/patch-toolchainconfig_04141248.json;
echo "{"spec":{"members":{"specificPerMemberCluster":{"${TOOLCHAIN_CLUSTER_NAME}":{"webhook":{"deploy":false},"webConsolePlugin":{"deploy":true},"environment":"appstudio-dev"}}}}}" > $PATCH_FILE;
oc patch toolchainconfig config -n toolchain-host-operator --type=merge --patch "$(cat $PATCH_FILE)";
fi;
echo "Restart host operator pods so that configuration referenced in main.go can get the updated ToolchainConfig CRs at startup"
Restart host operator pods so that configuration referenced in main.go can get the updated ToolchainConfig CRs at startup
oc delete pods --namespace toolchain-host-operator -l control-plane=controller-manager
pod "host-operator-controller-manager-69c688cb9-vxw9r" deleted
if it's not part of e2e test execution, then delete registration-service pods in case they already exist so that the ToolchainConfig will be reloaded
oc delete pods --namespace toolchain-host-operator -l name=registration-service || true
pod "registration-service-76dc79598c-ftjgw" deleted
pod "registration-service-76dc79598c-g8wh5" deleted
pod "registration-service-76dc79598c-nzmdn" deleted
make run-cicd-script SCRIPT_PATH=scripts/ci/manage-member-operator.sh SCRIPT_PARAMS="-po false -io true -mn toolchain-member-operator -qn codeready-toolchain-test -ds 04141248 -dl true "
make[3]: Entering directory '/tmp/toolchain-e2e'
which: no yamllint in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/go/bin:/cli:/tmp/bin)
running script from GH toolchain-cicd repo (using latest version in master)...
curl -sSL https://raw.githubusercontent.com/codeready-toolchain/toolchain-cicd/master/scripts/ci/manage-member-operator.sh > /tmp/manage-member-operator.sh && chmod +x /tmp/manage-member-operator.sh && OWNER_AND_BRANCH_LOCATION=codeready-toolchain/toolchain-cicd/master /tmp/manage-member-operator.sh -po false -io true -mn toolchain-member-operator -qn codeready-toolchain-test -ds 04141248 -dl true
+ MANAGE_OPERATOR_FILE=scripts/ci/manage-operator.sh
+ [[ -f scripts/ci/manage-operator.sh ]]
+ [[ -f /go/src/github.com/codeready-toolchain/toolchain-cicd/scripts/ci/manage-operator.sh ]]
+ source /dev/stdin
++ curl -sSL https://raw.githubusercontent.com/codeready-toolchain/toolchain-cicd/master/scripts/ci/manage-operator.sh
++ [[ -n true ]]
++ set -ex
++ WAS_ALREADY_PAIRED_FILE=/tmp/toolchain_e2e_already_paired
++ OWNER_AND_BRANCH_LOCATION=codeready-toolchain/toolchain-cicd/master
+ [[ true != \t\r\u\e ]]
+ INDEX_IMAGE_LOC=quay.io/codeready-toolchain/member-operator-index:latest
+ [[ true == \t\r\u\e ]]
+ OPERATOR_NAME=toolchain-member-operator
+ INDEX_IMAGE_NAME=member-operator-index
+ NAMESPACE=toolchain-member-operator
+ EXPECT_CRD=memberoperatorconfigs.toolchain.dev.openshift.com
+ install_operator
++ echo toolchain-member-operator
++ tr - ' '
++ awk '{for (i=1;i<=NF;i++) $i=toupper(substr($i,1,1)) substr($i,2)} 1'
+ DISPLAYNAME='Toolchain Member Operator'
+ [[ -z quay.io/codeready-toolchain/member-operator-index:latest ]]
+ GIT_COMMIT_ID=latest
+ INDEX_IMAGE=quay.io/codeready-toolchain/member-operator-index:latest
+ CHANNEL=staging
+ CATALOGSOURCE_NAME=source-toolchain-member-operator-latest
+ SUBSCRIPTION_NAME=subscription-toolchain-member-operator-latest
++ oc get Subscription -n toolchain-member-operator -o name
++ grep subscription-toolchain-member-operator
++ oc get CatalogSource -n toolchain-member-operator -o name
++ grep source-toolchain-member-operator
++ oc get csv -n toolchain-member-operator -o name
++ grep toolchain-member-operator
+ [[ '' == \t\r\u\e ]]
+ CATALOG_SOURCE_OBJECT='apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: source-toolchain-member-operator-latest
spec:
  sourceType: grpc
  image: quay.io/codeready-toolchain/member-operator-index:latest
  displayName: Toolchain Member Operator
  publisher: Red Hat
  grpcPodConfig:
    securityContextConfig: restricted
  updateStrategy:
    registryPoll:
      interval: 1m0s'
+ echo 'objects to be created in order to create CatalogSource'
objects to be created in order to create CatalogSource
catalogsource.operators.coreos.com/source-toolchain-member-operator-latest created
+ echo 'Waiting until the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator gets ready'
Waiting until the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator gets ready
+ NEXT_WAIT_TIME=0
+ [[ 0 -eq 100 ]]
+ echo '0. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
0. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 1 -eq 100 ]]
+ echo '1. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
1. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 2 -eq 100 ]]
+ echo '2. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
2. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 3 -eq 100 ]]
+ echo '3. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
3. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 4 -eq 100 ]]
+ echo '4. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
4. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 5 -eq 100 ]]
+ echo '5. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
5. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 6 -eq 100 ]]
+ echo '6. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
6. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 7 -eq 100 ]]
+ echo '7. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
7. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 8 -eq 100 ]]
+ echo '8. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
8. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 9 -eq 100 ]]
+ echo '9. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
9. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 10 -eq 100 ]]
+ echo '10. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
10. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 11 -eq 100 ]]
+ echo '11. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
11. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 12 -eq 100 ]]
+ echo '12. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
12. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 13 -eq 100 ]]
+ echo '13. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
13. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 14 -eq 100 ]]
+ echo '14. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
14. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ [[ 15 -eq 100 ]]
+ echo '15. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
15. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
++ true
+ [[ -z '' ]]
+ [[ 16 -eq 100 ]]
+ echo '16. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.'
16. attempt (out of 100) of waiting for the CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator to be ready.
+ sleep 1
++ oc get catalogsource source-toolchain-member-operator-latest -n toolchain-member-operator -o 'jsonpath=${.status.connectionState.lastObservedState}'
++ grep READY
+ [[ -z $READY ]]
+ echo 'The CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator is ready - installing the operator'
The CatalogSource source-toolchain-member-operator-latest in the namespace toolchain-member-operator is ready - installing the operator
+ INSTALL_OBJECTS='apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: og-toolchain-member-operator
spec:
  targetNamespaces:
  - toolchain-member-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: subscription-toolchain-member-operator-latest
  namespace: toolchain-member-operator
spec:
  channel: staging
  installPlanApproval: Automatic
  name: toolchain-member-operator
  source: source-toolchain-member-operator-latest
  sourceNamespace: toolchain-member-operator'
+ echo 'objects to be created in order to install operator'
objects to be created in order to install operator
+ cat
+ oc apply -f -
operatorgroup.operators.coreos.com/og-toolchain-member-operator created
subscription.operators.coreos.com/subscription-toolchain-member-operator-latest created
+ wait_until_is_installed
+ WAIT_UNTIL_IS_INSTALLED=scripts/ci/wait-until-is-installed.sh
+ PARAMS='-crd memberoperatorconfigs.toolchain.dev.openshift.com -cs source-toolchain-member-operator-latest -n toolchain-member-operator -s subscription-toolchain-member-operator-latest'
+ [[ -f scripts/ci/wait-until-is-installed.sh ]]
+ [[ -f /go/src/github.com/codeready-toolchain/toolchain-cicd/scripts/ci/wait-until-is-installed.sh ]]
++ basename scripts/ci/wait-until-is-installed.sh
+ SCRIPT_NAME=wait-until-is-installed.sh
+ curl -sSL https://raw.githubusercontent.com/codeready-toolchain/toolchain-cicd/master/scripts/ci/wait-until-is-installed.sh
+ chmod +x /tmp/wait-until-is-installed.sh
+ OWNER_AND_BRANCH_LOCATION=codeready-toolchain/toolchain-cicd/master
+ /tmp/wait-until-is-installed.sh -crd memberoperatorconfigs.toolchain.dev.openshift.com -cs source-toolchain-member-operator-latest -n toolchain-member-operator -s subscription-toolchain-member-operator-latest
Waiting for CRD memberoperatorconfigs.toolchain.dev.openshift.com to be available in the cluster...
0. attempt (out of 200) of waiting for CRD memberoperatorconfigs.toolchain.dev.openshift.com to be available in the cluster and the InstallPlan to be complete
1. attempt (out of 200) of waiting for CRD memberoperatorconfigs.toolchain.dev.openshift.com to be available in the cluster and the InstallPlan to be complete
2. attempt (out of 200) of waiting for CRD memberoperatorconfigs.toolchain.dev.openshift.com to be available in the cluster and the InstallPlan to be complete
3. attempt (out of 200) of waiting for CRD memberoperatorconfigs.toolchain.dev.openshift.com to be available in the cluster and the InstallPlan to be complete
4. attempt (out of 200) of waiting for CRD memberoperatorconfigs.toolchain.dev.openshift.com to be available in the cluster and the InstallPlan to be complete
+ [[ -n '' ]]
make[3]: Leaving directory '/tmp/toolchain-e2e'
Operators are successfully deployed using the appstudio-dev environment.
Unseal Progress 1/3
Key Value
Seal Type shamir
Initialized true
Sealed true
Total Shares 5
Threshold 3
Unseal Progress 2/3
Unseal Nonce 23e151d9-3b0c-81e1-5382-17efb7ba869e
Version 1.11.3
Build Date 2022-08-26T10:27:10Z
Storage Type file
HA Enabled false
unsealing ...
Key Value
Seal Type shamir
Initialized true
Sealed false
Total Shares 5
Threshold 3
Version 1.11.3
Build Date 2022-08-26T10:27:10Z
Storage Type file
Cluster Name vault-cluster-5f8ed37f
Cluster ID c3dc7026-4919-bd17-7c59-4a4a0f25c84b
HA Enabled false
unsealed
secret/vault-root-token created
enabling audit log ...
Success! Enabled the file audit device at: file/
creating SPI secret engine ...
Success! Enabled the kv-v2 secrets engine at: spi/
Success! Uploaded policy: spi
Vault initialization was completed
Initializing SPI
setup kubernetes authentication ...
Success! Enabled kubernetes auth method at: kubernetes/
Success! Data written to: auth/kubernetes/role/spi-controller-manager
Success! Data written to: auth/kubernetes/role/spi-oauth
Success! Data written to: auth/kubernetes/config
setup approle authentication ...
Success! Enabled approle auth method at: approle/
Success! Data written to: auth/approle/role/spi-operator
Success! Data written to: auth/approle/role/spi-oauth
secret/vault-approle-spi-operator created
secret/vault-approle-spi-oauth created
SPI initialization was completed
Initializing remote secret controller
Success! Data written to: auth/approle/role/remote-secret-operator
secret yaml with Vault credentials prepared
restarting vault pod 'vault-0' ...
secret/vault-approle-remote-secret-operator created
Remote secret controller initialization was completed
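The "Success!" lines above correspond to standard Vault CLI operations. A reconstruction of the likely commands — the mount paths, policy, and role names are taken from the log, while the audit file path, service-account bindings, and other options are assumptions:

```bash
# Assumed reconstruction of the Vault setup steps logged above.
vault audit enable file file_path=/vault/audit/audit.log   # "Enabled the file audit device at: file/"
vault secrets enable -path=spi kv-v2                       # "Enabled the kv-v2 secrets engine at: spi/"
vault policy write spi spi-policy.hcl                      # "Uploaded policy: spi" (file name assumed)
vault auth enable kubernetes                               # "Enabled kubernetes auth method at: kubernetes/"
vault write auth/kubernetes/role/spi-controller-manager \
    bound_service_account_names=spi-controller-manager \
    bound_service_account_namespaces=spi-system \
    policies=spi                                           # SA name/namespace assumed
vault write auth/kubernetes/config kubernetes_host=https://kubernetes.default.svc
vault auth enable approle                                  # "Enabled approle auth method at: approle/"
vault write auth/approle/role/spi-operator token_policies=spi
vault write auth/approle/role/remote-secret-operator token_policies=spi
```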
namespace/openshift-pipelines configured
namespace/build-service configured
namespace/integration-service configured
secret/pipelines-as-code-secret created
Configured pipelines-as-code-secret secret in openshift-pipelines namespace
application.argoproj.io/postgres patched
application.argoproj.io/multi-platform-controller-in-cluster-local patched
application.argoproj.io/internal-services-in-cluster-local patched
application.argoproj.io/toolchain-host-operator-in-cluster-local patched
application.argoproj.io/application-api-in-cluster-local patched
application.argoproj.io/enterprise-contract-in-cluster-local patched
application.argoproj.io/perf-team-prometheus-reader-in-cluster-local patched
application.argoproj.io/integration-in-cluster-local patched
application.argoproj.io/ingresscontroller-in-cluster-local patched
application.argoproj.io/build-templates-in-cluster-local patched
application.argoproj.io/gitops-in-cluster-local patched
application.argoproj.io/dev-sso-in-cluster-local patched
application.argoproj.io/all-application-sets patched
application.argoproj.io/spi-vault-in-cluster-local patched
application.argoproj.io/dora-metrics-in-cluster-local patched
application.argoproj.io/monitoring-workload-prometheus-in-cluster-local patched
application.argoproj.io/has-in-cluster-local patched
application.argoproj.io/disable-csvcopy-in-cluster-local patched
application.argoproj.io/image-controller-in-cluster-local patched
application.argoproj.io/toolchain-member-operator-in-cluster-local patched
application.argoproj.io/remote-secret-controller-in-cluster-local patched
application.argoproj.io/spi-in-cluster-local patched
application.argoproj.io/jvm-build-service-in-cluster-local patched
application.argoproj.io/project-controller-in-cluster-local patched
application.argoproj.io/repository-validator-in-cluster-local patched
application.argoproj.io/monitoring-workload-grafana-in-cluster-local patched
application.argoproj.io/build-service-in-cluster-local patched
application.argoproj.io/release-in-cluster-local patched
application.argoproj.io/pipeline-service-in-cluster-local patched
postgres OutOfSync Healthy
Waiting 10 seconds for application sync
build-service-in-cluster-local Synced Progressing
gitops-in-cluster-local Synced Progressing
jvm-build-service-in-cluster-local Unknown Healthy
monitoring-workload-grafana-in-cluster-local OutOfSync Missing
multi-platform-controller-in-cluster-local OutOfSync Missing
pipeline-service-in-cluster-local OutOfSync Missing
jvm-build-service-in-cluster-local failed with:
[{"lastTransitionTime":"2024-04-04T14:18:48Z","message":"rpc error: code = Unknown desc =kustomize build .components/jvm-build-service/development
failed exit status 1: Error: accumulating resources: accumulation err='accumulating resources from '../base': '.components/jvm-build-service/base' must resolve to a file': recursed accumulation of path '.components/jvm-build-service/base': accumulating resources: accumulation err='accumulating resources from 'https://github.com/redhat-appstudio/jvm-build-service/deploy/operator/config?ref=cac2c46771e4ce11554e7032b90aab221d928645': URL is a git repository': hit 27s timeout running '/usr/bin/git fetch --depth=1 https://github.com/redhat-appstudio/jvm-build-service cac2c46771e4ce11554e7032b90aab221d928645'","type":"ComparisonError"}]
Switched to branch 'main'
Your branch is up to date with 'origin/main'.
Error: error when bootstrapping cluster: exit status 1
make: *** [Makefile:40: local/cluster/prepare] Error 1
To avoid naming collisions adding PR number to USER_PREFIX: 'ci10t' -> 'ci10t-1075'
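That prefix adjustment is a simple string append; a hedged sketch of the logic, where PULL_NUMBER (the variable Prow exposes to presubmit jobs) is assumed to be the source of the PR number:

```bash
# Sketch: suffix the user prefix with the PR number so parallel PR runs
# don't collide on usernames. PULL_NUMBER is assumed to come from Prow.
USER_PREFIX="ci10t"
if [ -n "${PULL_NUMBER:-}" ]; then
    USER_PREFIX="${USER_PREFIX}-${PULL_NUMBER}"   # e.g. ci10t -> ci10t-1075
fi
```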
Collecting load test results
Collecting Application timestamps...
Collecting ComponentDetectionQuery timestamps...
Collecting Component timestamps...
Collecting PipelineRun timestamps...
Collecting application service log segments per user app...
Defaulted container "manager" out of: manager, kube-rbac-proxy
Error summary:
WARNING: File /logs/artifacts/load-tests.log not found!
Setting up tool to collect monitoring data...
Requirement already satisfied: pip in ./venv/lib/python3.9/site-packages (21.2.3)
Collecting pip
Downloading pip-24.0-py3-none-any.whl (2.1 MB)
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 21.2.3
Uninstalling pip-21.2.3:
Successfully uninstalled pip-21.2.3
Successfully installed pip-24.0
Obtaining opl-rhcloud-perf-team-core from git+https://github.com/redhat-performance/opl.git#egg=opl-rhcloud-perf-team-core&subdirectory=core
Cloning https://github.com/redhat-performance/opl.git to ./venv/src/opl-rhcloud-perf-team-core
Running command git clone --filter=blob:none --quiet https://github.com/redhat-performance/opl.git /tmp/tmp.zF97XHDnAc/venv/src/opl-rhcloud-perf-team-core
Resolved https://github.com/redhat-performance/opl.git to commit df1b3555a43c37e15c86441081fe07286df7a499
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Checking if build backend supports build_editable: started
Checking if build backend supports build_editable: finished with status 'done'
Getting requirements to build editable: started
Getting requirements to build editable: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing editable metadata (pyproject.toml): started
Preparing editable metadata (pyproject.toml): finished with status 'done'
Collecting Jinja2>=3.0 (from opl-rhcloud-perf-team-core)
Downloading Jinja2-3.1.3-py3-none-any.whl.metadata (3.3 kB)
Collecting boto3 (from opl-rhcloud-perf-team-core)
Downloading boto3-1.34.77-py3-none-any.whl.metadata (6.6 kB)
Collecting junitparser (from opl-rhcloud-perf-team-core)
Downloading junitparser-3.1.2-py2.py3-none-any.whl.metadata (9.0 kB)
Collecting PyYAML (from opl-rhcloud-perf-team-core)
Downloading PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.1 kB)
Collecting requests (from opl-rhcloud-perf-team-core)
Downloading requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting tabulate (from opl-rhcloud-perf-team-core)
Downloading tabulate-0.9.0-py3-none-any.whl.metadata (34 kB)
Collecting deepdiff (from opl-rhcloud-perf-team-core)
Downloading deepdiff-6.7.1-py3-none-any.whl.metadata (6.1 kB)
Collecting tenacity (from opl-rhcloud-perf-team-core)
Downloading tenacity-8.2.3-py3-none-any.whl.metadata (1.0 kB)
Collecting MarkupSafe>=2.0 (from Jinja2>=3.0->opl-rhcloud-perf-team-core)
Downloading MarkupSafe-2.1.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Collecting botocore<1.35.0,>=1.34.77 (from boto3->opl-rhcloud-perf-team-core)
Downloading botocore-1.34.77-py3-none-any.whl.metadata (5.7 kB)
Collecting jmespath<2.0.0,>=0.7.1 (from boto3->opl-rhcloud-perf-team-core)
Downloading jmespath-1.0.1-py3-none-any.whl.metadata (7.6 kB)
Collecting s3transfer<0.11.0,>=0.10.0 (from boto3->opl-rhcloud-perf-team-core)
Downloading s3transfer-0.10.1-py3-none-any.whl.metadata (1.7 kB)
Collecting ordered-set<4.2.0,>=4.0.2 (from deepdiff->opl-rhcloud-perf-team-core)
Downloading ordered_set-4.1.0-py3-none-any.whl.metadata (5.3 kB)
Collecting charset-normalizer<4,>=2 (from requests->opl-rhcloud-perf-team-core)
Downloading charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (33 kB)
Collecting idna<4,>=2.5 (from requests->opl-rhcloud-perf-team-core)
Downloading idna-3.6-py3-none-any.whl.metadata (9.9 kB)
Downloading urllib3-2.2.1-py3-none-any.whl.metadata (6.4 kB)
Collecting certifi>=2017.4.17 (from requests->opl-rhcloud-perf-team-core)
Downloading certifi-2024.2.2-py3-none-any.whl.metadata (2.2 kB)
Collecting python-dateutil<3.0.0,>=2.1 (from botocore<1.35.0,>=1.34.77->boto3->opl-rhcloud-perf-team-core)
Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting urllib3<3,>=1.21.1 (from requests->opl-rhcloud-perf-team-core)
Downloading urllib3-1.26.18-py2.py3-none-any.whl.metadata (48 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 48.9/48.9 kB 7.8 MB/s eta 0:00:00
Collecting six>=1.5 (from python-dateutil<3.0.0,>=2.1->botocore<1.35.0,>=1.34.77->boto3->opl-rhcloud-perf-team-core)
Downloading six-1.16.0-py2.py3-none-any.whl.metadata (1.8 kB)
Downloading Jinja2-3.1.3-py3-none-any.whl (133 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.2/133.2 kB 14.3 MB/s eta 0:00:00
Downloading boto3-1.34.77-py3-none-any.whl (139 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 139.3/139.3 kB 23.1 MB/s eta 0:00:00
Downloading deepdiff-6.7.1-py3-none-any.whl (76 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 76.6/76.6 kB 11.9 MB/s eta 0:00:00
Downloading junitparser-3.1.2-py2.py3-none-any.whl (13 kB)
Downloading PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (738 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 738.9/738.9 kB 56.4 MB/s eta 0:00:00
Downloading requests-2.31.0-py3-none-any.whl (62 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.6/62.6 kB 9.3 MB/s eta 0:00:00
Downloading tabulate-0.9.0-py3-none-any.whl (35 kB)
Downloading tenacity-8.2.3-py3-none-any.whl (24 kB)
Downloading botocore-1.34.77-py3-none-any.whl (12.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.1/12.1 MB 105.5 MB/s eta 0:00:00
Downloading certifi-2024.2.2-py3-none-any.whl (163 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 163.8/163.8 kB 21.2 MB/s eta 0:00:00
Downloading charset_normalizer-3.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (142 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 142.3/142.3 kB 21.0 MB/s eta 0:00:00
Downloading idna-3.6-py3-none-any.whl (61 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.6/61.6 kB 9.5 MB/s eta 0:00:00
Downloading jmespath-1.0.1-py3-none-any.whl (20 kB)
Downloading MarkupSafe-2.1.5-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Downloading ordered_set-4.1.0-py3-none-any.whl (7.6 kB)
Downloading s3transfer-0.10.1-py3-none-any.whl (82 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 82.2/82.2 kB 14.4 MB/s eta 0:00:00
Downloading urllib3-1.26.18-py2.py3-none-any.whl (143 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 143.8/143.8 kB 21.6 MB/s eta 0:00:00
Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 229.9/229.9 kB 26.6 MB/s eta 0:00:00
Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
Building wheels for collected packages: opl-rhcloud-perf-team-core
Building editable for opl-rhcloud-perf-team-core (pyproject.toml): started
Building editable for opl-rhcloud-perf-team-core (pyproject.toml): finished with status 'done'
Created wheel for opl-rhcloud-perf-team-core: filename=opl_rhcloud_perf_team_core-0.0.1-0.editable-py3-none-any.whl size=4726 sha256=39396be58ba921eeb215ec5dc85ee3904b80975c3ce3ab6a6207242d75b39a14
Stored in directory: /tmp/pip-ephem-wheel-cache-evkpkt3k/wheels/51/e9/9e/954ec787afcc853cfa60dd19a6a7ae241ca5689d1d11249ddf
Successfully built opl-rhcloud-perf-team-core
Installing collected packages: junitparser, urllib3, tenacity, tabulate, six, PyYAML, ordered-set, MarkupSafe, jmespath, idna, charset-normalizer, certifi, requests, python-dateutil, Jinja2, deepdiff, botocore, s3transfer, boto3, opl-rhcloud-perf-team-core
Successfully installed Jinja2-3.1.3 MarkupSafe-2.1.5 PyYAML-6.0.1 boto3-1.34.77 botocore-1.34.77 certifi-2024.2.2 charset-normalizer-3.3.2 deepdiff-6.7.1 idna-3.6 jmespath-1.0.1 junitparser-3.1.2 opl-rhcloud-perf-team-core-0.0.1 ordered-set-4.1.0 python-dateutil-2.9.0.post0 requests-2.31.0 s3transfer-0.10.1 six-1.16.0 tabulate-0.9.0 tenacity-8.2.3 urllib3-1.26.18
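The log above is a standard pip editable VCS install; a sketch of the invocation that produces output like this (the URL and subdirectory come from the log; the venv setup is assumed):

```bash
# Assumed reproduction of the monitoring-tool setup logged above.
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -e "git+https://github.com/redhat-performance/opl.git#egg=opl-rhcloud-perf-team-core&subdirectory=core"
```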
Collecting monitoring data...
WARNING: File /logs/artifacts/load-tests.json not found!
Collecting node specs
Collecting pod distribution over nodes
Installing Tekton Artifact Performance Analysis (tapa)
Cloning into './tapa.git'...
/tmp/tmp.zF97XHDnAc/tapa.git /tmp/tmp.zF97XHDnAc /tmp/tmp.zF97XHDnAc
go: downloading github.com/spf13/cobra v1.7.0
go: downloading k8s.io/api v0.25.9
go: downloading k8s.io/client-go v0.25.9
go: downloading github.com/tektoncd/pipeline v0.47.3
go: downloading knative.dev/pkg v0.0.0-20230221145627-8efb3485adcf
go: downloading github.com/spf13/pflag v1.0.5
go: downloading k8s.io/apimachinery v0.26.4
go: downloading github.com/google/go-cmp v0.5.9
go: downloading github.com/google/gofuzz v1.2.0
go: downloading sigs.k8s.io/yaml v1.3.0
go: downloading sigs.k8s.io/structured-merge-diff/v4 v4.2.3
go: downloading k8s.io/klog/v2 v2.90.1
go: downloading github.com/google/go-containerregistry v0.14.0
go: downloading golang.org/x/exp v0.0.0-20230307190834-24139beb5833
go: downloading k8s.io/utils v0.0.0-20230209194617-a36077c30491
go: downloading github.com/davecgh/go-spew v1.1.1
go: downloading github.com/emicklei/go-restful/v3 v3.9.0
go: downloading github.com/go-openapi/jsonreference v0.20.1
go: downloading github.com/go-openapi/swag v0.22.3
go: downloading github.com/google/gnostic v0.6.9
go: downloading github.com/stretchr/testify v1.8.2
go: downloading golang.org/x/tools v0.7.0
go: downloading google.golang.org/protobuf v1.30.0
go: downloading github.com/go-logr/logr v1.2.3
go: downloading go.uber.org/zap v1.24.0
go: downloading golang.org/x/net v0.9.0
go: downloading contrib.go.opencensus.io/exporter/prometheus v0.4.0
go: downloading go.uber.org/atomic v1.10.0
go: downloading google.golang.org/grpc v1.54.0
go: downloading github.com/onsi/ginkgo/v2 v2.4.0
go: downloading github.com/onsi/gomega v1.23.0
go: downloading github.com/go-openapi/jsonpointer v0.19.6
INFO[2024-04-04T14:20:13Z] Step load-test-ci-10u-10t-redhat-appstudio-load-test failed after 13m44s.
INFO[2024-04-04T14:20:13Z] Step phase test failed after 13m44s.
INFO[2024-04-04T14:20:13Z] Running multi-stage phase post
INFO[2024-04-04T14:20:13Z] Running step load-test-ci-10u-10t-redhat-appstudio-gather.
INFO[2024-04-04T14:24:45Z] Step load-test-ci-10u-10t-redhat-appstudio-gather succeeded after 4m31s.
INFO[2024-04-04T14:24:45Z] Running step load-test-ci-10u-10t-gather-aws-console.
INFO[2024-04-04T14:25:51Z] Step load-test-ci-10u-10t-gather-aws-console succeeded after 1m6s.
INFO[2024-04-04T14:25:51Z] Running step load-test-ci-10u-10t-gather-must-gather.
INFO[2024-04-04T14:28:00Z] Step load-test-ci-10u-10t-gather-must-gather succeeded after 2m8s.
INFO[2024-04-04T14:28:00Z] Running step load-test-ci-10u-10t-gather-extra.
INFO[2024-04-04T14:32:25Z] Step load-test-ci-10u-10t-gather-extra succeeded after 4m25s.
INFO[2024-04-04T14:32:25Z] Running step load-test-ci-10u-10t-gather-audit-logs.
INFO[2024-04-04T14:33:36Z] Step load-test-ci-10u-10t-gather-audit-logs succeeded after 1m10s.
INFO[2024-04-04T14:33:36Z] Running step load-test-ci-10u-10t-ipi-deprovision-deprovision.
INFO[2024-04-04T14:37:56Z] Step load-test-ci-10u-10t-ipi-deprovision-deprovision succeeded after 4m20s.
INFO[2024-04-04T14:37:56Z] Step phase post succeeded after 17m43s.
INFO[2024-04-04T14:37:56Z] Releasing leases for test load-test-ci-10u-10t
INFO[2024-04-04T14:37:57Z] Ran for 1h13m22s
ERRO[2024-04-04T14:37:57Z] Some steps failed:
ERRO[2024-04-04T14:37:57Z]
- could not run steps: step load-test-ci-10u-10t failed: "load-test-ci-10u-10t" test steps failed: "load-test-ci-10u-10t" pod "load-test-ci-10u-10t-redhat-appstudio-load-test" failed: could not watch pod: the pod ci-op-6l8r6zsw/load-test-ci-10u-10t-redhat-appstudio-load-test failed after 11m36s (failed containers: test): ContainerFailed one or more containers exited
go: downloading github.com/pmezard/go-difflib v1.0.0
go: downloading github.com/evanphx/json-patch/v5 v5.6.0
go: downloading gomodules.xyz/jsonpatch/v2 v2.2.0
go: downloading golang.org/x/sync v0.1.0
go: downloading go.uber.org/multierr v1.8.0
go: downloading go.uber.org/goleak v1.2.0
go: downloading google.golang.org/api v0.116.0
go: downloading github.com/prometheus/client_golang v1.13.0
go: downloading github.com/prometheus/statsd_exporter v0.21.0
go: downloading golang.org/x/time v0.3.0
go: downloading github.com/grpc-ecosystem/grpc-gateway/v2 v2.11.3
go: downloading github.com/google/uuid v1.3.0
go: downloading github.com/hashicorp/golang-lru v0.5.4
go: downloading github.com/kr/pretty v0.2.1
go: downloading github.com/evanphx/json-patch v4.12.0+incompatible
go: downloading github.com/benbjohnson/clock v1.1.0
go: downloading github.com/go-kit/log v0.2.0
go: downloading github.com/prometheus/client_model v0.3.0
go: downloading github.com/prometheus/common v0.37.0
go: downloading github.com/prometheus/procfs v0.8.0
go: downloading golang.org/x/sys v0.7.0
go: downloading google.golang.org/genproto v0.0.0-20230331144136-dcfb400f0633
go: downloading golang.org/x/term v0.7.0
go: downloading golang.org/x/oauth2 v0.7.0
go: downloading golang.org/x/text v0.9.0
go: downloading github.com/go-logfmt/logfmt v0.5.1
go: downloading github.com/matttproud/golang_protobuf_extensions v1.0.4
go: downloading github.com/imdario/mergo v0.3.13
go: downloading github.com/golang/glog v1.0.0
go: downloading golang.org/x/mod v0.9.0
go: downloading google.golang.org/appengine v1.6.7
/tmp/tmp.zF97XHDnAc /tmp/tmp.zF97XHDnAc
Running Tekton Artifact Performance Analysis
/tmp/tmp.zF97XHDnAc
{"component":"entrypoint","error":"wrapped process failed: exit status 2","file":"k8s.io/test-infra/prow/entrypoint/run.go:84","func":"k8s.io/test-infra/prow/entrypoint.Options.internalRun","level":"error","msg":"Error executing test process","severity":"error","time":"2024-04-04T14:20:12Z"}
error: failed to execute wrapped command: exit status 2
Link to step on registry info site: https://steps.ci.openshift.org/reference/redhat-appstudio-load-test
Link to job on registry info site: https://steps.ci.openshift.org/job?org=redhat-appstudio&repo=e2e-tests&branch=main&test=load-test-ci-10u-10t
INFO[2024-04-04T14:37:57Z] Reporting job state 'failed' with reason 'executing_graph:step_failed:utilizing_lease:executing_test:executing_multi_stage_test'
Test name | Commit | Details | Required | Rerun command
---|---|---|---|---
ci/prow/load-test-ci-10u-10t-go | 0ab48ae | link | true | /test load-test-ci-10u-10t-go
ci/prow/load-test-ci-10u-10t-java | 995aa83 | link | true | /test load-test-ci-10u-10t-java
ci/prow/load-test-ci-10u-10t | 9469767 | link | true | /test load-test-ci-10u-10t
ci/prow/images | 98cce9a | link | true | /test images
ci/prow/redhat-appstudio-e2e | 98cce9a | link | true | /test redhat-appstudio-e2e
Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Hello all! Closing this one, as my #1092 contains the commit from this PR.
Description
Randomized applicationName, itsName, ComponentDetectionQueryName, and componentName so that multiple applications, components, and other resources can coexist in the same namespace if needed (see the illustrative sketch below).
https://issues.redhat.com/browse/KONFLUX-1007 - Consider testing max concurrency one namespace can handle
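The change itself lives in the Go load-test code; purely as an illustration of the naming scheme (all names below are hypothetical), each run derives a short random suffix and appends it to every entity name so concurrent runs in one namespace cannot collide:

```bash
# Hypothetical illustration only — the real implementation is Go, not shell.
suffix=$(LC_ALL=C tr -dc 'a-z0-9' </dev/urandom | head -c 5)  # short, DNS-1123-safe
applicationName="load-test-app-${suffix}"
itsName="load-test-its-${suffix}"
componentDetectionQueryName="load-test-cdq-${suffix}"
componentName="load-test-component-${suffix}"
```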
Issue ticket number and link
https://issues.redhat.com/browse/KONFLUX-2084
Type of change
How Has This Been Tested?
Please describe the tests that you ran to verify your changes, and provide instructions so we can reproduce them. Please also list any relevant details of your test configuration.
Checklist: