During integration testing, I deploy a single-node redpanda cluster and have noticed that sometimes the deployment takes around 20 minutes, while it usually completes in 4-7 minutes.
After applying the cluster CRD manifest, I observed the following strange behavior in the helmrelease status:
$ kubectl -n redpanda get helmrelease -w
NAME AGE READY STATUS
neo4j-cdc 3m42s False Could not load chart: failed to parse digest '': invalid checksum digest format
neo4j-cdc 4m5s False Could not load chart: GET http://source-controller.flux-system.svc.cluster.local./helmchart/redpanda/redpanda-neo4j-cdc/redpanda-5.9.5.tgz giving up after 10 attempt(s): Get "http://source-controller.flux-system.svc.cluster.local./helmchart/redpanda/redpanda-neo4j-cdc/redpanda-5.9.5.tgz": dial tcp 10.43.82.103:80: connect: connection refused
neo4j-cdc 4m5s False Could not load chart: failed to parse digest '': invalid checksum digest format
neo4j-cdc 6m23s Unknown Running 'install' action with timeout of 1m0s
neo4j-cdc 6m23s Unknown Running 'install' action with timeout of 1m0s
neo4j-cdc 7m5s True Helm install succeeded for release redpanda/neo4j-cdc.v1 with chart [email protected]
neo4j-cdc 7m35s True Helm install succeeded for release redpanda/neo4j-cdc.v1 with chart [email protected]
....
neo4j-cdc 34m False Could not load chart: GET http://source-controller.flux-system.svc.cluster.local./helmchart/redpanda/redpanda-neo4j-cdc/redpanda-5.9.5.tgz giving up after 10 attempt(s): Get "http://source-controller.flux-system.svc.cluster.local./helmchart/redpanda/redpanda-neo4j-cdc/redpanda-5.9.5.tgz": dial tcp 10.43.82.103:80: connect: connection refused
neo4j-cdc 34m False Could not load chart: GET http://source-controller.flux-system.svc.cluster.local./helmchart/redpanda/redpanda-neo4j-cdc/redpanda-5.9.5.tgz giving up after 10 attempt(s): Get "http://source-controller.flux-system.svc.cluster.local./helmchart/redpanda/redpanda-neo4j-cdc/redpanda-5.9.5.tgz": dial tcp 10.43.82.103:80: connect: connection refused
neo4j-cdc 39m True Helm install succeeded for release redpanda/neo4j-cdc.v1 with chart [email protected]
neo4j-cdc 39m True Helm install succeeded for release redpanda/neo4j-cdc.v1 with chart [email protected]
neo4j-cdc 39m False failed to verify artifact: computed checksum 'efe3fd90bce319c79f480e13ef5ce5543cbda4850863e07c7773b363a4116c6c' doesn't match advertised ''
neo4j-cdc 43m False Could not load chart: GET http://source-controller.flux-system.svc.cluster.local./helmchart/redpanda/redpanda-neo4j-cdc/redpanda-5.9.5.tgz giving up after 10 attempt(s): Get "http://source-controller.flux-system.svc.cluster.local./helmchart/redpanda/redpanda-neo4j-cdc/redpanda-5.9.5.tgz": dial tcp 10.43.82.103:80: connect: connection refused
neo4j-cdc 48m True Helm install succeeded for release redpanda/neo4j-cdc.v1 with chart [email protected]
neo4j-cdc 48m True Helm install succeeded for release redpanda/neo4j-cdc.v1 with chart [email protected]
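(For anyone trying to reproduce this: the advertised digest and artifact URL can be cross-checked against what source-controller reports while the install is flapping. The HelmChart name and namespace below are inferred from the artifact URL path /helmchart/<namespace>/<name>/, so they may need adjusting.)
$ kubectl -n redpanda get helmchart redpanda-neo4j-cdc -o yaml      # status.artifact should hold the advertised URL and digest
$ kubectl -n flux-system logs deployment/source-controller --since=1h
$ flux get sources chart -A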
I tried debugging the issue, specifically the error:
Get "http://source-controller.flux-system.svc.cluster.local./helmchart/redpanda/redpanda-neo4j-cdc/redpanda-5.9.5.tgz": dial tcp 10.43.82.103:80: connect: connection refused (giving up after 10 attempt(s))
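To separate DNS resolution from the port itself, the same URL can be fetched from a throwaway pod inside the cluster; something like the following (the curl image is an arbitrary choice for the test, not part of the actual setup):
$ kubectl run tmp-curl --rm -it --restart=Never --image=curlimages/curl -- \
    curl -sv http://source-controller.flux-system.svc.cluster.local./helmchart/redpanda/redpanda-neo4j-cdc/redpanda-5.9.5.tgz -o /dev/null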
To my surprise, the source-controller service exists, but inside the Kubernetes pod, no process is listening on the specified port.
$ kubectl -n flux-system get svc source-controller -o jsonpath='{.spec.ports}'
[
  {
    "name": "http",
    "port": 80,
    "protocol": "TCP",
    "targetPort": "http"
  }
]
$ kubectl -n flux-system exec -ti deployment/source-controller -- netstat -nlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 :::8080                 :::*                    LISTEN      1/source-controller
tcp        0      0 :::9090                 :::*                    LISTEN      1/source-controller
tcp        0      0 :::9440                 :::*                    LISTEN      1/source-controller
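Note that the Service targets the named port "http" rather than 80 directly, so the question is which container port that name resolves to and whether the Endpoints object is populated. Something along these lines should show it (in a stock Flux install the container port named http is, as far as I know, 9090, but I haven't verified that for these chart versions):
$ kubectl -n flux-system get deployment source-controller -o jsonpath='{.spec.template.spec.containers[0].ports}'
$ kubectl -n flux-system get endpoints source-controller -o wide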
Also, I found strange errors in the Flux Helm deployment.
I don't fully understand where the issue is. Maybe it is related to Flux? I install it from the standard Helm chart without any custom parameters.
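For completeness, the Flux install is just the chart defaults; roughly the following, assuming the fluxcd-community flux2 chart (the repo URL and release name here are my reconstruction, not copied from the actual setup):
$ helm repo add fluxcd-community https://fluxcd-community.github.io/helm-charts
$ helm install flux2 fluxcd-community/flux2 --namespace flux-system --create-namespace --version 2.12.4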
Environments:
Node Configurations:
single-node k8s, 6 CPU, 16 GB RAM
single-node k8s, 14 CPU, 16 GB RAM
Kubernetes Versions:
k3s: 1.29.X, 1.30.X
Flux Chart Version: 2.12.4, 2.3.0
Redpanda-Operator Chart Version: 0.4.20, 0.4.21, 0.4.27 (with image tag v2.2.2-24.2.4)