Cadence cluster networking configuration #4356
longquanzheng
started this conversation in General
-
Hello, to continue the discussion:
/paser-staging-cadence-frontend/~$: nslookup paser-staging-cadence-frontend
Server: 10.152.183.10
Address: 10.152.183.10:53
** server can't find paser-staging-cadence-frontend.cluster.local: NXDOMAIN
** server can't find paser-staging-cadence-frontend.svc.cluster.local: NXDOMAIN
** server can't find paser-staging-cadence-frontend.cluster.local: NXDOMAIN
*** Can't find paser-staging-cadence-frontend.paser-staging.svc.cluster.local: No answer
** server can't find paser-staging-cadence-frontend.svc.cluster.local: NXDOMAIN
Name: paser-staging-cadence-frontend.paser-staging.svc.cluster.local
Address: 10.152.183.33
The domain name resolves properly, and domain description also works:
/paser-staging-cadence-frontend/~$: cadence d desc
Name: paser
UUID: dcc71fdb-44eb-41f7-940d-96df1c1e59d3
Description:
OwnerEmail:
DomainData: map[]
Status: REGISTERED
RetentionInDays: 365
EmitMetrics: false
ActiveClusterName: active
Clusters: active
HistoryArchivalStatus: DISABLED
VisibilityArchivalStatus: DISABLED
Bad binaries to reset:
+-----------------+----------+------------+--------+
| BINARY CHECKSUM | OPERATOR | START TIME | REASON |
+-----------------+----------+------------+--------+
+-----------------+----------+------------+--------+
Other services also work properly. The problem occurs only when I try to update the domain (other operations have not been tested). When we created the cluster around two years ago, we set the retention to 365 days, but now the database is too big, and our team has decided to decrease the retention to 30 days as the maximum value for new clusters.
-
NOTE: Please leave your questions below so we can improve this topic.
This uses the 0.21.0 configuration as an example, but other versions should be the same.
Single server cluster without replication
Ringpop configuration
Ringpop connects all the nodes within the same cluster.
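A minimal sketch of the ringpop section, assuming a DNS-based bootstrap; the service names, ports, and timeout below are placeholders that should match your own deployment:

```yaml
ringpop:
  name: cadence                    # must be identical on every node of the cluster
  bootstrapMode: dns               # "hosts" with explicit IP:port pairs also works
  bootstrapHosts:
    - cadence-frontend.example.svc.cluster.local:7933
    - cadence-history.example.svc.cluster.local:7934
    - cadence-matching.example.svc.cluster.local:7935
    - cadence-worker.example.svc.cluster.local:7939
  maxJoinDuration: 30s
```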
To verify the DNS, you should be able to use the nslookup command to see all the IPs behind it.
publicClient configuration
This is for the server worker service to connect to the frontend.
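A sketch of the publicClient section, assuming the frontend is reachable behind a DNS name; the host and port are placeholders:

```yaml
publicClient:
  # address the server worker service uses to call the frontend
  hostPort: "cadence-frontend.example.svc.cluster.local:7933"
```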
To verify the DNS, you should be able to use the nslookup command to see all the IPs behind it. See the related discussion.
clusterMetadata.clusterInformation.<currentClusterName>.rpcAddress configuration
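This entry should point to the frontend of that cluster; it is the address used to reach the cluster, for example for replication traffic between clusters. A sketch for a single cluster named active, assuming the frontend listens on port 7933:

```yaml
clusterMetadata:
  enableGlobalDomain: false
  failoverVersionIncrement: 10
  masterClusterName: "active"
  currentClusterName: "active"
  clusterInformation:
    active:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "cadence-frontend"
      # must be an address at which the frontend is actually reachable
      rpcAddress: "cadence-frontend.example.svc.cluster.local:7933"
```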
Service.RPC configuration
service.rpc configures each service to listen on a certain IP and port.
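A sketch for the frontend service, assuming the default port layout; the bind address and port are deployment-specific, and each service (frontend, history, matching, worker) has its own rpc block:

```yaml
services:
  frontend:
    rpc:
      port: 7933
      bindOnIP: 0.0.0.0        # or bindOnLocalHost: true for local development
```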
Multiple server clusters with an XDC (global domain) replication group
XDC (global domain) is a replication group of multiple Cadence clusters.
Building on the "Single server cluster without replication" setup, clusterMetadata needs to be updated so that the clusters can connect to each other.
Suppose we have two clusters, clusterA and clusterB.
Config of clusterA should be:
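A sketch of what clusterA's clusterMetadata could look like, assuming the two clusters are named clusterA and clusterB; addresses, ports, and failover versions are placeholders:

```yaml
clusterMetadata:
  enableGlobalDomain: true
  failoverVersionIncrement: 10
  masterClusterName: "clusterA"
  currentClusterName: "clusterA"
  clusterInformation:
    clusterA:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "cadence-frontend"
      rpcAddress: "clusterA-frontend.example.com:7933"
    clusterB:
      enabled: true
      initialFailoverVersion: 2
      rpcName: "cadence-frontend"
      rpcAddress: "clusterB-frontend.example.com:7933"
```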
Config of clusterB should be:
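And a matching sketch for clusterB, again with placeholder values:

```yaml
clusterMetadata:
  enableGlobalDomain: true
  failoverVersionIncrement: 10
  masterClusterName: "clusterA"
  currentClusterName: "clusterB"   # the only field that changes relative to clusterA in this sketch
  clusterInformation:
    clusterA:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "cadence-frontend"
      rpcAddress: "clusterA-frontend.example.com:7933"
    clusterB:
      enabled: true
      initialFailoverVersion: 2
      rpcName: "cadence-frontend"
      rpcAddress: "clusterB-frontend.example.com:7933"
```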
Some rules of the config: clusterInformation, masterClusterName, and failoverVersionIncrement should be identical on both clusters, while currentClusterName is set to each cluster's own name. Each cluster's initialFailoverVersion must be unique within the group and smaller than failoverVersionIncrement. enableGlobalDomain must be true on both clusters, and the rpcAddress of each cluster must be reachable from the other one so that replication traffic can flow.
Client worker
The client worker is the worker that runs your workflow and activity code.
Related discussions
#4143
#3150
#3054
banzaicloud/banzai-charts#1278