Hi everyone,
I have 5 MariaDB pods deployed in Kubernetes: pod-0, pod-1, and pod-2 are in DC-A, while pod-3 and pod-4 are in DC-B.
I've also deployed MaxScale in the same K8s cluster within DC-A. (Eventually, I plan to implement HA with two MaxScale pods in each data center)
My goal is to keep the MariaDB master node in DC-A whenever possible. Only if DC-A goes down should the master fail over to the pods in DC-B. Ideally, both reads and writes should happen from DC-A unless all pods in DC-A are unavailable.
I've tried setting rankings by assigning rank=primary to the pods in DC-A (pod-0 to pod-2) and rank=secondary to the pods in DC-B (pod-3 and pod-4). However, this isn't working as expected: when I manually fail the master over to DC-B (using the maxctrl command shown below), it doesn't fail back to DC-A, even though the ranking configuration should prioritize DC-A as the primary.
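To clarify what I mean by "manually fail over": I move the master with maxctrl's mariadbmon switchover command against the monitor defined in the config below, roughly like this (the MaxScale pod name is a placeholder; server4 is mariadb-sts-3 in DC-B):

```sh
# Promote a DC-B server (server4 = mariadb-sts-3) to master via the monitor:
kubectl exec -it <maxscale-pod> -- \
  maxctrl call command mariadbmon switchover MariaDB-Monitor server4

# After this, the master stays in DC-B; MaxScale never switches it back to a
# DC-A server on its own, even with rank configured on every server.
```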
Below is my configuration file. Could you please review it and suggest any modifications that would help me achieve the desired setup?
```
maxscale.cnf: |
  ########################
  ## Server list
  ########################

  [server1]
  type = server
  address = mariadb-sts-0.mariadb-service.kaizen.svc.cluster.local
  port = 3306
  protocol = MariaDBBackend
  rank = secondary

  [server2]
  type = server
  address = mariadb-sts-1.mariadb-service.kaizen.svc.cluster.local
  port = 3306
  protocol = MariaDBBackend
  rank = secondary

  [server3]
  type = server
  address = mariadb-sts-2.mariadb-service.kaizen.svc.cluster.local
  port = 3306
  protocol = MariaDBBackend
  rank = secondary

  [server4]
  type = server
  address = mariadb-sts-3.mariadb-service.kaizen.svc.cluster.local
  port = 3306
  protocol = MariaDBBackend
  rank = primary

  [server5]
  type = server
  address = mariadb-sts-4.mariadb-service.kaizen.svc.cluster.local
  port = 3306
  protocol = MariaDBBackend
  rank = primary

  #########################
  ## MaxScale configuration
  #########################

  [maxscale]
  threads = auto
  log_augmentation = 1
  ms_timestamp = 1
  syslog = 1
  admin_host = 0.0.0.0
  admin_secure_gui = false

  #########################
  # Monitor for the servers
  #########################

  [MariaDB-Monitor]
  type = monitor
  module = mariadbmon
  servers = server4,server3,server5,server1,server2
  user = root
  password = secret
  monitor_interval = 5s
  replication_user = root
  replication_password = secret
  auto_failover = true
  auto_rejoin = true
  enforce_read_only_slaves = 1
  backend_connect_timeout = 2s
  backend_write_timeout = 2s
  backend_read_timeout = 2s
  backend_connect_attempts = 1
  master_conditions = connected_slave,running_slave
  failcount = 2
  switchover_timeout = 20s
  failover_timeout = 20s

  #########################
  ## Service definitions for read/write splitting and read-only services.
  #########################

  [Read-Write-Service]
  type = service
  router = readwritesplit
  servers = server4,server3,server5,server1,server2
  user = root
  password = secret
  max_slave_connections = 100
  max_sescmd_history = 1500
  causal_reads = true
  causal_reads_timeout = 10s
  transaction_replay = true
  transaction_replay_max_size = 1Mi
  delayed_retry = true
  master_reconnection = true
  master_failure_mode = fail_on_write
  max_slave_replication_lag = 3s
  enable_root_user = true

  [Read-Only-Service]
  type = service
  router = readconnroute
  servers = server4,server3,server5,server1,server2
  router_options = slave
  user = root
  password = secret
  enable_root_user = true

  ##########################
  ## Listener definitions for the service
  ## Listeners represent the ports the service will listen on.
  ##########################

  [Read-Write-Listener]
  type = listener
  service = Read-Write-Service
  protocol = MariaDBClient
  port = 4008

  [Read-Only-Listener]
  type = listener
  service = Read-Only-Service
  protocol = MariaDBClient
  port = 4006
```
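For completeness, clients connect through the two listeners defined above, e.g. (the MaxScale Kubernetes service name is a placeholder):

```sh
# Read-write traffic goes through the readwritesplit listener on port 4008:
mysql -h <maxscale-service> -P 4008 -u root -p

# Read-only traffic goes through the readconnroute listener on port 4006:
mysql -h <maxscale-service> -P 4006 -u root -p
```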