So, without thinking, I spun up a dev Mongo with the same replSet name (but in a different Kubernetes namespace). This, correctly (but unfortunately), started a real replica of my 'production' data.
I have successfully removed the dev instance from the replSet in Mongo itself (as per [MongoDB's docs](https://docs.mongodb.com/manual/tutorial/remove-replica-set-member/)); however, I still see the following in the mongo logs:
```
2018-06-07T01:31:56.509+0000 I NETWORK [conn159306] received client metadata from 127.0.0.1:37850 conn159306: { driver: { name: "nodejs", version: "2.2.35" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "3.10.0-693.21.1.el7.x86_64" }, platform: "Node.js v9.8.0, LE, mongodb-core: 2.1.19" }
2018-06-07T01:31:56.513+0000 I REPL [conn159306] replSetReconfig admin command received from client; new config: { _id: "rs0", version: 198323, protocolVersion: 1, members: [ { _id: 0, host: "mongo-0.mongo.cryoem-logbook.svc.cluster.local:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "mongo-0.mongo.cryoem-logbook-dev.svc.cluster.local:27017" } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5b0faa2b5af708909540cb1d') } }
2018-06-07T01:31:56.516+0000 I REPL [conn159306] replSetReconfig config object with 2 members parses ok
2018-06-07T01:31:56.517+0000 W REPL [replexec-19] Our set name did not match that of mongo-0.mongo.cryoem-logbook-dev.svc.cluster.local:27017
2018-06-07T01:31:56.517+0000 E REPL [conn159306] replSetReconfig failed; NewReplicaSetConfigurationIncompatible: Our set name did not match that of mongo-0.mongo.cryoem-logbook-dev.svc.cluster.local:27017
```
and the logs from the sidecar look like this:
```
Addresses to add: [ 'mongo-0.mongo.cryoem-logbook-dev.svc.cluster.local:27017' ]
Addresses to remove: []
replSetReconfig { _id: 'rs0',
  version: 198322,
  protocolVersion: 1,
  members:
   [ { _id: 0,
       host: 'mongo-0.mongo.cryoem-logbook.svc.cluster.local:27017',
       arbiterOnly: false,
       buildIndexes: true,
       hidden: false,
       priority: 1,
       tags: {},
       slaveDelay: 0,
       votes: 1 },
     { _id: 1,
       host: 'mongo-0.mongo.cryoem-logbook-dev.svc.cluster.local:27017' } ],
  settings:
   { chainingAllowed: true,
     heartbeatIntervalMillis: 2000,
     heartbeatTimeoutSecs: 10,
     electionTimeoutMillis: 10000,
     catchUpTimeoutMillis: -1,
     catchUpTakeoverDelayMillis: 30000,
     getLastErrorModes: {},
     getLastErrorDefaults: { w: 1, wtimeout: 0 },
     replicaSetId: 5b0faa2b5af708909540cb1d } }
Error in workloop { MongoError: Our set name did not match that of mongo-0.mongo.cryoem-logbook-dev.svc.cluster.local:27017
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:333:22)
    at Socket.emit (events.js:180:13)
    at addChunk (_stream_readable.js:269:12)
    at readableAddChunk (_stream_readable.js:256:11)
    at Socket.Readable.push (_stream_readable.js:213:10)
    at TCP.onread (net.js:578:20)
  name: 'MongoError',
  message: 'Our set name did not match that of mongo-0.mongo.cryoem-logbook-dev.svc.cluster.local:27017',
  ok: 0,
  errmsg: 'Our set name did not match that of mongo-0.mongo.cryoem-logbook-dev.svc.cluster.local:27017',
  code: 103,
  codeName: 'NewReplicaSetConfigurationIncompatible',
  operationTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1528335416 },
  '$clusterTime':
   { clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1528335416 },
     signature: { hash: [Binary], keyId: 0 } } }
```
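Reading the sidecar log above, the errors appear to come from its workloop: it keeps discovering the dev pod and issuing `replSetReconfig` to add it back, which mongod then rejects with the set-name mismatch. One possible way to stop this (a sketch, assuming the `cvallance/mongo-k8s-sidecar` image and its documented `MONGO_SIDECAR_POD_LABELS` setting; the label values below are hypothetical) is to give each environment distinct pod labels so each sidecar only discovers its own replica set members:

```yaml
# Hypothetical sidecar container spec; label values are illustrative.
- name: mongo-sidecar
  image: cvallance/mongo-k8s-sidecar
  env:
    # The sidecar only manages pods matching these labels, so the
    # prod and dev mongo pods must carry different label values.
    - name: MONGO_SIDECAR_POD_LABELS
      value: "role=mongo,environment=prod"
```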
How can I make my production Mongo forget about the `-dev` replica? Both `rs.conf()` and `rs.status()` show the correct config (without the dev instance).
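Since `rs.conf()` and `rs.status()` are already clean, the retries are presumably coming from the sidecar rather than from mongod's own config. A hedged sketch of a cleanup (hostnames taken from the logs above; the StatefulSet name and PVC label are assumptions about the dev deployment, so verify against your own cluster before running anything):

```shell
# In the mongo shell on the prod primary, confirm the dev member is gone,
# and remove it only if it still appears in the member list:
#   rs.conf().members
#   rs.remove("mongo-0.mongo.cryoem-logbook-dev.svc.cluster.local:27017")

# Then tear down the dev replica entirely so its sidecar stops trying to
# reconfigure the prod set, and delete its volumes so the copied
# production data does not linger (resource names are assumptions):
kubectl delete statefulset mongo -n cryoem-logbook-dev
kubectl delete pvc -l app=mongo -n cryoem-logbook-dev
```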