MongoDB Replica Sets: Principles, Setup, and Usage

December 16, 2016

Concepts:

Once MongoDB is installed (see the installation guide referenced below), you can follow the walkthrough and tests in this article. A MongoDB replica set (Replica Set) is a master-slave cluster with automatic failover, consisting of one Primary node and one or more Secondary nodes — conceptually similar to a MySQL high-availability master-slave architecture. See the official documentation for a fuller introduction to replica sets.

Data synchronization within a replica set: the Primary node accepts writes; a Secondary reads the Primary's oplog to learn what to replicate, copies the data, and then writes the replication record into its own oplog. If an operation fails, the secondary stops replicating from the current sync source. If a secondary goes down for some reason, then after restarting it automatically resumes syncing from the last operation recorded in its oplog, writing each replicated record into its own oplog as it completes. Because replication copies the data first and writes the oplog afterwards, the same operation can occasionally be synced twice — but MongoDB was designed with this in mind: applying the same oplog operation multiple times has the same effect as applying it once. In short:

After the Primary completes a write, a Secondary performs the following steps to keep the data in sync:
1: Check the oplog.rs collection in its own local database and find the most recent timestamp.
2: Query the Primary's local oplog.rs collection for records newer than that timestamp.
3: Insert the found records into its own oplog.rs collection and apply those operations.
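The three steps above can be sketched in plain Python (an illustration with in-memory lists of `(timestamp, operation)` pairs, not MongoDB source):

```python
# Minimal sketch of the secondary's sync loop: find our latest oplog
# timestamp, pull newer entries from the primary, apply and record them.

def sync_once(primary_oplog, secondary_oplog, apply_op):
    """Pull ops from the primary newer than our latest oplog timestamp."""
    # Step 1: most recent timestamp in our own oplog.rs.
    last_ts = secondary_oplog[-1][0] if secondary_oplog else 0
    # Step 2: entries in the primary's oplog.rs after that timestamp.
    newer = [(ts, op) for ts, op in primary_oplog if ts > last_ts]
    # Step 3: apply each op and record it in our own oplog.rs.
    for ts, op in newer:
        apply_op(op)
        secondary_oplog.append((ts, op))
    return len(newer)

# Usage: replay the two pending inserts onto a lagging secondary.
data = []
primary = [(1, {"insert": "a"}), (2, {"insert": "b"}), (3, {"insert": "c"})]
secondary = [(1, {"insert": "a"})]          # already has the first op
applied = sync_once(primary, secondary, lambda op: data.append(op["insert"]))
print(applied)           # 2
print(secondary[-1][0])  # 3
```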

Replica set synchronization, like classic master-slave replication, is asynchronous; the difference is that a replica set adds automatic failover. The mechanism: the secondary fetches the log from the primary and replays the logged operations on itself, strictly in order (read queries are not logged). This log is the oplog.rs collection in the local database. By default on 64-bit machines this collection is fairly large — about 5% of free disk space. Its size can be set with a startup parameter, e.g. --oplogSize 1000, in MB.
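The idempotence claim above can be illustrated with a toy sketch (plain Python, not MongoDB code): an oplog entry that records the resulting value can be replayed safely, while a relative update cannot.

```python
# Illustrative sketch: oplog-style idempotent operations. The oplog records
# absolute outcomes (set the field to a value) rather than relative changes
# (increment the field), so replaying an entry twice is harmless.

def apply_set(doc, op):
    """Apply a '$set'-style oplog entry: write absolute field values."""
    doc.update(op["set"])
    return doc

doc = {"age": 122}
op = {"set": {"age": 123}}       # the oplog records the final value...
apply_set(doc, op)
apply_set(doc, op)               # ...so a duplicate replay changes nothing
print(doc)                       # {'age': 123}

# Contrast with a relative update, which is NOT idempotent:
doc2 = {"age": 122}
doc2["age"] += 1
doc2["age"] += 1                 # replaying "+1" twice gives the wrong result
print(doc2)                      # {'age': 124}
```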

Note: in a replica set, if every Secondary goes down and only the Primary remains, the Primary steps down to Secondary (it can no longer see a majority) and stops serving writes.

I: Environment Setup
1: Prepare the servers

192.168.11.212  arbiter    ARBITER
192.168.11.213  replica    SECONDARY
192.168.11.217  primary    PRIMARY
192.168.11.218  replica    SECONDARY
192.168.11.219  replica    SECONDARY

The member states (as shown by stateStr in rs.status()) are:

STARTUP: just joined the replica set; configuration not yet loaded
STARTUP2: configuration loaded; initializing
RECOVERING: recovering; not available for reads
ARBITER: an arbiter
DOWN: the node is unreachable
UNKNOWN: state unknown because no status has been received from the node; usually seen in two-member architectures (split brain)
REMOVED: removed from the replica set
ROLLBACK: rolling back data; moves to RECOVERING or SECONDARY when the rollback finishes
FATAL: an error occurred; grep "replSet FATAL" in the log to find the cause, then resync the node
PRIMARY: the primary node
SECONDARY: a secondary (backup) node
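For reading the raw rs.status() output later in this article, the numeric "state" field maps to the names above. A small lookup table (codes as commonly documented for MongoDB 3.x; treat the exact numbers as an assumption):

```python
# Lookup table for the numeric "state" field in rs.status() output,
# matching the member states listed above (MongoDB 3.x-era codes).
REPLICA_STATES = {
    0: "STARTUP",
    1: "PRIMARY",
    2: "SECONDARY",
    3: "RECOVERING",
    5: "STARTUP2",
    6: "UNKNOWN",
    7: "ARBITER",
    8: "DOWN",
    9: "ROLLBACK",
    10: "REMOVED",
}

def state_name(code):
    """Translate a numeric member state into its stateStr name."""
    return REPLICA_STATES.get(code, "UNKNOWN")

# Usage: interpret "state" : 1 and "state" : 7 from rs.status() output.
print(state_name(1))  # PRIMARY
print(state_name(7))  # ARBITER
```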

2: Installation

How to Install MongoDB 3.2 on Ubuntu 16.04, 14.04, 12.04 and Debian 8/7

3: Edit the configuration; you only need to enable the replSet parameter. A sample test configuration is listed below:

# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
#processManagement:
#    fork: true
#    pidFilePath: /var/run/mongodb/27017.pid

storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 192.168.11.217

replication:
  oplogSizeMB: 1024
  replSetName: "mongo_zzz"
  secondaryIndexPrefetch: "all"

4: Start the service

service mongod start

5: Initialize the replica set

Log in to MongoDB on any one machine and run the following. Because this is a brand-new replica set, it can be initialized from any node. If exactly one node already holds data, you must run the initialization on that node; if several nodes hold data, the set cannot be initialized.
Here I chose 192.168.11.217.

Primary> cfg = {"_id" : "[replication_set_name]", "members" : [{"_id" : 0,"host" : "[Primary_Host_IP]:27017"}]}
Primary> rs.initiate(cfg);

Add a failover (data-bearing) member to the replica set:

Primary> rs.add("[Failover_Host_IP]:27017")

Add an arbiter to the replica set:

Primary> rs.addArb("[Arbiter_Host_IP]:27017")

Check the replica set status:

Primary>rs.status()

root@baron-VirtualBox:~$ mongo
MongoDB shell version: 3.2.10
connecting to: test
### Initialize with several members at once
> rs.initiate({"_id":"mongo_zzz","members":[
... {"_id":1,
... "host":"192.168.11.213:27017",
... "priority":1
... },
... {"_id":2,
... "host":"192.168.11.218:27017",
... "priority":1
... }
... ]})

Or add members one at a time.
Initialize:
rs.initiate()

rs.add("hostIP:Port");

Then add:
rs.add("192.168.11.213:27017")
rs.add("192.168.11.218:27017")

Since I recorded this after the setup was finished, the printed output is fairly long.

######
"_id": the name of the replica set
"members": the list of servers in the replica set
"_id": the server's unique ID
"host": the server host
"priority": the priority; defaults to 1. A priority of 0 makes the member passive: it can never become the active (primary) node. Among members with non-zero priority, higher priorities are preferred when electing the active node.
"arbiterOnly": an arbiter node; it only votes, holds no data, and can never become the active node.

> rs.status()
{
"set" : "mongo_zzz",
"date" : ISODate("2016-12-13T06:22:34.477Z"),
"myState" : 2,
"term" : NumberLong(2),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 1,
"name" : "192.168.11.213:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1606,
"optime" : {
"ts" : Timestamp(1481601291, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2016-12-13T03:54:51Z"),
"lastHeartbeat" : ISODate("2016-12-13T06:22:30.344Z"),
"lastHeartbeatRecv" : ISODate("2016-12-13T06:22:33.332Z"),
"pingMs" : NumberLong(1),
"syncingTo" : "192.168.11.217:27017",
"configVersion" : 13
},
{
"_id" : 2,
"name" : "192.168.11.212:27017",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 1606,
"lastHeartbeat" : ISODate("2016-12-13T06:22:30.131Z"),
"lastHeartbeatRecv" : ISODate("2016-12-13T06:22:31.374Z"),
"pingMs" : NumberLong(2),
"configVersion" : 13
},
{
"_id" : 3,
"name" : "192.168.11.217:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 1606,
"optime" : {
"ts" : Timestamp(1481601291, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2016-12-13T03:54:51Z"),
"lastHeartbeat" : ISODate("2016-12-13T06:22:30.337Z"),
"lastHeartbeatRecv" : ISODate("2016-12-13T06:22:33.749Z"),
"pingMs" : NumberLong(2),
"electionTime" : Timestamp(1481595073, 1),
"electionDate" : ISODate("2016-12-13T02:11:13Z"),
"configVersion" : 13
},
{
"_id" : 4,
"name" : "192.168.11.218:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1606,
"optime" : {
"ts" : Timestamp(1481601291, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2016-12-13T03:54:51Z"),
"lastHeartbeat" : ISODate("2016-12-13T06:22:30.306Z"),
"lastHeartbeatRecv" : ISODate("2016-12-13T06:22:33.433Z"),
"pingMs" : NumberLong(1),
"syncingTo" : "192.168.11.217:27017",
"configVersion" : 13
}
],
"ok" : 1
}

6: Logs

Check the log on node 252:

Tue Feb 18 12:03:29.334 [rsMgr] replSet PRIMARY
…………
…………
Tue Feb 18 12:03:40.341 [rsHealthPoll] replSet member 192.168.11.213:27017 is now in state SECONDARY

At this point, the replica set has been set up successfully.

The replica set above has only two of the servers; how do we add another? Besides adding members at initialization time, how else can nodes be added or removed later?

II: Maintenance Operations

1: Adding and removing nodes

Add the 212 server to the replica set:

rs.add("192.168.11.212:27017")

The output looks similar to the following:
mongo_zzz:PRIMARY> rs.add("192.168.11.212:27017")
{ "ok" : 1 }
mongo_zzz:PRIMARY> rs.status()

{
"set" : "mongo_zzz",
"date" : ISODate("2014-02-18T04:53:00Z"),
"myState" : 1,
"members" : [
{
"_id" : 1,
"name" : "192.168.200.252:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 3023,
"optime" : Timestamp(1392699177, 1),
"optimeDate" : ISODate("2014-02-18T04:52:57Z"),
"self" : true
},
{
"_id" : 2,
"name" : "192.168.200.245:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2982,
"optime" : Timestamp(1392699177, 1),
"optimeDate" : ISODate("2014-02-18T04:52:57Z"),
"lastHeartbeat" : ISODate("2014-02-18T04:52:59Z"),
"lastHeartbeatRecv" : ISODate("2014-02-18T04:53:00Z"),
"pingMs" : 0,
"syncingTo" : "192.168.200.252:27017"
},
{
"_id" : 3,
"name" : "192.168.200.25:27017",
"health" : 1,
"state" : 6,
"stateStr" : "UNKNOWN",             # becomes SECONDARY after a moment
"uptime" : 3,
"optime" : Timestamp(0, 0),
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2014-02-18T04:52:59Z"),
"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
"pingMs" : 0,
"lastHeartbeatMessage" : "still initializing"
}
],
"ok" : 1
}

Remove the 212 server from the replica set:

rs.remove("192.168.11.212:27017")

mongo_zzz:PRIMARY> rs.remove("192.168.11.212:27017")
Tue Feb 18 13:01:09.298 DBClientCursor::init call() failed
Tue Feb 18 13:01:09.299 Error: error doing query: failed at src/mongo/shell/query.js:78
Tue Feb 18 13:01:09.300 trying reconnect to 192.168.11.212:27017
Tue Feb 18 13:01:09.301 reconnect 192.168.11.212:27017 ok
mongo_zzz:PRIMARY> rs.status()
{
"set" : "mongo_zzz",
"date" : ISODate("2014-02-18T05:01:19Z"),
"myState" : 1,
"members" : [
{
"_id" : 1,
"name" : "192.168.11.217:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 3522,
"optime" : Timestamp(1392699669, 1),
"optimeDate" : ISODate("2014-02-18T05:01:09Z"),
"self" : true
},
{
"_id" : 2,
"name" : "192.168.11.218:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 10,
"optime" : Timestamp(1392699669, 1),
"optimeDate" : ISODate("2014-02-18T05:01:09Z"),
"lastHeartbeat" : ISODate("2014-02-18T05:01:19Z"),
"lastHeartbeatRecv" : ISODate("2014-02-18T05:01:18Z"),
"pingMs" : 0,
"lastHeartbeatMessage" : "syncing to: 192.168.200.252:27017",
"syncingTo" : "192.168.200.252:27017"
}
],
"ok" : 1
}

The 192.168.200.25 node has been removed.

2: Checking replication status

db.printSlaveReplicationInfo()

mongo_zzz:PRIMARY> db.printSlaveReplicationInfo()
source: 192.168.11.219:27017
syncedTo: Tue Dec 13 2016 11:54:51 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
source: 192.168.11.213:27017
syncedTo: Tue Dec 13 2016 11:54:51 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
source: 192.168.11.218:27017
syncedTo: Tue Dec 13 2016 11:54:51 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
source: 192.168.11.74:27018
syncedTo: Tue Dec 13 2016 11:54:51 GMT+0800 (CST)
0 secs (0 hrs) behind the primary
source: 192.168.11.74:27017
syncedTo: Tue Dec 13 2016 11:54:51 GMT+0800 (CST)
0 secs (0 hrs) behind the primary

source: the secondary's IP and port.

syncedTo: the current sync state and the time of the last sync.

As the output shows, nothing is synced while the database content is unchanged; as soon as the database changes, it syncs immediately.
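As a rough sketch (not the actual server implementation), the "secs behind the primary" figure can be thought of as the difference between each member's syncedTo time and the newest optime in the set:

```python
# Rough sketch of how "N secs behind the primary" can be derived: compare
# each secondary's last-synced optime with the newest optime in the set.
from datetime import datetime

def lag_report(members):
    """members: {source: syncedTo datetime}. Returns {source: seconds behind}."""
    newest = max(members.values())
    return {src: int((newest - ts).total_seconds()) for src, ts in members.items()}

# Usage: two members fully caught up, one (hypothetical) member 30 s behind.
members = {
    "192.168.11.213:27017": datetime(2016, 12, 13, 11, 54, 51),
    "192.168.11.218:27017": datetime(2016, 12, 13, 11, 54, 51),
    "192.168.11.219:27017": datetime(2016, 12, 13, 11, 54, 21),
}
print(lag_report(members)["192.168.11.219:27017"])  # 30
```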

3: Checking the replica set status

rs.status()

mongo_zzz:PRIMARY> rs.status()
{
"set" : "mongo_zzz",
"date" : ISODate("2014-02-18T05:01:19Z"),
"myState" : 1,
"members" : [
{
"_id" : 1,
"name" : "192.168.11.217:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 3522,
"optime" : Timestamp(1392699669, 1),
"optimeDate" : ISODate("2014-02-18T05:01:09Z"),
"self" : true
},
......
......

4: Replica set configuration

rs.conf()/rs.config()

mongo_zzz:PRIMARY> rs.conf()
{
"_id" : "mongo_zzz",
"version" : 13,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "192.168.11.219:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "192.168.11.213:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "192.168.11.212:27017",
"arbiterOnly" : true,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 3,
"host" : "192.168.11.217:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 5,
"host" : "192.168.11.218:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 6,
"host" : "192.168.11.74:27018",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 7,
"host" : "192.168.11.74:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"getLastErrorModes" : {

},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("584e6e8d1cd5eddcc6a2ccb4")
}
}

5: Working with Secondaries

By default a Secondary serves no requests — it can be neither read from nor written to. Reads fail with:
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }

If you need to read from it in special cases, run:
rs.slaveOk() — this takes effect only for the current connection.

mongo_zzz:SECONDARY> db.test.find()
error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
mongo_zzz:SECONDARY> rs.slaveOk()
mongo_zzz:SECONDARY> db.test.find()
{ "_id" : ObjectId("5302edfa8c9151a5013b978e"), "a" : 1 }
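The per-connection behavior above can be modeled with a toy class (illustrative only, not a MongoDB driver):

```python
# Toy model of the behavior above: a secondary connection refuses reads
# until slaveOk() has been called on that particular connection.
class Connection:
    def __init__(self, is_primary):
        self.is_primary = is_primary
        self.slave_ok = False      # per-connection flag, off by default

    def slaveOk(self):
        self.slave_ok = True       # affects only this connection

    def find(self):
        if not self.is_primary and not self.slave_ok:
            raise RuntimeError("not master and slaveOk=false")
        return [{"_id": 1, "a": 1}]

conn = Connection(is_primary=False)
try:
    conn.find()
except RuntimeError as e:
    print(e)                # not master and slaveOk=false
conn.slaveOk()
print(conn.find())          # [{'_id': 1, 'a': 1}]
```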

III: Testing

1: Test replica set data replication

Insert data on the Primary (192.168.11.217:27017):

mongo_zzz:PRIMARY> use baron
mongo_zzz:PRIMARY> for(var i=0;i<10000;i++){db.baron.insert({"name":"test"+i,"age":123})}
mongo_zzz:PRIMARY> db.baron.count()
10001

Check on a Secondary whether the data has been synced:

mongo_zzz:SECONDARY> rs.slaveOk()
mongo_zzz:SECONDARY> db.baron.count()
10001

The data has been synced.

2: Test replica set failover

Shut down the Primary node and check the state of the other nodes:
mongo_zzz:PRIMARY>rs.status()
... output omitted ...

# Shut down the primary
mongo_zzz:PRIMARY> use admin
switched to db admin
mongo_zzz:PRIMARY> db.shutdownServer()

# Connect to any other node:
mongo_zzz:SECONDARY> rs.status()
... output omitted ...

Notice that the former primary's stateStr has become not reachable/healthy:

"stateStr" : "(not reachable/healthy)",

One of the SECONDARY nodes has been promoted to PRIMARY.

Insert data on the new primary:

mongo_zzz:PRIMARY> for(var i=0;i<10000;i++){db.baron.insert({"name":"test"+i,"age":123})}
mongo_zzz:PRIMARY> db.baron.count()
20001

Restart the previously stopped 192.168.11.217:27017:

service mongod restart

mongo_zzz:SECONDARY> rs.status()
... output omitted ...

After the former primary starts, it rejoins as a SECONDARY. Check whether the data inserted on the new primary has been synced to it:

mongo_zzz:SECONDARY> db.test.count()
Tue Feb 18 13:47:03.634 count failed: { "note" : "from execCommand", "ok" : 0, "errmsg" : "not master" } at src/mongo/shell/query.js:180
mongo_zzz:SECONDARY> rs.slaveOk()
mongo_zzz:SECONDARY> db.baron.count()
20001

It has been synced.

If all Secondaries go down, or the replica set is reduced to a single node, the remaining node can only be a Secondary — the cluster can then serve reads but not writes. When the other nodes recover, the former primary becomes primary again.

After a crashed node restarts, there is a period (its length depends on the cluster's data volume and the downtime) during which every node in the cluster may be a secondary and no writes are possible (reads may also fail if the application has not set an appropriate ReadPreference).

The officially recommended minimum replica set is one primary and two secondaries. A two-node replica set has no real failover capability.

IV: Applications

1: Manually switching the Primary to a given node
Priority was covered above. Since every member defaults to priority 1, you only need to raise the chosen server's priority above the rest. The commands below raise the priority of cfg.members[1] (192.168.11.213) so that it becomes the primary:
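How priority steers the election can be sketched as follows (a simplification for illustration: the real protocol also weighs optimes, votes, and reachability):

```python
# Simplified illustration of priority-based election: among healthy,
# data-bearing members with priority > 0, the highest priority is preferred.

def preferred_primary(members):
    """Return the host of the member that priority rules favor as primary."""
    electable = [m for m in members
                 if m["health"] == 1 and not m["arbiterOnly"] and m["priority"] > 0]
    return max(electable, key=lambda m: m["priority"])["host"]

# Usage: after raising 213's priority to 5, it wins over the priority-1
# members, and the arbiter is never considered.
members = [
    {"host": "192.168.11.213:27017", "priority": 5, "health": 1, "arbiterOnly": False},
    {"host": "192.168.11.217:27017", "priority": 1, "health": 1, "arbiterOnly": False},
    {"host": "192.168.11.212:27017", "priority": 1, "health": 1, "arbiterOnly": True},
]
print(preferred_primary(members))  # 192.168.11.213:27017
```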

mongo_zzz:PRIMARY> rs.conf()
... output omitted (identical to the rs.conf() output shown earlier) ...

mongo_zzz:PRIMARY> rs.status()
{
"set" : "mongo_zzz",
"date" : ISODate("2016-12-13T07:16:58.418Z"),
"myState" : 1,
"term" : NumberLong(2),
"heartbeatIntervalMillis" : NumberLong(2000),
"members" : [
{
"_id" : 0,
"name" : "192.168.11.219:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 18254,
"optime" : {
"ts" : Timestamp(1481601291, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2016-12-13T03:54:51Z"),
"lastHeartbeat" : ISODate("2016-12-13T07:16:56.602Z"),
"lastHeartbeatRecv" : ISODate("2016-12-13T07:16:57.928Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.11.217:27017",
"configVersion" : 13
},
{
"_id" : 1,
"name" : "192.168.11.213:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 76682,
"optime" : {
"ts" : Timestamp(1481601291, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2016-12-13T03:54:51Z"),
"lastHeartbeat" : ISODate("2016-12-13T07:16:56.517Z"),
"lastHeartbeatRecv" : ISODate("2016-12-13T07:16:58.256Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.11.217:27017",
"configVersion" : 13
},
{
"_id" : 2,
"name" : "192.168.11.212:27017",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 76682,
"lastHeartbeat" : ISODate("2016-12-13T07:16:56.714Z"),
"lastHeartbeatRecv" : ISODate("2016-12-13T07:16:57.597Z"),
"pingMs" : NumberLong(0),
"configVersion" : 13
},
{
"_id" : 3,
"name" : "192.168.11.217:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 76700,
"optime" : {
"ts" : Timestamp(1481601291, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2016-12-13T03:54:51Z"),
"electionTime" : Timestamp(1481595073, 1),
"electionDate" : ISODate("2016-12-13T02:11:13Z"),
"configVersion" : 13,
"self" : true
},
{
"_id" : 5,
"name" : "192.168.11.218:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 12631,
"optime" : {
"ts" : Timestamp(1481601291, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2016-12-13T03:54:51Z"),
"lastHeartbeat" : ISODate("2016-12-13T07:16:56.602Z"),
"lastHeartbeatRecv" : ISODate("2016-12-13T07:16:58.097Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.11.217:27017",
"configVersion" : 13
},
{
"_id" : 6,
"name" : "192.168.11.74:27018",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 4724,
"optime" : {
"ts" : Timestamp(1481601291, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2016-12-13T03:54:51Z"),
"lastHeartbeat" : ISODate("2016-12-13T07:16:57.699Z"),
"lastHeartbeatRecv" : ISODate("2016-12-13T07:16:57.178Z"),
"pingMs" : NumberLong(1),
"configVersion" : 13
},
{
"_id" : 7,
"name" : "192.168.11.74:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 12127,
"optime" : {
"ts" : Timestamp(1481601291, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2016-12-13T03:54:51Z"),
"lastHeartbeat" : ISODate("2016-12-13T07:16:57.687Z"),
"lastHeartbeatRecv" : ISODate("2016-12-13T07:16:54.121Z"),
"pingMs" : NumberLong(1),
"configVersion" : 13
}
],
"ok" : 1
}

mongo_zzz:PRIMARY> cfg=rs.conf()
... output omitted (the shell echoes the same configuration document again) ...

mongo_zzz:PRIMARY> cfg.members[1].priority=5  # raise the priority
5
mongo_zzz:PRIMARY> rs.reconfig(cfg) # reload the configuration; this forces the replica set to hold an election, and the highest-priority member becomes Primary. While the election is in progress, every node in the set is a secondary.

2: Adding an arbiter node
An arbiter (Arbiter) is a MongoDB instance in the replica set that stores no data. Arbiters use minimal resources and need no dedicated hardware, but an arbiter should not be deployed on the same host as a data-bearing member; it can run on an application server, a monitoring host, or a separate VM. To ensure the replica set has an odd number of voting members (including the primary), add an arbiter to vote; otherwise, when the primary fails, a new primary may not be elected automatically.

Since only one spare node is available here, we can test the replica set first, then remove that node and re-add it as the arbiter.

Remove the 212 node, restart it, and add it back as an arbiter:

Remove the node:
rs.remove("192.168.200.25:27017")
Add the arbiter:
rs.addArb("192.168.11.212:27017")

{
"_id" : 2,
"name" : "192.168.11.212:27017",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 76682,
"lastHeartbeat" : ISODate("2016-12-13T07:16:56.714Z"),
"lastHeartbeatRecv" : ISODate("2016-12-13T07:16:57.597Z"),
"pingMs" : NumberLong(0),
"configVersion" : 13
}

mongo_zzz:SECONDARY> rs.conf()
{
"_id" : "mongo_zzz",
"version" : 15,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "192.168.11.219:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "192.168.11.213:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 5,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 2,
"host" : "192.168.11.212:27017",
"arbiterOnly" : true,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 3,
"host" : "192.168.11.217:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
......

"arbiterOnly" : true,

The above shows that server 212 has become an arbiter. A replica set requires an odd number of voting members. When, due to machine constraints, the real environment only has two (or an even number of) data-bearing nodes, introduce the arbiter role to achieve Automatic Failover: an arbiter only votes, holds no actual data, and provides no service, so its physical resource requirements are modest.

Practical testing shows that once 50% or more of the replica set's nodes (arbiters included) are unavailable, the remaining nodes can only be secondaries and the whole cluster becomes read-only. For example, with 1 primary, 2 secondaries, and 1 arbiter: if the two secondaries go down, the former primary is demoted to secondary. With 1 primary, 1 secondary, and 1 arbiter: even if the primary goes down, the remaining secondary is automatically elected primary. Because an arbiter replicates no data, it provides the effect of two-node hot standby with minimal extra machine cost.
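The vote arithmetic above reduces to a strict-majority check (an illustrative sketch):

```python
# The 50% rule above: a primary can exist only while a strict majority of
# voting members (arbiters included) is reachable.

def can_have_primary(voters_up, voters_total):
    """True if the reachable voters form a strict majority."""
    return voters_up > voters_total / 2

# 1 primary + 2 secondaries + 1 arbiter, both secondaries down: 2 of 4 votes.
print(can_have_primary(2, 4))   # False -> primary steps down, read-only
# 1 primary + 1 secondary + 1 arbiter, primary down: 2 of 3 votes.
print(can_have_primary(2, 3))   # True  -> the secondary is elected primary
```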

3: Adding a hidden (backup) node

hidden (a member for special purposes): a hidden member is invisible to client reads and writes and will never be elected Primary, but it can still vote. It is generally used for data backups.

Add the hidden node:

mongo_zzz:PRIMARY> rs.remove("192.168.11.219:27017")
mongo_zzz:PRIMARY> rs.add({"host":"192.168.11.219:27017","priority":0,"hidden":true})

mongo_zzz:PRIMARY> rs.conf()

{
"_id" : 8,
"host" : "192.168.11.219:27017",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : true,
"priority" : 0,
"tags" : {

},
"slaveDelay" : NumberLong(0),
"votes" : 1
}

4: Adding a delayed node

Delayed (a member for special purposes): it replicates data from the primary with a configurable time delay. It is mainly used to recover from accidental deletes that would otherwise be replicated to the secondaries immediately.

Remove the 25 node, restart it, then re-add it as a Delayed node:

mongo_zzz:PRIMARY> rs.add({"host":"192.168.11.219:27017","priority":0,"hidden":true,"slaveDelay":60})  # syntax; slaveDelay is in seconds
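A delayed member can be thought of as filtering the oplog by entry age (an illustrative sketch, not the server implementation; slaveDelay is in seconds):

```python
# Sketch of how a delayed member holds back replication: it only applies
# oplog entries whose timestamp is at least slaveDelay seconds old.

def ops_ready_to_apply(oplog, now, slave_delay):
    """oplog: list of (unix_ts, op). Return ops old enough to apply."""
    return [op for ts, op in oplog if ts <= now - slave_delay]

# At time 160 with slaveDelay=60, only entries from time <= 100 are applied,
# so the accidental "drop collection" has not yet reached this member.
oplog = [(100, "insert a"), (150, "drop collection"), (158, "insert b")]
print(ops_ready_to_apply(oplog, now=160, slave_delay=60))  # ['insert a']
```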
