Primary and Secondary Nodes
In MongoDB, the primary and secondary nodes form the core architecture of a replica set, which achieves high availability through data redundancy and automatic failover. The primary handles all write operations, while secondaries keep their data in sync through asynchronous replication and, if the primary fails, take part in electing a new one.
Roles and Functions of the Primary Node
The primary node is the only member in a replica set that accepts write operations. All client write requests are first processed by the primary node, which then records the operations in the oplog (operation log). For example, when executing the following insert operation:
// Connect to the primary node (the default)
const db = connect("mongodb://primary.example.com:27017/mydb");
db.products.insertOne({
  name: "MongoDB Book",
  price: 49.99,
  category: "Database"
});
The primary node will:
- Write the document to the storage engine (initially into the in-memory cache).
- Record the operation in the oplog (the `local.oplog.rs` collection).
- Return a write acknowledgment to the client.
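For illustration, a simplified model of the resulting oplog entry (the field names `ts`, `op`, `ns`, and `o` follow the real oplog format; the values here are made up):

```javascript
// Simplified model of the oplog entry produced by the insert above.
// Real entries carry additional fields (e.g. ui, wall, v); values are illustrative.
const oplogEntry = {
  ts: { t: 1577851200, i: 1 },  // BSON Timestamp: seconds since epoch + ordinal
  op: "i",                      // operation type: "i" insert, "u" update, "d" delete
  ns: "mydb.products",          // namespace the operation applies to
  o: { name: "MongoDB Book", price: 49.99, category: "Database" }
};
console.log(`${oplogEntry.op} on ${oplogEntry.ns}`);
```

Secondaries read entries like this and re-apply them, which is why every replicated operation is stored in the oplog in an idempotent form.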
The primary node is also responsible for:
- Serving all read requests by default (unless clients set a read preference).
- Maintaining replica set state information.
- Exchanging heartbeats with the other members (every 2 seconds by default).
Characteristics and Synchronization Mechanism of Secondary Nodes
Secondary nodes stay in sync by continuously tailing the oplog of their sync source (usually the primary; chained replication also allows syncing from another secondary). A typical synchronization process:
- Initial sync: a newly added node copies the full data set from its sync source.
- Continuous replication: the node tails the oplog and pulls new entries as they appear.
- Applying operations: oplog entries are replayed in timestamp order.
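The replay step can be sketched as a pure-JavaScript simulation, with a `Map` standing in for the storage engine (a teaching model, not MongoDB's actual implementation):

```javascript
// Replay oplog entries in timestamp order against an in-memory "collection".
// op codes mirror the oplog: "i" insert, "u" update (merge), "d" delete.
function applyOplog(store, oplog) {
  const ordered = [...oplog].sort((a, b) => a.ts - b.ts);
  for (const e of ordered) {
    if (e.op === "i") store.set(e.o._id, e.o);
    else if (e.op === "u") store.set(e.o._id, { ...store.get(e.o._id), ...e.o });
    else if (e.op === "d") store.delete(e.o._id);
  }
  return store;
}

const store = applyOplog(new Map(), [
  { ts: 1, op: "i", o: { _id: 1, name: "MongoDB Book" } },
  { ts: 2, op: "u", o: { _id: 1, price: 49.99 } },
]);
console.log(store.get(1)); // { _id: 1, name: 'MongoDB Book', price: 49.99 }
```

Because entries are applied strictly in timestamp order, a secondary always represents some (possibly slightly older) prefix of the primary's history.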
You can check replication delay using the following command:
rs.printReplicationInfo()
// Example output:
// configured oplog size: 19200MB
// log length start to end: 7434secs (2.07hrs)
// oplog first event time: Thu Jan 01 2020 12:00:00 GMT+0800
// oplog last event time: Thu Jan 01 2020 14:03:54 GMT+0800
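The "log length start to end" figure is simply the last oplog event time minus the first; a quick sketch of that arithmetic (timestamps chosen to reproduce a 7434-second window):

```javascript
// Oplog window = time of newest entry − time of oldest entry.
// Once a secondary falls behind by more than this window, it can no longer
// catch up from the oplog and needs a full resync.
const first = new Date("2020-01-01T12:00:00+08:00");
const last  = new Date("2020-01-01T14:03:54+08:00");
const windowSecs = (last - first) / 1000;
console.log(`${windowSecs}secs (~${windowSecs / 3600} hrs)`);
```

This is why the oplog window matters as a monitoring metric: it bounds how long a secondary may be offline and still recover incrementally.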
Secondary nodes provide the following key capabilities:
- Data redundancy: Maintain a complete copy of the data.
- Read scaling: Distribute query load by setting read preferences.
- Disaster recovery: Serve as a backup source.
Failover and Election Process
When the primary node becomes unreachable (default 10-second timeout), secondary nodes initiate an election. Election rules include:
- Priority: members with a higher `priority` value in the replica set configuration are preferred.
- Data freshness: among eligible candidates, the member with the most recent oplog entry wins.
- Voting rights: only members configured with `votes: 1` take part in the vote.
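The rules above can be sketched as a simple candidate ranking (a simulation of the selection criteria only; the real election protocol is Raft-based and also involves terms, quorums, and vote requests):

```javascript
// Rank electable members: must be voting and electable (priority > 0),
// then prefer higher priority, then the freshest oplog (larger optime wins).
function pickCandidate(members) {
  return members
    .filter(m => m.votes === 1 && m.priority > 0)
    .sort((a, b) => b.priority - a.priority || b.optime - a.optime)[0];
}

const winner = pickCandidate([
  { name: "mongo1:27017", votes: 1, priority: 2, optime: 100 },
  { name: "mongo2:27017", votes: 1, priority: 2, optime: 105 },
  { name: "mongo3:27017", votes: 1, priority: 0, optime: 110 }, // priority 0: never primary
]);
console.log(winner.name); // mongo2:27017 — same priority as mongo1, fresher oplog
```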
Example of an election process:
// Check node status
rs.status().members.forEach(m => {
console.log(`${m.name}: ${m.stateStr}, optime: ${m.optime.ts}`);
});
// Manually trigger an election (e.g. in a test environment)
rs.stepDown(60)  // the primary steps down and stays ineligible for 60 seconds
Read-Write Separation Practices
Query distribution can be achieved by setting read preferences:
// Read from a secondary node (may return slightly stale data)
const conn = new Mongo(
  "mongodb://replica.example.com:27017/?replicaSet=myRepl&readPreference=secondary"
);
// Prefer the primary, but fall back to a secondary if the primary is unavailable
const conn2 = new Mongo(
  "mongodb://replica.example.com:27017/?replicaSet=myRepl&readPreference=primaryPreferred"
);
Important considerations:
- Secondary reads may return stale data (replication lag is typically under 1 second).
- Aggregation operations may require `$readPreference` hints.
- Multi-document transactions must be routed to the primary node.
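Server selection under a read preference can be sketched roughly like this (a simplified model of what drivers do; real drivers also weigh latency windows, tag sets, and `maxStalenessSeconds`):

```javascript
// Pick a target server for a read based on the read preference mode.
function selectServer(topology, mode) {
  const primary = topology.find(s => s.role === "primary");
  const secondaries = topology.filter(s => s.role === "secondary");
  switch (mode) {
    case "primary":            return primary;
    case "secondary":          return secondaries[0];
    case "primaryPreferred":   return primary ?? secondaries[0];
    case "secondaryPreferred": return secondaries[0] ?? primary;
    default:                   return primary;
  }
}

const topology = [
  { host: "mongo1:27017", role: "secondary" },
  { host: "mongo2:27017", role: "primary" },
];
console.log(selectServer(topology, "primaryPreferred").host); // mongo2:27017
```

The fallback modes are what make `primaryPreferred` useful during failover: reads keep working while no primary exists, at the cost of possibly stale results.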
Configuration Examples and Monitoring
Example configuration for a typical three-node replica set:
// rs.initiate() configuration
{
_id: "myRepl",
members: [
{ _id: 0, host: "mongo1:27017", priority: 3 },
{ _id: 1, host: "mongo2:27017", priority: 2 },
{ _id: 2, host: "mongo3:27017", priority: 1, arbiterOnly: true }
],
settings: {
heartbeatIntervalMillis: 2000,
electionTimeoutMillis: 10000
}
}
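A point worth noting about this layout: elections (and `w: "majority"`) require a strict majority of the voting members, and the arbiter exists purely to keep that majority reachable when one data-bearing node is down. The arithmetic:

```javascript
// Majority of voting members needed to elect a primary
// (also the member count behind w: "majority").
function majority(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}
console.log(majority(3)); // 2 -> a 3-member set survives the loss of any one member
console.log(majority(2)); // 2 -> a 2-member set cannot tolerate any failure,
                          //      which is why an arbiter is added as a third vote
```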
Key monitoring metrics:
- Replication delay: `rs.printSecondaryReplicationInfo()` (formerly `rs.printSlaveReplicationInfo()`).
- Member status: `rs.status()`.
- Oplog window: `db.getReplicationInfo()`.
Special Node Types
In addition to standard secondary nodes, there are:
Hidden Nodes:
{ _id: 3, host: "mongo4:27017", priority: 0, hidden: true }
Purpose: Dedicated backup or reporting nodes, invisible to clients.
Delayed Nodes:
{ _id: 4, host: "mongo5:27017", priority: 0, slaveDelay: 3600 }
Purpose: Protect against human errors by providing a snapshot of data from 1 hour ago.
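Note that `slaveDelay` is the legacy field name; MongoDB 5.0 renamed the option to `secondaryDelaySecs` with the same semantics, so on newer servers the delayed member above would be configured as:

```javascript
// MongoDB 5.0+ spelling of the delayed-member option (same semantics as slaveDelay)
{ _id: 4, host: "mongo5:27017", priority: 0, secondaryDelaySecs: 3600 }
```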
Arbiter Nodes:
{ _id: 5, host: "mongo6:27017", arbiterOnly: true }
Purpose: Participate only in voting and do not store data.
Data Consistency Considerations
MongoDB provides multiple write concern levels:
// w: 2 — wait for the primary plus one secondary to acknowledge
db.orders.insertOne(
{ item: "laptop", qty: 1 },
{ writeConcern: { w: 2 } }
);
// Timeout setting
db.products.updateOne(
{ sku: "XYZ123" },
{ $inc: { stock: -1 } },
{ writeConcern: { w: "majority", wtimeout: 5000 } }
);
Consistency trade-offs:
- `w: 1`: only the primary acknowledges (the implicit default before MongoDB 5.0).
- `w: "majority"`: the write is durable on a majority of members (the default since 5.0).
- `wtimeout`: bounds how long the client blocks waiting for acknowledgments.
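The acknowledgment logic behind `w` and `wtimeout` can be sketched as counting acks against a timer (a toy model; real drivers track replication through each member's reported optime rather than explicit ack messages):

```javascript
// Resolve once `w` members have acknowledged the write, or reject on wtimeout.
function awaitWriteConcern(ackPromises, w, wtimeoutMs) {
  return new Promise((resolve, reject) => {
    let acks = 0;
    const timer = setTimeout(
      () => reject(new Error("waiting for replication timed out")),
      wtimeoutMs
    );
    for (const p of ackPromises) {
      p.then(() => {
        acks += 1;
        if (acks >= w) { clearTimeout(timer); resolve(acks); }
      });
    }
  });
}

// Two members ack quickly, one is slow: w:2 succeeds well before wtimeout.
const acks = [10, 20, 500].map(ms => new Promise(r => setTimeout(r, ms)));
awaitWriteConcern(acks, 2, 5000)
  .then(n => console.log(`acknowledged by ${n} members`)); // acknowledged by 2 members
```

This is the trade-off in miniature: a larger `w` waits on more (possibly slow) members, and `wtimeout` converts an indefinite wait into an explicit error the client can handle.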
Performance Optimization Strategies
Optimization recommendations for primary-secondary architecture:
- Oplog size adjustment:
// Specify oplog size at startup (MB)
mongod --replSet myRepl --oplogSize 20480
- Indexing strategy:
- Extra analytics indexes for a single member can be built via a rolling maintenance procedure (the member is briefly run outside the set).
- Routing reporting queries to hidden members avoids impacting the primary's performance.
- Network optimization:
- Tune heartbeat sensitivity (`settings.heartbeatTimeoutSecs`).
- Compress replication traffic (`--networkMessageCompressors`, e.g. snappy or zstd).
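Since MongoDB 3.6 the oplog can also be resized at runtime, so the startup-time `--oplogSize` flag above is no longer the only option:

```javascript
// Resize this member's oplog to 20 GB at runtime (MongoDB 3.6+, WiredTiger only).
// Run against each member whose oplog should grow; size is in MB.
db.adminCommand({ replSetResizeOplog: 1, size: 20480 })
```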
Troubleshooting Common Issues
Common problem resolution patterns:
High Replication Delay:
- Check the secondary's load (`db.currentOp()`).
- Verify network latency (`ping`/`traceroute`).
- Assess whether the oplog size is sufficient.
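Per-member lag can also be computed directly from `rs.status()` by comparing each secondary's optime with the primary's. A sketch over a status-shaped object (the structure mirrors `rs.status().members`; the values are made up):

```javascript
// Lag per secondary = primary optime − member optime, in seconds.
function replicationLag(members) {
  const primary = members.find(m => m.stateStr === "PRIMARY");
  return members
    .filter(m => m.stateStr === "SECONDARY")
    .map(m => ({ name: m.name, lagSecs: primary.optime.ts.t - m.optime.ts.t }));
}

const lag = replicationLag([
  { name: "mongo1:27017", stateStr: "PRIMARY",   optime: { ts: { t: 1700000120 } } },
  { name: "mongo2:27017", stateStr: "SECONDARY", optime: { ts: { t: 1700000119 } } },
  { name: "mongo3:27017", stateStr: "SECONDARY", optime: { ts: { t: 1700000060 } } },
]);
console.log(lag); // mongo2 lags 1s (normal); mongo3 lags 60s -> investigate
```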
Frequent Elections:
// Raise the election timeout (applied with rs.reconfig(); no restart required)
cfg = rs.conf()
cfg.settings.electionTimeoutMillis = 20000
rs.reconfig(cfg)
Primary Node Lag:
- Check if the write concern level is too high.
- Analyze lock contention (`db.serverStatus().locks`).
- Verify storage engine performance.