
Data redundancy and high availability design

Author: Chuan Chen · Reads: 41884 · Category: MongoDB

Basic Concepts of Data Redundancy

Data redundancy refers to storing the same data in multiple locations within a database system. In MongoDB, redundancy improves availability and read scalability, but it can also introduce data consistency issues. Designing a sensible redundancy strategy is key to building a highly available system.

MongoDB achieves data redundancy through replica sets. Each replica set consists of multiple data nodes, one of which is the primary node (Primary), and the rest are secondary nodes (Secondaries). When the primary node becomes unavailable, the system automatically elects a new primary node.

// Example of connecting to a MongoDB replica set
const { MongoClient } = require('mongodb');
const uri = "mongodb://node1.example.com:27017,node2.example.com:27017,node3.example.com:27017/?replicaSet=myReplicaSet";
const client = new MongoClient(uri);

How Replica Sets Work

MongoDB replica sets use an asynchronous replication mechanism. The primary node receives all write operations and then propagates the operation records (Oplog) to secondary nodes. This design ensures high availability but also introduces eventual consistency characteristics.

A typical replica set configuration includes:

  • 1 primary node (responsible for all write operations)
  • 2 or more secondary nodes (providing read scaling and failover)
  • Optional arbiter nodes (do not store data, only participate in elections)
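
As a minimal sketch, such a replica set can be created with the replSetInitiate admin command. This is typically run once, connected directly to a single member before the set exists; the hostnames below are the hypothetical ones from the connection string above.

// Hypothetical sketch: initiating a three-member replica set (run once, against a single member)
const seedClient = new MongoClient("mongodb://node1.example.com:27017/?directConnection=true");
await seedClient.connect();
await seedClient.db('admin').command({
  replSetInitiate: {
    _id: "myReplicaSet",
    members: [
      { _id: 0, host: "node1.example.com:27017" },
      { _id: 1, host: "node2.example.com:27017" },
      { _id: 2, host: "node3.example.com:27017" }
    ]
  }
});
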
// Checking replica set status
const adminDb = client.db('admin');
const status = await adminDb.command({ replSetGetStatus: 1 });
console.log(status.members.map(m => `${m.name} (${m.stateStr})`));

Read/Write Concerns and Data Consistency

MongoDB provides flexible read/write concern levels, allowing developers to balance performance and data consistency.

Examples of write concern levels:

  • w: 1: Acknowledges the write on the primary only (the default before MongoDB 5.0; since 5.0 the implicit default is "majority")
  • w: "majority": Acknowledges the write on a majority of data-bearing nodes
  • w: 3: Acknowledges the write on 3 nodes
// Example of using write concern
await collection.insertOne(
  { name: "Important Data" },
  { writeConcern: { w: "majority", j: true } }
);

Examples of read concern levels:

  • "local": Reads data from the local node (may not be the latest)
  • "available": Similar to "local" but with special behavior in sharded clusters
  • "majority": Reads data confirmed to be written to a majority of nodes

Redundancy Design in Sharded Clusters

For large-scale datasets, MongoDB uses sharding for horizontal scaling. Each shard is itself a replica set, giving two levels of redundancy:

  1. Shard level: data is partitioned across multiple shards
  2. Replica level: each shard's replica set keeps multiple copies of that shard's data
// Example connection string for a sharded cluster
const shardedUri = "mongodb://mongos1.example.com:27017,mongos2.example.com:27017/";
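
Building on that connection, a rough sketch of sharding a collection through mongos might look like the following; the database name and hashed shard key are illustrative assumptions, not recommendations.

// Hypothetical example: enabling sharding and choosing a hashed shard key
const shardedClient = new MongoClient(shardedUri);
await shardedClient.connect();
const shardAdmin = shardedClient.db('admin');
await shardAdmin.command({ enableSharding: "test" });
await shardAdmin.command({
  shardCollection: "test.users",
  key: { userId: "hashed" } // a hashed key spreads writes evenly across shards
});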

Failover and Automatic Recovery

MongoDB's high availability is primarily reflected in its automatic failover capability. When the primary node fails, the replica set triggers an election process:

  1. Secondary nodes detect the primary node is unreachable
  2. Eligible secondary nodes initiate an election
  3. The node receiving majority votes becomes the new primary
  4. Client drivers automatically reconnect to the new primary

Elections typically complete within seconds. During this window the cluster cannot accept writes, but reads may still be served from secondaries (depending on read preference settings).
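
As a sketch of how a client rides out such an election, retryable writes let the driver transparently retry a single write once the new primary is elected. The URI below reuses the hypothetical hostnames from earlier; retryWrites is already on by default in modern drivers and is shown explicitly only for clarity.

// Connection string with retryable writes spelled out
const failoverAwareUri =
  "mongodb://node1.example.com:27017,node2.example.com:27017,node3.example.com:27017/" +
  "?replicaSet=myReplicaSet&retryWrites=true&w=majority";
const failoverClient = new MongoClient(failoverAwareUri);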

Data Synchronization and Oplog

The Oplog (operation log) is the core of MongoDB's replication mechanism. It's a capped collection that records all data-modifying operations. Secondary nodes stay synchronized with the primary by replaying the Oplog.

Example Oplog entry:

{
  "ts": Timestamp(1620000000, 1),
  "h": NumberLong("1234567890"),
  "v": 2,
  "op": "i",
  "ns": "test.users",
  "o": { "_id": ObjectId("..."), "name": "Alice" }
}
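
As a hedged illustration, the oplog can be queried like any other collection from the local database; the test.users namespace is again just an assumption.

// Reading the most recent oplog entries for a namespace
const oplog = client.db('local').collection('oplog.rs');
const recent = await oplog.find({ ns: 'test.users' })
  .sort({ $natural: -1 })   // newest entries first
  .limit(5)
  .toArray();
console.log(recent.map(e => `${e.op} ${e.ns}`));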

Applications of Delayed Secondaries

In certain scenarios, delayed secondaries can be configured as "time machines":

  • Preventing human errors (recovering deleted data from delayed nodes)
  • Handling logical corruption (e.g., incorrect batch updates)
  • Typically configured with 1-hour or longer delays
// Configuring a delayed secondary (fetch the current config, then reconfigure)
const cfg = (await adminDb.command({ replSetGetConfig: 1 })).config;
cfg.members[2].priority = 0;      // a delayed member must never become primary
cfg.members[2].hidden = true;     // hide it from application reads
cfg.members[2].slaveDelay = 3600; // 1 hour; renamed to secondaryDelaySecs in MongoDB 5.0+
cfg.version += 1;
await adminDb.command({ replSetReconfig: cfg });

Network Partitions and Split-Brain Issues

In a network partition scenario, MongoDB avoids split-brain through the majority principle:

  • Only the partition holding a majority of voting members can elect (or keep) a primary
  • Nodes in the minority partition step down to secondaries
  • Writes can only be accepted by the majority partition

This design ensures that even during network partitions, only one partition can accept writes, preventing data divergence.

Client Connection Handling

MongoDB drivers implement intelligent connection management:

  1. Automatic detection of primary node changes
  2. Retrying write operations during failover
  3. Routing read requests based on read preference
  4. Maintaining connection pools for better performance
// Example of setting read preference
const collection = client.db('test').collection('users', {
  readPreference: 'secondaryPreferred'
});
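
A minimal sketch of tuning the same behaviour at the client level; the option values below are illustrative assumptions, not recommendations.

// Connection pool and server selection tuning
const tunedClient = new MongoClient(uri, {
  maxPoolSize: 50,                // upper bound on pooled connections
  serverSelectionTimeoutMS: 5000, // fail fast if no suitable node is found
  readPreference: 'secondaryPreferred'
});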

Monitoring and Alerting Strategies

Effective monitoring is essential for highly available systems. Key metrics include:

  • Replication lag (how far, in seconds, each secondary is behind the primary)
  • Node state (primary / secondary / arbiter, and whether each member is healthy)
  • Oplog window (how long a secondary can be offline and still catch up from the oplog)
  • Election counts (frequent elections may indicate network or load problems)
// Getting replication lag information
const replStatus = await adminDb.command({ replSetGetStatus: 1 });
const primary = replStatus.members.find(m => m.stateStr === 'PRIMARY');
const secondaries = replStatus.members.filter(m => m.stateStr === 'SECONDARY');
// optimeDate is a JS Date, so subtracting yields milliseconds of lag
const lags = secondaries.map(s => ({
  name: s.name,
  lagSeconds: (primary.optimeDate - s.optimeDate) / 1000
}));
console.log(lags);

Backup and Recovery Strategies

Even with replica sets, regular backups are necessary:

  1. Logical backups (mongodump/mongorestore)
  2. Physical backups (filesystem snapshots)
  3. Incremental backups (based on Oplog)
  4. Cloud service-provided automatic backups
// Using mongodump for backups (command-line tool)
// mongodump --host=myReplicaSet/node1.example.com:27017 --archive=backup.archive
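
For completeness, the matching restore might look like this, under the same assumptions about host names.

// Restoring from the archive (command-line tool)
// mongorestore --host=myReplicaSet/node1.example.com:27017 --archive=backup.archive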

Multi-Datacenter Deployment

For critical business systems, cross-datacenter deployment enhances disaster recovery:

  • Distribute replica set nodes across different data centers
  • Configure appropriate writeConcern for data durability
  • Consider network latency's impact on consistency
  • Use tag sets to control data distribution
// Example tag set configuration
cfg.members[0].tags = { "dc": "east", "rack": "rack1" };
cfg.members[1].tags = { "dc": "east", "rack": "rack2" };
cfg.members[2].tags = { "dc": "west", "rack": "rack1" };
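
Building on those tags, here is a sketch of a custom write concern that requires acknowledgement from both data centers. It assumes cfg holds the current replica set configuration (as in the delayed-secondary example above), and the mode name "eachDC" is an arbitrary label chosen for illustration.

// Define a custom write concern mode over the "dc" tag
cfg.settings = cfg.settings || {};
cfg.settings.getLastErrorModes = { eachDC: { dc: 2 } }; // require acks from 2 distinct dc values
cfg.version += 1;
await adminDb.command({ replSetReconfig: cfg });

// Writes can then request that mode explicitly
await collection.insertOne(
  { name: "Cross-DC Data" },
  { writeConcern: { w: "eachDC" } }
);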

Balancing Performance and Redundancy

Designing redundancy requires trade-offs:

  • Increasing node count improves availability but adds synchronization overhead
  • Strict write concerns ensure safety but reduce write performance
  • Geographic distribution enhances disaster recovery but increases latency
  • Monitoring metrics help find the optimal balance

Application-Level Fault Tolerance

Beyond database-level redundancy, applications should consider:

  1. Retry mechanisms (for transient errors)
  2. Local caching (reducing database dependency)
  3. Fallback solutions (when database is unavailable)
  4. Graceful degradation (maintaining basic functionality)
// Simple retry with exponential backoff for transient replica set errors
async function withRetry(operation, maxRetries = 3) {
  let lastError;
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      // Only retry transient failover errors, e.g. the primary stepping down
      const transient = err.codeName === 'PrimarySteppedDown' ||
        (typeof err.hasErrorLabel === 'function' && err.hasErrorLabel('RetryableWriteError'));
      if (!transient) {
        break;
      }
      // Exponential backoff: 100ms, 200ms, 400ms, ...
      await new Promise(resolve => setTimeout(resolve, 100 * Math.pow(2, i)));
    }
  }
  throw lastError;
}

