
CAP Theorem

CAP theorem and distributed system consistency models

CAP Theorem

The CAP theorem is a fundamental result in distributed system design: a distributed system cannot simultaneously guarantee all three of consistency, availability, and partition tolerance.

CAP Definition

              Consistency
                  /\
                 /  \
            CA  /    \  CP
               /      \
              /________\
  Availability    AP    Partition Tolerance

Consistency

All nodes see the same data at the same time.

Strong consistency:
┌──────────┐   Write x=1   ┌──────────┐
│  Node A  │──────────────▶│  Node B  │
│  x = 1   │               │  x = 1   │
└──────────┘               └──────────┘
     │                          │
     ▼                          ▼
  Read x                     Read x
  Returns 1                  Returns 1
  ✓ Consistent               ✓ Consistent

Availability

Every request receives a response, though the response is not guaranteed to contain the most recent data.

Availability guarantee:
┌──────────┐                    ┌──────────┐
│  Node A  │◀─Request rerouted──│  Node B  │
│  Running │                    │   Down   │
└──────────┘                    └──────────┘
      │
      ▼
 Must respond
 (even if the data may not be the latest)

Partition Tolerance

The system continues to operate even when the network partitions, i.e., when nodes can no longer communicate with each other.

Network partition:
┌──────────┐    ╳╳╳╳╳     ┌──────────┐
│  Node A  │◀──Network────│  Node B  │
│          │   split      │          │
└──────────┘              └──────────┘
     │                         │
  Request                   Request
  Continue                  Continue
  processing                processing

CAP Trade-offs

When network partitions occur, you must choose between consistency and availability.

CP Systems (Consistency + Partition Tolerance)

Sacrifice availability to guarantee consistency.

┌──────────────────────────────────────────────────────────┐
│ During network partition:                                │
│ - Nodes that cannot sync refuse to serve                 │
│ - Prefer unavailability over inconsistent data           │
│                                                          │
│ Examples: MongoDB (default), HBase, Redis Cluster, etcd  │
│ Use cases: Financial systems, configuration centers      │
└──────────────────────────────────────────────────────────┘

Example scenario:
┌────────┐  ╳╳╳  ┌────────┐
│ Node A │       │ Node B │
│ x = 1  │       │ x = ?  │
└────────┘       └────────┘
    │                │
  Read x           Read x
  Returns 1        Returns error (refuses to serve)

AP Systems (Availability + Partition Tolerance)

Sacrifice strong consistency to guarantee availability.

┌──────────────────────────────────────────────────────────┐
│ During network partition:                                │
│ - All nodes continue responding to requests              │
│ - May return stale data                                  │
│ - Merge/reconcile data after network recovers            │
│                                                          │
│ Examples: Cassandra, DynamoDB, CouchDB                   │
│ Use cases: Social networks, shopping carts, CDN          │
└──────────────────────────────────────────────────────────┘

Example scenario:
┌────────┐  ╳╳╳  ┌────────┐
│ Node A │       │ Node B │
│ x = 1  │       │ x = 0  │
└────────┘       └────────┘
    │                │
  Read x           Read x
  Returns 1        Returns 0 (stale but available)
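The contrasting read behaviors of CP and AP systems during a partition can be sketched as a toy Python model (the `Replica` class and read functions are illustrative, not any real database API):

```python
class Replica:
    """A toy replica that may be cut off from the rest of the cluster."""
    def __init__(self, value, in_sync):
        self.value = value        # locally stored value of x
        self.in_sync = in_sync    # False while partitioned from the leader

def cp_read(replica):
    """CP behavior: refuse to answer rather than risk serving stale data."""
    if not replica.in_sync:
        raise RuntimeError("unavailable: replica cannot confirm latest value")
    return replica.value

def ap_read(replica):
    """AP behavior: always answer, even if the value may be stale."""
    return replica.value

node_a = Replica(value=1, in_sync=True)    # holds the latest write x = 1
node_b = Replica(value=0, in_sync=False)   # partitioned, still has x = 0

print(cp_read(node_a))   # 1
print(ap_read(node_b))   # 0 (stale but available)
# cp_read(node_b) raises RuntimeError instead of returning stale data
```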

CA Systems (Consistency + Availability)

Theoretically possible, but any real distributed system must tolerate network partitions, so CA is only achievable when the system is not actually distributed.

┌──────────────────────────────────────────────────────────┐
│ Single-node databases are effectively CA systems:        │
│ - No network partition problem                           │
│ - Guarantees both consistency and availability           │
│                                                          │
│ Examples: Single-node PostgreSQL, single-node MySQL      │
│ Limitation: Cannot scale horizontally                    │
└──────────────────────────────────────────────────────────┘

Consistency Models

Strong Consistency

Reads immediately see the latest written value.

-- Write
UPDATE users SET balance = 100 WHERE id = 1;
-- Returns success

-- Read from any node
SELECT balance FROM users WHERE id = 1;
-- Always returns 100

Eventual Consistency

Reads are not guaranteed to see the latest write immediately, but all replicas eventually converge to the same value.

Timeline:
T0: Node A writes x = 1
T1: Node A reads x → 1
    Node B reads x → 0  (not yet synced)
T2: Node B reads x → 0  (still syncing)
T3: Node B reads x → 1  (sync complete)

Eventual consistency window: T0 to T3
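The timeline above can be reproduced with a toy primary/replica pair that applies replication asynchronously (illustrative code, not a real database):

```python
from collections import deque

class EventualStore:
    """Toy primary/replica pair with asynchronous replication."""
    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.pending = deque()   # replication log not yet applied

    def write(self, key, value):
        self.primary[key] = value
        self.pending.append((key, value))   # shipped to the replica later

    def replicate_one(self):
        """Apply one pending log entry (simulates replication delay)."""
        if self.pending:
            key, value = self.pending.popleft()
            self.replica[key] = value

store = EventualStore()
store.write("x", 1)                 # T0: write on the primary
print(store.primary["x"])           # T1: primary reads 1
print(store.replica.get("x", 0))    # T1: replica still reads 0 (not synced)
store.replicate_one()               # T3: replication catches up
print(store.replica["x"])           # replica now reads 1 — converged
```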

Causal Consistency

Operations that are causally related are observed in the same order by all nodes.

Operation sequence:
1. Alice posts "Hello"
2. Bob replies "Hi Alice"

Causal consistency guarantees:
- Anyone seeing Bob's reply will definitely see Alice's post
- Will never see reply without the original post
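One way to enforce this guarantee is for a replica to hold back a message until its causal parent has arrived. A minimal sketch (hypothetical `CausalFeed` class, illustrative only):

```python
class CausalFeed:
    """Toy replica that makes a message visible only after its causal parent."""
    def __init__(self):
        self.visible = {}     # message id -> text
        self.buffered = []    # (id, text, parent) waiting on a missing parent

    def deliver(self, msg_id, text, parent=None):
        if parent is None or parent in self.visible:
            self.visible[msg_id] = text
            self._flush()
        else:
            self.buffered.append((msg_id, text, parent))

    def _flush(self):
        # Keep applying buffered messages whose parents have now arrived.
        progress = True
        while progress:
            progress = False
            for m in list(self.buffered):
                if m[2] in self.visible:
                    self.buffered.remove(m)
                    self.visible[m[0]] = m[1]
                    progress = True

feed = CausalFeed()
# Bob's reply arrives before Alice's post (network reordering)
feed.deliver("reply", "Hi Alice", parent="post")
print("reply" in feed.visible)      # False — held back, post not seen yet
feed.deliver("post", "Hello")
print("reply" in feed.visible)      # True — post and reply now both visible
```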

Read Your Writes

Users always see their own writes.

User A:
1. Updates profile
2. Refreshes page → sees new profile ✓

User B:
1. Views A's profile
2. May see old profile (eventually sees new one)
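A common way to provide read-your-writes is to route a user's reads to the primary for a window after their own write, while other users may still hit a lagging replica. A toy sketch (the `Session` class and the fixed-lag assumption are illustrative):

```python
import time

class Session:
    """Toy read-your-writes routing: after a user writes, their own reads
    go to the primary until the replica is assumed to have caught up."""
    def __init__(self, primary, replica, lag=1.0):
        self.primary, self.replica = primary, replica
        self.lag = lag                 # assumed max replication lag (seconds)
        self.last_write = {}           # user -> timestamp of their last write

    def write(self, user, key, value):
        self.primary[key] = value
        self.last_write[user] = time.monotonic()

    def read(self, user, key):
        recent = time.monotonic() - self.last_write.get(user, -1e9) < self.lag
        source = self.primary if recent else self.replica
        return source.get(key)

primary, replica = {}, {"profile": "old"}   # replica has not replicated yet
s = Session(primary, replica)
s.write("alice", "profile", "new")
print(s.read("alice", "profile"))   # "new" — Alice sees her own write
print(s.read("bob", "profile"))     # "old" — Bob may read a stale replica
```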

Monotonic Reads

Once a value is read, subsequent reads won't see older values.

User queries order status:
T1: Status = "Shipped"
T2: Status = "Shipped" or "Delivered"  ✓
    Status ≠ "Pending"  (won't regress)
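Monotonic reads can be enforced client-side by tracking the highest version seen and rejecting older replies. A minimal sketch (hypothetical `MonotonicClient`, assuming each reply carries a version number):

```python
class MonotonicClient:
    """Toy client that rejects replies older than what it has already seen."""
    def __init__(self):
        self.max_version = 0

    def read(self, reply):
        version, value = reply        # each replica reply is (version, value)
        if version < self.max_version:
            raise RuntimeError("stale replica reply — retry another replica")
        self.max_version = version
        return value

client = MonotonicClient()
print(client.read((2, "Shipped")))    # version 2 accepted
print(client.read((3, "Delivered")))  # version 3 accepted
# client.read((1, "Pending")) would raise — reads never go backwards
```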

PACELC Theorem

An extension of CAP that considers the latency-consistency trade-off during normal operation.

┌─────────────────────────────────────────────────────────┐
│ PACELC:                                                 │
│ if Partition → trade-off A vs C                         │
│ else → trade-off Latency vs Consistency                 │
│                                                         │
│ During partition: Availability vs Consistency           │
│ Normal operation: Latency vs Consistency                │
└─────────────────────────────────────────────────────────┘

System classification:
┌───────────────┬──────────────┬──────────────┐
│ System        │ Partitioned  │ Normal       │
├───────────────┼──────────────┼──────────────┤
│ DynamoDB      │ AP           │ EL           │
│ Cassandra     │ AP           │ EL           │
│ MongoDB       │ CP           │ EC           │
│ PNUTS         │ CP           │ EL           │
│ Spanner       │ CP           │ EC           │
└───────────────┴──────────────┴──────────────┘

EL = Else Latency (prioritize low latency)
EC = Else Consistency (prioritize consistency)

Consistency Level Configuration

Cassandra Tunable Consistency

-- In cqlsh, the consistency level is set per session with the CONSISTENCY
-- command (the old "USING CONSISTENCY" statement syntax was removed in CQL 3):
CONSISTENCY QUORUM;

-- Write at the session's consistency level
INSERT INTO users (id, name) VALUES (1, 'John');

-- CONSISTENCY options:
-- ONE: one replica must acknowledge
-- QUORUM: a majority of replicas ((RF/2)+1)
-- ALL: all replicas
-- LOCAL_QUORUM: a majority in the local datacenter

-- Read at the session's consistency level
SELECT * FROM users WHERE id = 1;

-- Strong consistency formula:
-- R + W > N
-- R: read replicas, W: write replicas, N: total replicas (replication factor)
-- Example: N=3, W=2, R=2 → 2+2 > 3 ✓ Strong consistency
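The R + W > N rule works because it forces every read quorum to overlap every write quorum in at least one replica. It can be checked with a one-line helper (illustrative function name):

```python
def is_strongly_consistent(n, w, r):
    """True when R + W > N: the read and write quorums must overlap in at
    least one replica, so every read sees the latest acknowledged write."""
    return r + w > n

print(is_strongly_consistent(n=3, w=2, r=2))  # True  — QUORUM writes + reads
print(is_strongly_consistent(n=3, w=1, r=1))  # False — ONE/ONE may read stale
print(is_strongly_consistent(n=3, w=3, r=1))  # True  — write ALL, read ONE
```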

MongoDB Write Concern

// Write Concern
db.users.insertOne(
  { name: "John" },
  {
    writeConcern: {
      w: "majority",     // Majority node acknowledgment
      j: true,           // Journal write
      wtimeout: 5000     // Timeout
    }
  }
);

// Read Concern
db.users.find().readConcern("majority");

// Read Preference
db.users.find().readPref("secondaryPreferred");

Choosing the Right Model

┌─────────────────────┬──────────────────────┬────────────────────────────────┐
│ Scenario            │ Recommended Model    │ Reason                         │
├─────────────────────┼──────────────────────┼────────────────────────────────┤
│ Bank transfers      │ Strong consistency   │ Amounts must be accurate       │
│ Shopping cart       │ Eventual consistency │ Brief inconsistency acceptable │
│ Inventory display   │ Eventual consistency │ Approximate values OK          │
│ Inventory deduction │ Strong consistency   │ Prevent overselling            │
│ Social likes        │ Eventual consistency │ Low precision requirement      │
│ Config center       │ Strong consistency   │ All nodes need same config     │
└─────────────────────┴──────────────────────┴────────────────────────────────┘
