Block Production Explained

Overview

Block production involves complex timing mechanisms to ensure efficient and reliable blockchain operations. This guide explains how block producers coordinate to maintain network consensus, with detailed examples showing different network scenarios.

Key Concepts and Variables

Understanding block production requires familiarity with several key variables and timing mechanisms:

| Variable | Value | Description |
|----------|-------|-------------|
| r | 12 | Producer Repetitions - Number of blocks each producer creates in their turn |
| m | varies | Max Block CPU Usage - Maximum CPU time allowed per block (consensus parameter) |
| u | varies | Max Block Net Usage - Maximum network bandwidth per block (consensus parameter) |
| t | varies | Block Time - When a block should be finalized |
| e | config | Produce Block Offset - Configuration setting in nodeop |
| w | 500 ms | Block Time Interval - Fixed time between blocks |
| a | e / r ms | Early Release Time (ms) - Block release offset |
| l | t − a | Produce Block Time - When block production actually starts |
| p | w − a | Production Window - Available time to produce a block |
| c | min(m, w − a) | Billed CPU Time - Actual CPU time charged for the block |
| n | varies | Network Latency - Time for data to travel between nodes |
| h | varies | Block Header Validation Time - Time to validate block headers |
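
To make the derived quantities concrete, here is a minimal sketch in Python; the variable names and the example values e = 120 ms and m = 200 ms are illustrative, not nodeop defaults:

```python
# Derived timing quantities from the variables above (illustrative values).
r = 12      # producer repetitions
w = 500     # block time interval (ms)
e = 120     # produce-block-offset (ms), example value
m = 200     # max block CPU usage (ms), example value

a = e / r           # early release time per block
p = w - a           # production window per block
c = min(m, w - a)   # billed CPU time for a full block

print(a, p, c)      # 10.0 490.0 200
```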

Network Topology

Block producers operate in a network where timing is critical. Consider the following example of two BPs and their network topology, as depicted in the diagram below.

How Block Rounds Work

Each block producer creates 12 consecutive blocks during their turn at a fixed interval of 500ms:

BP-A sends each block at time l, and BP-B needs the block by time t; otherwise it will drop it.

If BP-A’s schedule is:

b₁ @ t1, b₂ @ t1.5, b₃ @ t2, b₄ @ t2.5, … up to b₁₂ @ t6.5, then BP-B must receive b₁₂ no later than t6.5. That leaves BP-B the remaining 0.5 s of the slot to begin producing its own first block (b₁₃ @ t7).

The last block time of BP-A (t6.5) is therefore the hand-off moment: BP-A finishes, and BP-B must immediately start.
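
For concreteness, here is a small sketch (with t1 taken as 0 ms purely for illustration) of BP-A's 12 scheduled block times and the hand-off point:

```python
# One production round for BP-A, taking t1 = 0 ms for illustration.
W = 500   # block interval (ms)
R = 12    # blocks per round

bp_a_schedule = [i * W for i in range(R)]   # b1..b12 at 0, 500, ..., 5500 ms
hand_off      = bp_a_schedule[-1]           # b12's slot (t6.5 in the text above)
bp_b_first    = hand_off + W                # b13's slot (t7), BP-B's first block

print(hand_off, bp_b_first)   # 5500 6000
```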

A block is produced and sent when it reaches any one of m (max CPU), u (max net usage), or p (the end of its production window).

Blocks are propagated after block header validation. This means that instead of BP-A Peer and BP-B Peer each taking m time to fully validate a block before forwarding it, each peer only needs a few milliseconds (h) to verify the block header and then forward the block.
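
As a rough illustration of why this matters, the sketch below compares the two-peer propagation cost with full validation versus header-only validation, using purely illustrative values (m = 200 ms, h = 5 ms, n = 0):

```python
# Rough comparison of propagation cost through BP-A Peer and BP-B Peer
# (illustrative values, not measured figures).
m, h, n = 200, 5, 0   # full validation time, header validation time, network latency (ms)

full_validation_path = 2 * m + n   # each peer fully validates before forwarding
header_only_path     = 2 * h + n   # each peer verifies only the header, then forwards (= C)

print(full_validation_path, header_only_path)   # 400 10
```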

Block Timing in Wire Sysio v5.0

Blocks start immediately after the previous block completes, with all sleep moved to the end of the round.

Examples

Context

Each example analyzes one full production round: 12 blocks created by a single producer (r = 12) with a block interval of w = 500 ms.
The question is always the same:

Will the last block of BP-A’s round reach BP-B in time for BP-B to begin its own production?

To answer that, every scenario plugs its specific numbers into the same core equation:

\boxed{\Delta_i = C - i\,a}
  • Δᵢ is the arrival offset for block i
  • C is the fixed travel cost between BP-A and BP-B (C = 2h + n: the total time a block takes to travel from BP-A to BP-B equals two header-validation steps, one at each peer (2h), plus any network latency (n))
  • a (a = e / r) is the fixed amount of time by which each block is released ahead of its scheduled slot.

Interpretation

  • Δᵢ > 0 → block i arrives late by Δᵢ.
  • Δᵢ < 0 → block i arrives early by |Δᵢ|.

The examples show how varying C or a moves the last block from late, to on time, to early, or, in the worst case, to late enough that it is dropped.
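
The following sketch implements the core equation and its interpretation (hypothetical Python helper names, not nodeop APIs); it is the same arithmetic used in the examples below:

```python
def arrival_offset(i, e, r, h, n):
    """Offset of block i when it reaches BP-B: positive = late, negative = early."""
    a = e / r          # early release per block
    C = 2 * h + n      # fixed travel cost: two header validations plus network latency
    return C - i * a   # delta_i = C - i*a

def describe(delta):
    if delta > 0:
        return f"{delta:g} ms late"
    if delta < 0:
        return f"{abs(delta):g} ms early"
    return "on time"

# Example 1 settings (e = 120 ms, r = 12, h = 5 ms, n = 0 ms): last block of the round
print(describe(arrival_offset(12, e=120, r=12, h=5, n=0)))   # 110 ms early
```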

Example 1: Block Arrives 110ms Early

Scenario Setup

  • 0 network latency between all nodes
  • Blocks don't reach the CPU limit (m), so they take the full production time (w − a)
  • Block completion and signing take zero time

BP-A settings:

| Symbol | Value |
|--------|-------|
| e | 120 ms |
| a = e / r | 10 ms |
| h | 5 ms |
| n | 0 ms |
| C = 2h + n | 10 ms |

\boxed{\Delta_i = C - i\,a} \;\Longrightarrow\; \Delta_i = 10\,\text{ms} - 10\,i

Diagram (Blocks 1 & 12)

Timeline Table

| Block i | BP-A sends @ | Arrives BP-B @ | Offset Δᵢ |
|---------|--------------|----------------|-----------|
| 1 | t₁ − 10 ms | t₁ | 0 ms (on time) |
| 2 | t₂ − 20 ms | t₂ − 10 ms | 10 ms early |
| 11 | t₁₁ − 110 ms | t₁₁ − 100 ms | 100 ms early |
| 12 | t₁₂ − 120 ms | t₁₂ − 110 ms | 110 ms early |
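
The offsets in the table follow directly from Δᵢ = 10 ms − 10 i; a standalone check (a sketch, not tied to any nodeop tooling):

```python
# Example 1 check: a = 120/12 = 10 ms, C = 2*5 + 0 = 10 ms
a, C = 120 / 12, 2 * 5 + 0
for i in (1, 2, 11, 12):
    print(f"block {i}: offset {C - i * a:+g} ms")
# block 1 -> +0 ms (on time), block 2 -> -10 ms, block 11 -> -100 ms, block 12 -> -110 ms
```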

Example 2: Block Arrives 80ms Early

Scenario Setup

  • 0 latency between BP-A ↔ BP-A Peer and BP-B Peer ↔ BP-B
  • 150ms latency between BP-A Peer ↔ BP-B Peer

BP-A settings:

| Symbol | Value |
|--------|-------|
| e | 240 ms |
| a = e / r | 20 ms |
| h | 5 ms |
| n | 150 ms |
| C = 2h + n | 160 ms |

\boxed{\Delta_i = C - i\,a} \;\Longrightarrow\; \Delta_i = 160\,\text{ms} - 20\,i

Diagram (Blocks 1 & 12)

Timeline Table

| Block i | BP-A sends @ | Arrives BP-B @ | Offset Δᵢ |
|---------|--------------|----------------|-----------|
| 1 | t₁ − 20 ms | t₁ + 140 ms | 140 ms late |
| 2 | t₂ − 40 ms | t₂ + 120 ms | 120 ms late |
| 3 | t₃ − 60 ms | t₃ + 100 ms | 100 ms late |
| 4 | t₄ − 80 ms | t₄ + 80 ms | 80 ms late |
| 5 | t₅ − 100 ms | t₅ + 60 ms | 60 ms late |
| 6 | t₆ − 120 ms | t₆ + 40 ms | 40 ms late |
| 7 | t₇ − 140 ms | t₇ + 20 ms | 20 ms late |
| 8 | t₈ − 160 ms | t₈ | on time |
| 9 | t₉ − 180 ms | t₉ − 20 ms | 20 ms early |
| 10 | t₁₀ − 200 ms | t₁₀ − 40 ms | 40 ms early |
| 11 | t₁₁ − 220 ms | t₁₁ − 60 ms | 60 ms early |
| 12 | t₁₂ − 240 ms | t₁₂ − 80 ms | 80 ms early |
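
The same arithmetic, as a standalone sketch, shows the cross-over from late to early at block 8:

```python
# Example 2 check: a = 240/12 = 20 ms, C = 2*5 + 150 = 160 ms
a, C = 240 / 12, 2 * 5 + 150
for i in (1, 8, 12):
    print(f"block {i}: offset {C - i * a:+g} ms")
# block 1 -> +140 ms (late), block 8 -> +0 ms (on time), block 12 -> -80 ms (early)
```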

Example 3: Block Arrives 16ms Late and Gets Dropped

Scenario Setup

  • 200ms latency between BP-A Peer ↔ BP-B Peer

  • BP-A settings:

| Symbol | Value |
|--------|-------|
| e | 204 ms |
| a = e / r | 17 ms |
| h | 10 ms |
| n | 200 ms |
| C = 2h + n | 220 ms |

\boxed{\Delta_i = C - i\,a} \;\Longrightarrow\; \Delta_i = 220\,\text{ms} - 17\,i

Diagram (Blocks 1 & 12)


Timeline Table

| Block i | BP-A sends @ | Arrives BP-B @ | Offset Δᵢ |
|---------|--------------|----------------|-----------|
| 1 | t₁ − 17 ms | t₁ + 203 ms | 203 ms late |
| 11 | t₁₁ − 187 ms | t₁₁ + 33 ms | 33 ms late |
| 12 | t₁₂ − 204 ms | t₁₂ + 16 ms | 16 ms late → dropped |
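
Plugging example 3's numbers into the same arithmetic (a standalone sketch) shows the last block still arriving after its deadline, which is why BP-B drops it:

```python
# Example 3 check: a = 204/12 = 17 ms, C = 2*10 + 200 = 220 ms
a, C = 204 / 12, 2 * 10 + 200
delta_12 = C - 12 * a
print(f"block 12: offset {delta_12:+g} ms")   # +16 ms: past t12, so BP-B drops the block
```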

Example 4: Full Blocks Are Produced Early

Scenario Setup

  • 0 network latency between BP-A & BP-A Peer and between BP-B Peer & BP-B.

  • 200ms network latency between BP-A Peer & BP-B Peer.

  • Assume all blocks are full, as there are enough queued-up unapplied transactions ready to fill every block.

  • Assume a block containing 200 ms worth of transactions takes 225 ms of real time to produce; there is overhead for producing the block.

  • BP-A settings:

| Symbol | Value | Description |
|--------|-------|-------------|
| e | 120 ms | Early release time in ms (produce-block-offset) |
| a = e / r | 10 ms | Early release time in ms per block |
| m | 200 ms | Max CPU payload per block (max_block_cpu_usage) |
| p = w − a | 490 ms | Production window each slot |
| build time | 225 ms | Real time to build a full block (200 ms tx + 25 ms overhead) |
| c = min(m, w − a) | 200 ms | CPU billed for a full block |
| n | 200 ms | Network latency (BP-A Peer → BP-B Peer) |
| h | 10 ms | Header-validation time at each peer |
| C = 2h + n | 220 ms | Fixed travel cost per block |

Diagram (Blocks 1 & 12)

Timeline Table

BP-A completes each full block in 225 ms and starts the next one immediately, so each successive block is sent 275 ms earlier relative to its scheduled slot than the one before; the extra 10 ms per-block early-release offset never comes into play because every block finishes well before its deadline.

| Block i | BP-A sends @ | Arrives BP-B @ | Offset Δᵢ |
|---------|--------------|----------------|-----------|
| 1 | t₁ − 275 ms | t₁ − 55 ms | 55 ms early |
| 2 | t₂ − 550 ms | t₂ − 330 ms | 330 ms early |
| 3 | t₃ − 825 ms | t₃ − 605 ms | 605 ms early |
| 11 | t₁₁ − 3025 ms | t₁₁ − 2805 ms | 2805 ms early |
| 12 | t₁₂ − 3300 ms | t₁₂ − 3080 ms | 3080 ms early |
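
Example 4 follows a different schedule: send times are driven by the 225 ms build time rather than by the early-release deadline. A minimal sketch of that model, assuming the values from the settings table above:

```python
# Example 4 sketch: full blocks produced back-to-back (values from the table above).
W     = 500   # block interval (ms)
BUILD = 225   # real time to build a full block: 200 ms of transactions + 25 ms overhead
C     = 220   # fixed travel cost: 2*h + n = 2*10 + 200

for i in (1, 2, 3, 11, 12):
    sent_early    = i * (W - BUILD)      # ms before t_i that block i is sent
    arrives_early = sent_early - C       # ms before t_i that block i reaches BP-B
    print(f"block {i}: sent {sent_early} ms early, arrives {arrives_early} ms early")
# block 1: sent 275 ms early, arrives 55 ms early ... block 12: sent 3300 ms early, arrives 3080 ms early
```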