This document explains how the Evolve Sync Service works, its dependencies, and its architecture.
Evolve's Sync Service is responsible for synchronizing block headers and data across nodes in the Evolve network. It's a critical component that allows nodes to maintain a consistent view of the blockchain state.
The sync mechanism consists of two main components:
- Header Sync Service - responsible for synchronizing block headers
- Data Sync Service - responsible for synchronizing block data (transactions and metadata)
Both services are instances of the same generic `SyncService` and differ only in the type they are instantiated with (signed headers vs. block data).
```mermaid
graph TD
subgraph "Node"
Exec["Block Executor (aggregator nodes)"]
Sync["Block Syncer (follower nodes)"]
HSS[Header Sync Service]
DSS[Data Sync Service]
P2P[P2P Client]
Store[(Shared Store)]
Exec -->|WriteToStoreAndBroadcast| HSS
Exec -->|WriteToStoreAndBroadcast| DSS
HSS -->|Persist| Store
DSS -->|Persist| Store
Store -->|Load next block| Sync
Sync -->|Republish DA data| HSS
Sync -->|Republish DA data| DSS
HSS <-->|P2P| P2P
DSS <-->|P2P| P2P
end
subgraph "DA Layer"
DAL[Data Availability Layer]
end
Exec -.->|Submit headers/data| DAL
Sync -->|Retrieve blobs| DAL
subgraph "Other Nodes"
ON[Other Nodes]
end
P2P <-->|Gossip| ON
```
`SyncService` is a generic service that implements both header and data synchronization using type parameters (a simplified Go sketch follows the class diagram):
```mermaid
classDiagram
class SyncService~H~ {
+config Config
+logger Logger
+syncType syncType
+genesis *GenesisDoc
+p2p *p2p.Client
+ex *Exchange~H~
+sub *Subscriber~H~
+p2pServer *ExchangeServer~H~
+store *Store~H~
+syncer *Syncer~H~
+syncerStatus *SyncerStatus
+Store() *Store~H~
+WriteToStoreAndBroadcast(ctx, headerOrData) error
+Start(ctx) error
+Stop(ctx) error
}
DataSyncService --|> SyncService : H = *types.Data
HeaderSyncService --|> SyncService : H = *types.SignedHeader
```
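To make the diagram concrete, here is a condensed Go sketch of the same shape: one generic struct and two type-argument specializations. All types below are local placeholders standing in for the real sync-package and go-header types of the same names, so the snippet is illustrative rather than the actual source.

```go
package sketch

// Placeholders for the concrete payload and configuration types.
type (
	Config       struct{}
	Logger       struct{}
	GenesisDoc   struct{}
	SignedHeader struct{}
	Data         struct{}
	syncType     string
)

// Placeholders for the generic go-header components the service wraps.
type (
	P2PClient             struct{}
	Store[H any]          struct{}
	Exchange[H any]       struct{}
	Subscriber[H any]     struct{}
	ExchangeServer[H any] struct{}
	Syncer[H any]         struct{}
	SyncerStatus          struct{}
)

// SyncService mirrors the class diagram above: a single generic struct used
// by both sync paths.
type SyncService[H any] struct {
	config       Config
	logger       Logger
	syncType     syncType
	genesis      *GenesisDoc
	p2p          *P2PClient
	ex           *Exchange[H]
	sub          *Subscriber[H]
	p2pServer    *ExchangeServer[H]
	store        *Store[H]
	syncer       *Syncer[H]
	syncerStatus *SyncerStatus
}

// The two concrete services differ only by the type argument.
type (
	HeaderSyncService = SyncService[*SignedHeader]
	DataSyncService   = SyncService[*Data]
)
```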
- `node/full.go` wires the header and data services into both aggregator and follower nodes (see `initHeaderSyncService` and `initDataSyncService`).
- Both services wrap a go-header `Store` instance that is prefixed per sync type, allowing them to share disk state while keeping namespaces separate.
- `WriteToStoreAndBroadcast` (also used by the block executor) ensures the store is initialized with genesis data, starts the go-header syncer once via `SyncerStatus`, and gossips new items through libp2p (see the sketch after this list).
- When the node runs in follower mode, the go-header syncer fills the store from peers; when running as an aggregator, locally produced blocks flow through the same method.
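The write path can be sketched as follows. The narrow `appender`/`broadcaster` interfaces and the `ensureInit`/`startSyncerOnce` hooks are hypothetical stand-ins for the go-header store, the libp2p subscriber, and the genesis/`SyncerStatus` handling described above; treat this as an approximation of the behavior, not the actual implementation.

```go
package sketch

import (
	"context"
	"fmt"
)

// appender stands in for the prefixed go-header store.
type appender[H any] interface {
	Append(ctx context.Context, items ...H) error
}

// broadcaster stands in for the libp2p gossip subscriber.
type broadcaster[H any] interface {
	Broadcast(ctx context.Context, item H) error
}

type syncService[H any] struct {
	store appender[H]
	sub   broadcaster[H]

	ensureInit      func(context.Context, H) error // seed the store with genesis data if empty
	startSyncerOnce func(context.Context) error    // guarded by SyncerStatus in the real service
}

// WriteToStoreAndBroadcast persists an item locally and then gossips it.
func (s *syncService[H]) WriteToStoreAndBroadcast(ctx context.Context, item H) error {
	// 1. Make sure the prefixed store has an initial head to sync from.
	if err := s.ensureInit(ctx, item); err != nil {
		return fmt.Errorf("init store: %w", err)
	}
	// 2. Start the go-header syncer exactly once.
	if err := s.startSyncerOnce(ctx); err != nil {
		return fmt.Errorf("start syncer: %w", err)
	}
	// 3. Persist locally, then broadcast over libp2p.
	if err := s.store.Append(ctx, item); err != nil {
		return fmt.Errorf("append: %w", err)
	}
	return s.sub.Broadcast(ctx, item)
}
```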
Follower nodes construct the block syncer to hydrate local state from the shared go-header stores and the DA layer. The syncer owns several long-lived goroutines that coordinate incoming events and outbound fetches.
The main processing loop (`processLoop`):
- Listens on `heightInCh` for new `DAHeightEvent` values sourced from P2P or DA.
- Uses the in-memory cache to de-duplicate and park out-of-order heights.
- Calls `trySyncNextBlock` to execute the next block when header and data are available (the flowchart and sketch below illustrate the loop).
```mermaid
flowchart TD
Start[Start] --> Select{select}
Select -->|"ctx.Done()"| Stop[Stop]
Select -->|heightInCh| Handle[processHeightEvent]
Handle --> Start
```
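The loop in the flowchart boils down to a `select` over the context and the height queue. This sketch uses simplified local types; only the channel name and the shape of the loop are taken from the description above.

```go
package sketch

import "context"

// DAHeightEvent is a simplified stand-in for common.DAHeightEvent.
type DAHeightEvent struct {
	Height uint64
	// header/data payloads elided in this sketch
}

type syncer struct {
	heightInCh chan DAHeightEvent
}

// processLoop consumes height events until the context is cancelled.
func (s *syncer) processLoop(ctx context.Context) {
	for {
		select {
		case <-ctx.Done():
			return
		case evt := <-s.heightInCh:
			s.processHeightEvent(ctx, evt)
		}
	}
}

// processHeightEvent would de-duplicate the event, park it if it is ahead of
// the local height, and otherwise attempt trySyncNextBlock.
func (s *syncer) processHeightEvent(ctx context.Context, evt DAHeightEvent) {
	// logic elided
}
```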
- `daWorker` – waits for genesis, then continually invokes `tryFetchFromDA`, respecting backoff windows and bumping the tracked DA height as blobs arrive.
- `p2pWorker` – polls the go-header stores, and when they advance past the local height, hands the range to `ProcessHeaderRange`/`ProcessDataRange` so watcher goroutines can block on the next missing height.
- `pendingWorker` – periodically drains cached events via `processPendingEvents`, ensuring gaps filled from disk or DA are replayed promptly.
- All three workers honor `ctx.Done()` and share the syncer's wait group, so shutting down the syncer or test harness simply cancels the context and waits for the loops to exit (see the sketch below).
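The lifecycle wiring can be sketched like this: each worker is launched on the shared wait group and runs until the context is cancelled. The intervals and worker bodies are placeholders; only the start/stop pattern reflects the description above.

```go
package sketch

import (
	"context"
	"sync"
	"time"
)

type syncer struct {
	wg sync.WaitGroup
}

// Start launches the three workers on the shared wait group.
func (s *syncer) Start(ctx context.Context) {
	for _, run := range []func(context.Context){
		s.daWorker,      // continually fetches from DA with backoff
		s.p2pWorker,     // watches the go-header stores for new heights
		s.pendingWorker, // replays cached events on a timer
	} {
		s.wg.Add(1)
		go func(run func(context.Context)) {
			defer s.wg.Done()
			run(ctx)
		}(run)
	}
}

// Stop relies purely on context cancellation: the caller cancels ctx, then
// waits for every loop to exit.
func (s *syncer) Stop() { s.wg.Wait() }

func (s *syncer) daWorker(ctx context.Context)      { s.tick(ctx, 100*time.Millisecond) }
func (s *syncer) p2pWorker(ctx context.Context)     { s.tick(ctx, 100*time.Millisecond) }
func (s *syncer) pendingWorker(ctx context.Context) { s.tick(ctx, time.Second) }

// tick is a placeholder loop body honoring ctx.Done().
func (s *syncer) tick(ctx context.Context, every time.Duration) {
	t := time.NewTicker(every)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			// one unit of work (tryFetchFromDA, tryFetchFromP2P, processPendingEvents, ...)
		}
	}
}
```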
- `processPendingEvents` replays cached events once the next height becomes available.
- `trySyncNextBlock` validates, executes, and persists the block via the execution client and shared store (see the sketch after this list).
  - Persists the block through a `store.Store` batch, bumps height/state, and marks headers/data as seen to enforce sequential progress and metrics updates.
- `tryFetchFromDA` manages DA backoff windows and advances `daHeight` on success.
- `tryFetchFromP2P` reads the latest height from both header and data go-header stores, enqueueing any ranges the node has not yet processed.
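A sketch of the sequential-apply step described above. The `executor`, `blockStore`, and `cacheManager` interfaces are hypothetical narrow views of the execution client, the batched `store.Store`, and the cache manager; error handling and metrics are omitted.

```go
package sketch

import "context"

type block struct{ Height uint64 }

type executor interface {
	ExecuteBlock(ctx context.Context, b block) error
}

type blockStore interface {
	SaveBlock(ctx context.Context, b block) error // persisted in a single batch in the real code
}

type cacheManager interface {
	NextBlock(height uint64) (block, bool) // header and data both available for this height?
	MarkSeen(height uint64)
}

type syncer struct {
	height uint64
	exec   executor
	store  blockStore
	cache  cacheManager
}

// trySyncNextBlock applies blocks strictly in order, stopping at the first
// height for which header or data is still missing.
func (s *syncer) trySyncNextBlock(ctx context.Context) error {
	for {
		b, ok := s.cache.NextBlock(s.height + 1)
		if !ok {
			return nil // wait for more events to fill the gap
		}
		if err := s.exec.ExecuteBlock(ctx, b); err != nil {
			return err // surfaced via errorCh in the real syncer
		}
		if err := s.store.SaveBlock(ctx, b); err != nil {
			return err
		}
		s.height = b.Height        // bump height/state
		s.cache.MarkSeen(b.Height) // enforce sequential progress
	}
}
```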
The block syncer relies on a handful of queues and shared stores to keep the node in sync:
- `heightInCh` – Buffered queue that carries `common.DAHeightEvent` values from both the P2P handler and DA retriever into `processLoop`.
- `cache.Manager` – In-memory structure that tracks pending events and deduplicates headers/data that arrive out of order (see the sketch after this list).
- `headerStore` / `dataStore` – go-header stores exposed by the sync services. Aggregators append to them when producing blocks; followers poll them in `tryFetchFromP2P` to learn about new ranges.
- `errorCh` – Channel surfaced to the higher-level block components so critical execution failures inside the syncer can halt the node cleanly.
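The park-and-deduplicate role of the cache can be pictured with a toy version. The real `cache.Manager` tracks headers and data separately and is concurrency-safe; this sketch only shows the dedup/park idea.

```go
package sketch

// DAHeightEvent is a simplified stand-in for common.DAHeightEvent.
type DAHeightEvent struct {
	Height uint64
	// header/data payloads elided
}

// pendingCache parks out-of-order events and drops duplicates.
type pendingCache struct {
	pending map[uint64]DAHeightEvent // events waiting for earlier heights
	applied map[uint64]bool          // heights already executed
}

func newPendingCache() *pendingCache {
	return &pendingCache{
		pending: map[uint64]DAHeightEvent{},
		applied: map[uint64]bool{},
	}
}

// Add parks an event unless it was already applied or already parked.
func (c *pendingCache) Add(evt DAHeightEvent) {
	if c.applied[evt.Height] {
		return
	}
	if _, ok := c.pending[evt.Height]; ok {
		return
	}
	c.pending[evt.Height] = evt
}

// Take pops the event for the next expected height, if present. This is what
// processPendingEvents drains once the gap before it has been filled.
func (c *pendingCache) Take(next uint64) (DAHeightEvent, bool) {
	evt, ok := c.pending[next]
	if ok {
		delete(c.pending, next)
		c.applied[next] = true
	}
	return evt, ok
}
```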
- Aggregator executors call `WriteToStoreAndBroadcast`, or remote peers gossip new headers and block data through the sync services. `SyncService` instances persist the payload in the prefixed go-header stores and broadcast it over libp2p.
- Follower syncers observe the updated store heights, fetch any missing data via P2P or the DA layer, and enqueue events on `heightInCh`.
- The syncer executes the block via the execution client, writes it to `store.Store` using a batch, updates in-memory state, and records metrics.
- For DA-sourced events, the syncer republishes the block by calling `WriteToStoreAndBroadcast` on the header and data services (`block/internal/syncing/syncer.go:389-392`) so gossip peers stay updated (see the sketch after this list).
- Successfully applied blocks are now visible to both the local node and the sync services, keeping aggregator and follower paths in sync.
- Aggregator nodes additionally submit headers/data to the DA layer for finality.
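The republish step can be sketched as two calls through the same broadcast contract, one per sync service. The field names and the `Broadcaster` shape here are approximations of the real `block/internal/common` types.

```go
package sketch

import "context"

// Broadcaster approximates the slim contract exposed by the sync services.
type Broadcaster[H any] interface {
	WriteToStoreAndBroadcast(ctx context.Context, payload H) error
}

// Placeholder payload types.
type (
	SignedHeader struct{}
	Data         struct{}
)

type syncer struct {
	headerBroadcaster Broadcaster[*SignedHeader]
	dataBroadcaster   Broadcaster[*Data]
}

// republish pushes a DA-sourced block back out through both sync services so
// gossip peers see it even though it never arrived over P2P.
func (s *syncer) republish(ctx context.Context, h *SignedHeader, d *Data) error {
	if err := s.headerBroadcaster.WriteToStoreAndBroadcast(ctx, h); err != nil {
		return err
	}
	return s.dataBroadcaster.WriteToStoreAndBroadcast(ctx, d)
}
```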
The sync package is consumed by both the block executor (aggregator mode) and the block syncer (follower mode):
- Aggregator nodes – `node/full.go:101` constructs `block.NewAggregatorComponents`, which in turn creates `block/internal/executing.Executor`. After the executor commits a block, it calls `WriteToStoreAndBroadcast` on the header and data services (`block/internal/executing/executor.go:415`). This persists the block in the shared store and gossips it to peers.
- Follower nodes – `node/full.go:116` builds `block.NewSyncComponents`, wiring the same sync services into `block/internal/syncing.Syncer`. The syncer consumes updates written by the services (`block/internal/syncing/syncer.go:77`) and merges them with DA retrieval to hydrate local state.
- Common broadcaster contract – Both block paths depend only on the slim `block/internal/common.Broadcaster` interface, so alternate sync implementations can be plugged in as long as they expose `WriteToStoreAndBroadcast` and `Store` (see the sketch after this list).
- Execution engine boundary – Because the sync services operate on generic header types, swapping execution engines only requires satisfying the `core/execution.Executor` interface; the sync plumbing remains unchanged.
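Because the dependency is only on that slim contract, an alternate sync implementation just has to satisfy it. The interface below is approximated from the description above (the real definition lives in `block/internal/common` and may differ in its exact method set), and `noopBroadcaster` is a hypothetical drop-in used here only to illustrate the substitution point.

```go
package sketch

import "context"

// Broadcaster approximates the contract the block components depend on.
type Broadcaster[H any] interface {
	WriteToStoreAndBroadcast(ctx context.Context, payload H) error
}

// noopBroadcaster satisfies the contract without any networking, e.g. for
// tests or a stripped-down sync path.
type noopBroadcaster[H any] struct{}

func (noopBroadcaster[H]) WriteToStoreAndBroadcast(ctx context.Context, payload H) error {
	return nil // persist-and-gossip elided
}

// The executor and syncer only see the interface, so swapping implementations
// never touches their code.
var _ Broadcaster[string] = noopBroadcaster[string]{}
```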
- `github.com/ipfs/go-datastore` - Core datastore interface
- `github.com/ipfs/go-datastore/sync` - Thread-safe datastore wrapper
- `github.com/celestiaorg/go-header` - Header synchronization library
- `github.com/libp2p/go-libp2p` - P2P networking stack
- `github.com/evstack/ev-node/pkg/da/types` - Data Availability interface and types
```mermaid
sequenceDiagram
participant P as Proposer Node
participant DA as Data Availability Layer
participant FN as Full Node
P->>P: Create Block
P->>DA: Submit Header to DA
P->>FN: Broadcast Header (P2P)
P->>FN: Broadcast Data (P2P)
FN->>FN: Receive Header via P2P
FN->>FN: Store Header
FN->>FN: Receive Data via P2P
FN->>FN: Store Data
FN->>FN: Validate and Apply Block
alt Header missing
FN->>DA: Retrieve Header
end
alt Data missing
FN->>P: Request Data via P2P
end
```