
Execution Events SDK: Ultra Low-Latency On-Chain Events


Introduction

⚠️ Warning

This blog post contains example code to demonstrate Monad features and concepts. The examples illustrate how to use the features, in their simplest form.

These implementations are educational examples, and have not been tested or audited. They are likely to have significant errors and security vulnerabilities. They should not be relied on for any purpose. Do not use these examples in a production environment without completing your own audits and application of best practices.

Introducing the Monad Execution Events SDK, a toolkit for processing real-time on-chain data, and Monode, an example application leveraging this tool.


Background

Through many technical innovations, Monad achieves performance orders of magnitude greater than legacy blockchains, while maintaining full EVM compatibility and strong decentralization.

This enables:

  • Low transaction fees, responsive UX, and global scale
  • Seamless portability from existing EVM applications using familiar tooling

Monad also aims to deliver best-in-class interfaces for developers and power users to receive updates about on-chain activity. Developers building optimized systems shouldn't have to poll via REST or subscribe to a WebSocket stream; there should be a better way.

Category Labs released the Monad Execution Events SDK to solve this - providing ultra-low latency access to execution events by reading directly from a Monad node's shared memory.

The Monad Foundation engineering team built Monode - a real-time web application that visualizes block statuses and execution times, network activity, token swaps and transfers, and hot addresses - to demonstrate the SDK.

This post covers the implementation details.

How the SDK Works

As blocks are executed in the execution daemon, a Monad full node writes the resulting execution events to a shared-memory ring buffer.

A program utilizing the Execution Events SDK runs in a sidecar process on the same machine, reading directly from this ring buffer. This sidesteps network latency entirely, enabling the lowest possible ingestion latency.

Backend Architecture

The Monode application architecture can be summarized in the following diagram:

exec-events-app-architecture

The Full Node, Event Ring and Backend components each reside on the same machine. Together, these components power the Monode application. In this architecture, the Backend acts as an intermediary between the execution layer and the client: it consumes a high-volume stream of raw execution events, applies filtering and aggregation logic, and forwards a curated, client-friendly stream over WebSockets.

Full Node

The Full Node runs a Monad Mainnet full node process, which includes the MonadBFT and Monad Execution daemon processes.

MonadBFT receives Proposed blocks from the broader network and sends valid block proposals to Monad Execution. Upon receiving a block proposal, Monad Execution begins to speculatively execute it and outputs the resulting execution events to the Event Ring.

Event Ring

The Event Ring is a pre-allocated region of memory external to the full node process. Think of it as a fixed-size, circular array.

As Monad Execution executes a block proposal, it writes events resulting from the execution to this region of memory. In parallel, the Backend process reads data from the Event Ring using the SDK.
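To make the circular-array mental model concrete, the sketch below shows a toy single-writer ring buffer with a monotonically increasing write sequence number and a reader that addresses events by that sequence number. It is purely illustrative: the names, layout, and API are not the SDK's actual shared-memory format.

pub struct ToyEventRing<T> {
    /// Pre-allocated, fixed-capacity storage (the "circular array")
    slots: Vec<Option<T>>,
    /// Total number of events ever written; the write position is seqno % capacity
    write_seqno: u64,
}

impl<T: Clone> ToyEventRing<T> {
    pub fn new(capacity: usize) -> Self {
        Self { slots: vec![None; capacity], write_seqno: 0 }
    }

    /// Writer side (the execution daemon): append an event, overwriting the
    /// oldest slot once the ring wraps around.
    pub fn push(&mut self, event: T) {
        let idx = (self.write_seqno % self.slots.len() as u64) as usize;
        self.slots[idx] = Some(event);
        self.write_seqno += 1;
    }

    /// Reader side (the sidecar): read by sequence number. Returns None if the
    /// event has not been written yet, or if the reader fell so far behind
    /// that the slot has already been overwritten.
    pub fn read(&self, seqno: u64) -> Option<&T> {
        let capacity = self.slots.len() as u64;
        if seqno >= self.write_seqno || self.write_seqno - seqno > capacity {
            return None;
        }
        self.slots[(seqno % capacity) as usize].as_ref()
    }
}

In the real system the writer (the execution daemon) and the reader (the Backend) are separate processes sharing this memory region.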

Backend

The Backend is a distinct process which runs alongside the Full Node, serving as the layer where the example application's business logic resides. It consists of two subcomponents: the Event Listener and the WebSocket Server.

Event Listener

The Event Listener is dedicated to reading events from the Event Ring and forwarding them to the WebSocket Server over a tokio::sync::mpsc::channel.

This task leverages the SDK to efficiently parse and format the event data for the WebSocket Server.
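A minimal sketch of this task is shown below. The next_event closure stands in for however the SDK surfaces the next parsed event from the Event Ring (a placeholder, not the SDK's actual API), and EventData is the envelope shown in the Event Types section below.

use tokio::sync::mpsc;

// Sketch of the Event Listener task: pull events off the Event Ring via the
// SDK (abstracted here as `next_event`) and forward each one to the
// WebSocket Server over an mpsc channel.
async fn run_event_listener(
    mut next_event: impl FnMut() -> Option<EventData>,
    tx: mpsc::Sender<EventData>,
) {
    while let Some(event_data) = next_event() {
        // If the WebSocket Server task has shut down, stop reading.
        if tx.send(event_data).await.is_err() {
            break;
        }
    }
}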

WebSocket Server

The WebSocket Server receives events from the Event Listener and forwards them to connected clients.

The current implementation sends a subset of the available events to power the frontend.
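A sketch of the fan-out side is shown below (illustrative only, not the Monode code). It assumes each connected client gets its own tokio broadcast receiver, that EventData and its payload derive serde::Serialize, and that the connection is a tokio-tungstenite WebSocketStream.

use futures_util::SinkExt;
use tokio::net::TcpStream;
use tokio::sync::broadcast;
use tokio_tungstenite::{tungstenite::Message, WebSocketStream};

// Sketch of the per-client forwarding task: serialize each event to JSON and
// push it over the client's WebSocket connection.
async fn forward_events_to_client(
    mut events: broadcast::Receiver<EventData>,
    mut ws: WebSocketStream<TcpStream>,
) {
    loop {
        let event_data = match events.recv().await {
            Ok(event_data) => event_data,
            // A slow client that lags the broadcast channel simply misses
            // the skipped events.
            Err(broadcast::error::RecvError::Lagged(_)) => continue,
            Err(broadcast::error::RecvError::Closed) => break,
        };

        let Ok(json) = serde_json::to_string(&event_data) else { continue };
        if ws.send(Message::Text(json.into())).await.is_err() {
            break; // client disconnected
        }
    }
}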

Event Types

The execution daemon emits several event types during block execution. The Monode application uses a subset of the available events to power the frontend.

These events arrive in approximately the following order:

  • BlockStart: Execution has received a new block proposal from the network and is now in the Proposed state
  • TxnHeaderStart: A transaction has begun execution
  • AccountAccess: A transaction has accessed an account
  • StorageAccess: A transaction has accessed a storage slot
  • TxnEvmOutput: A transaction has completed execution and its outputs have been computed
  • TxnLog: A transaction has emitted a log
  • TxnCallFrame: A transaction has called an account
  • TxnEnd: A transaction and its outputs have been fully processed
  • BlockEnd: A block proposal (and all of its constituent transactions) has completed execution
  • BlockQC: A proposed block has passed the first round of voting in the network and is now in the Voted state
  • BlockFinalized: A voted block has been added to the canonical blockchain and is now in the Finalized state
  • BlockVerified: A finalized block's Merkle root has been verified by the network and is now in the Verified state

Each of these events includes a payload with details related to the event.

The backend wraps each event in the following envelope before sending to the client:

pub struct EventData {
    pub timestamp_ns: u64,
    pub event_name: EventName,
    pub seqno: u64,
    pub block_number: Option<u64>,
    pub txn_idx: Option<usize>,
    pub txn_hash: Option<[u8; 32]>,
    pub payload: ExecEvent,
}

The envelope provides additional contextual information for each event. The event payload itself is stored in the payload attribute.

Frontend Components

Network Activity

exec-events-app-network-activity

The Network Activity component displays live transactions-per-second (TPS), calculated block-by-block using the SDK.

Events

Live TPS is derived from two execution events:

  • BlockStart
  • TxnHeaderStart

Backend Implementation

Live TPS is computed on the backend and published to clients.

The backend instantiates a TPSTracker struct, which maintains a rolling window of transaction counts for the previous 3 blocks, plus the count for the currently executing block. When a TxnHeaderStart event is witnessed, the tracker increments the current block's count. When a BlockStart event is witnessed, the tracker finalizes the current block, shifts the window, computes TPS, and resets for the new block.

The TPS formula sums the two older blocks in the rolling window plus half of the most recently completed block, for 2.5 blocks in total. With Monad's ~400ms block time, this 2.5-block window approximates 1 second of activity.

#[derive(Default)]
struct TPSTracker {
    // Rolling window of the 3 most recent blocks
    block_1_txs: usize,
    block_2_txs: usize,
    block_3_txs: usize,
    current_tx_count: usize,
}

impl TPSTracker {
    pub fn new() -> Self {
        Self::default()
    }

    /// Increment tx count for the current block
    pub fn record_tx(&mut self) {
        self.current_tx_count += 1;
    }

    /// Shift the rolling window and compute TPS
    /// Called when a new block starts execution
    pub fn advance_block_and_get_tps(&mut self) -> usize {
        self.block_1_txs = self.block_2_txs;
        self.block_2_txs = self.block_3_txs;
        self.block_3_txs = self.current_tx_count;
        self.current_tx_count = 0;
        // 2.5 blocks ≈ 1 second at 400ms block time
        self.block_1_txs + self.block_2_txs + (self.block_3_txs / 2)
    }
}

let mut tps_tracker = TPSTracker::new();

loop {
    if let Ok(event_data) = event_receiver.recv() {
        let mut tps: Option<usize> = None;

        match event_data.event_name {
            EventName::TxnHeaderStart => {
                tps_tracker.record_tx();
            }
            EventName::BlockStart => {
                tps = Some(tps_tracker.advance_block_and_get_tps());
            }
            _ => {}
        }

        if let Some(tps) = tps {
            let _ = event_broadcast_sender.send(EventDataOrMetrics::TPS(tps));
        }
    }
}

Block States

exec-events-app-block-states

The Block States component shows blocks transitioning between the various commitment states over the course of MonadBFT consensus:

  • Proposed
  • Voted
  • Finalized
  • Verified

MonadBFT's speculative finality enables developers to build more responsive applications.

Events

To render this component, the client subscribes to the following events:

  • BlockStart
  • BlockQC
  • BlockFinalized
  • BlockVerified

The corresponding payloads sent by the WebSocket server are as follows:

struct BlockStart {
    block_number: u64,
    block_id: B256,
    round: u64,
    epoch: u64,
    parent_eth_hash: B256,
    timestamp: u64,
    beneficiary: Address,
    gas_limit: u64,
    base_fee_per_gas: U256,
}

struct BlockQC {
    block_id: B256,
    block_number: u64,
    round: u64,
}

struct BlockFinalized {
    block_id: B256,
    block_number: u64,
}

struct BlockVerified {
    block_number: u64,
}

Client Implementation - First Version

In the happy path, the client subscribes to these events and simply updates the block commitment state on the UI as the notifications are sent:

const blockNumberToCommitment = new Map()

function onBlockEvent(event) {
  switch (event.event_name) {
    case 'BlockStart':
      blockNumberToCommitment.set(event.payload.block_number, 'Proposed')
      break
    case 'BlockQC':
      blockNumberToCommitment.set(event.payload.block_number, 'Voted')
      break
    case 'BlockFinalized':
      blockNumberToCommitment.set(event.payload.block_number, 'Finalized')
      break
    case 'BlockVerified':
      blockNumberToCommitment.set(event.payload.block_number, 'Verified')
      break
    default:
      break
  }
}

Client Implementation - Handling Speculative Finality

Notice the block_id field included in the BlockStart, BlockQC, and BlockFinalized payloads. This field uniquely identifies a block proposal as it transitions through the various commitment states. Multiple blocks can exist at the same height in the Proposed and Voted states, though only one will reach Finalized in the canonical chain. To handle this effectively, we track speculative blocks by their block_id rather than block_number. Once a BlockFinalized event is received, the corresponding block_id becomes the canonical block, and any other speculative blocks at the same height are discarded. The client can then revert to referencing blocks by block_number.

The updated client code should be similar to the following:

const blockNumberToBlockIds = new Map()

function onBlockEvent(event) {
  let blockIds
  let blockId
  switch (event.event_name) {
    case 'BlockStart':
      blockIds = blockNumberToBlockIds.get(event.payload.block_number) || []
      blockIds.push({
        blockId: event.payload.block_id,
        commitment: 'Proposed',
      })

      blockNumberToBlockIds.set(event.payload.block_number, blockIds)
      break
    case 'BlockQC':
      blockIds = blockNumberToBlockIds.get(event.payload.block_number)
      for (let i = 0; i < blockIds.length; ++i) {
        if (blockIds[i]['blockId'] == event.payload.block_id) {
          blockIds[i].commitment = 'Voted'
          break
        }
      }

      blockNumberToBlockIds.set(event.payload.block_number, blockIds)
      break
    case 'BlockFinalized':
      blockIds = blockNumberToBlockIds.get(event.payload.block_number)
      blockId = blockIds.find((x) => x.blockId == event.payload.block_id)

      blockId.commitment = 'Finalized'
      blockNumberToBlockIds.set(event.payload.block_number, [blockId])
      break
    case 'BlockVerified':
      blockId = blockNumberToBlockIds.get(event.payload.block_number)[0]
      blockId.commitment = 'Verified'

      blockNumberToBlockIds.set(event.payload.block_number, [blockId])
      break
    default:
      break
  }
}

Block Execution Times

exec-events-app-block-execution-times

The Block Execution Times component shows the amount of time it takes for the Monad Execution daemon to execute a block and its transactions end-to-end.

Events

To render this component, the client subscribes to the following events:

  • BlockStart
  • BlockEnd
  • TxnHeaderStart
  • TxnEnd
  • TxnEvmOutput (for gas used/status information)

The corresponding payloads sent by the WebSocket server are as follows:

struct BlockStart {
    block_number: u64,
    block_id: B256,
    round: u64,
    epoch: u64,
    parent_eth_hash: B256,
    timestamp: u64,
    beneficiary: Address,
    gas_limit: u64,
    base_fee_per_gas: U256,
}

struct BlockEnd {
    eth_block_hash: B256,
    state_root: B256,
    receipts_root: B256,
    logs_bloom: Bytes,
    gas_used: u64,
}

struct TxnHeaderStart {
    txn_index: usize,
    txn_hash: B256,
    sender: Address,
    txn_type: u8,
    chain_id: U256,
    nonce: u64,
    gas_limit: u64,
    max_fee_per_gas: U256,
    max_priority_fee_per_gas: U256,
    value: U256,
    data: Bytes,
    to: Address,
    is_contract_creation: bool,
    r: U256,
    s: U256,
    y_parity: bool,
    access_list_count: u32,
    auth_list_count: u32,
}

struct TxnEnd;

Parallel EVM Benefits

In traditional EVM implementations, total block execution time is roughly equal to the sum of the individual transaction execution times. However, thanks to Monad's parallel EVM implementation, this is not necessarily true; it is often the case that the execution time of a block is less than the sum of the execution times of each transaction within the block.

By comparing these two measurements, we can quantify the performance gains from parallel execution.

Client Implementation

To calculate the impact of parallel execution, the client should do the following:

  • Calculate the execution time of the block as a whole, using the timestamps of the BlockStart and the BlockEnd events
  • Calculate the execution time of each individual transaction in the block using the timestamps of the TxnHeaderStart and TxnEnd events, and sum these together

The difference between the sum of the constituent transaction execution times and the block's total execution time is the time saved by parallel execution.

const execState = {
  blockStartNs: BigInt(0),
  txnIndexToStartTimestampNs: new Map(),
  txnTotalExecutionTimeNs: BigInt(0),
}

function onBlockExecutionEvent(event) {
  switch (event.event_name) {
    case 'BlockStart':
      execState.blockStartNs = BigInt(event.timestamp_ns)
      break
    case 'TxnHeaderStart':
      execState.txnIndexToStartTimestampNs.set(event.payload.txn_index, BigInt(event.timestamp_ns))
      break
    case 'TxnEnd':
      const startTimestampNs = execState.txnIndexToStartTimestampNs.get(event.txn_idx)
      const txnElapsedNs = BigInt(event.timestamp_ns) - startTimestampNs

      execState.txnTotalExecutionTimeNs += txnElapsedNs
      execState.txnIndexToStartTimestampNs.delete(event.txn_idx)
      break
    case 'BlockEnd':
      const blockExecutionTimeNs = BigInt(event.timestamp_ns) - execState.blockStartNs
      const parallelExecutionSavingsNs = execState.txnTotalExecutionTimeNs - blockExecutionTimeNs

      console.log(`Block execution time: ${blockExecutionTimeNs}ns`)
      console.log(`Total txn execution time: ${execState.txnTotalExecutionTimeNs}ns`)
      console.log(`Parallel execution savings: ${parallelExecutionSavingsNs}ns`)

      execState.txnTotalExecutionTimeNs = BigInt(0)
      break
    default:
      break
  }
}

There is one edge case to be aware of: when a block is empty, the block execution time will always be greater than the sum of the transaction execution times. This is because an empty block has zero transaction execution time but still incurs fixed overhead from the execution daemon.

Empty blocks don't benefit from parallel execution, in any case.

Swap & Transfer Tracker

exec-events-app-swap-and-transfer-tracker

The Swap & Transfer Tracker panel shows real-time events for a few core economic activities on Monad:

  • Native MON and wrapped MON transfers
  • MON/AUSD swaps on particular DEX pools
  • All swap route hops for four DEX aggregators

Events

Native MON Transfers

The TxnCallFrame event is emitted by the execution daemon whenever an EVM account or contract is called during transaction execution.

The event payload is as follows:

struct TxnCallFrame {
    txn_index: usize,
    depth: u32,
    caller: Address,
    call_target: Address,
    value: U256,
    input: Bytes,
    output: Bytes,
}

A native MON transfer triggers a TxnCallFrame event with a non-zero value field. This event stream includes all contract calls, regardless of the value transferred. To reduce traffic between our backend and each frontend listener, we applied some filtering.

Since we are only interested in native MON transfers, the Backend defines a custom NativeTransfer event and sends a message to the client only if:

  1. A TxnCallFrame event is witnessed
  2. The TxnCallFrame event has a non-zero value field

This produces a filtered stream of native MON transfers only, relieving pressure on the client.
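A sketch of that filter is shown below. The NativeTransfer message shape is hypothetical, and the ExecEvent::TxnCallFrame destructuring (variant and field names, plus the field types) is assumed from the payload above rather than taken from the SDK.

use alloy_primitives::{Address, U256};

// Hypothetical client-facing message emitted by the backend for the filtered stream.
pub struct NativeTransfer {
    pub from: Address,
    pub to: Address,
    pub value: U256,
}

// Sketch of the backend filter: only TxnCallFrame events that actually move
// MON (a non-zero `value`) become NativeTransfer messages.
fn filter_native_transfer(event_data: &EventData) -> Option<NativeTransfer> {
    if let ExecEvent::TxnCallFrame { call_frame, .. } = &event_data.payload {
        if !call_frame.value.is_zero() {
            return Some(NativeTransfer {
                from: call_frame.caller,
                to: call_frame.call_target,
                value: call_frame.value,
            });
        }
    }
    None
}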

Wrapped MON Transfers

WMON is implemented as an ERC20 contract. Transfers are detected via the Transfer event log:

event Transfer(address indexed from, address indexed to, uint256 value);

To listen for these events, the client subscribes to the TxnLog event which emits the following payload:

struct TxnLog {
    txn_index: usize,
    log_index: u32,
    address: Address,
    topics: Bytes,
    data: Bytes,
}

As with the TxnCallFrame case above, the Backend allows clients to filter TxnLog events by contract address and log topics, similar to the Ethereum JSON-RPC API's log filters. Given the event ABI, clients can decode the log data using popular Ethereum client libraries.
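As an illustration of that filter (a sketch, not the Monode code), the check for a WMON Transfer log can match the log's address against the WMON contract and topics[0] against the keccak-256 hash of the event signature, then read from/to out of the indexed topics and value out of the data word. It assumes the topics buffer is a flat concatenation of 32-byte words, and uses alloy_primitives for hashing and types.

use alloy_primitives::{keccak256, Address, B256, U256};

/// Decoded ERC-20 Transfer (illustrative shape).
pub struct Erc20Transfer {
    pub from: Address,
    pub to: Address,
    pub value: U256,
}

/// Sketch: decode a WMON Transfer(address,address,uint256) log from a
/// TxnLog-style payload. `topics` is assumed to be the concatenated 32-byte
/// topic words; `data` holds the ABI-encoded non-indexed parameters.
fn decode_wmon_transfer(
    wmon_address: Address,
    log_address: Address,
    topics: &[u8],
    data: &[u8],
) -> Option<Erc20Transfer> {
    // Only logs emitted by the WMON contract are of interest.
    if log_address != wmon_address {
        return None;
    }
    // topics[0] must be keccak256 of the canonical event signature.
    let transfer_topic: B256 = keccak256(b"Transfer(address,address,uint256)");
    if topics.len() < 96 || &topics[..32] != transfer_topic.as_slice() || data.len() != 32 {
        return None;
    }
    // `from` and `to` are indexed, so they live in topics 1 and 2 (addresses
    // occupy the last 20 bytes of each 32-byte word); `value` is the single
    // 32-byte data word.
    Some(Erc20Transfer {
        from: Address::from_slice(&topics[44..64]),
        to: Address::from_slice(&topics[76..96]),
        value: U256::from_be_slice(data),
    })
}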

DEX Swaps

Similar to WMON transfers, DEX swaps are detected via event logs.

The Monode application listens for swaps on the following protocols:

  • DEX Pools
    • Uniswap V4 MON/AUSD
    • PancakeSwap V3 MON/AUSD
  • Aggregators (all routes)
    • Monorail
    • Kuru Flow
    • KyberSwap
    • OpenOcean

Each protocol emits its own swap event with varying fields for amounts, fees, and routing info. Clients can decode these using standard EVM log decoding with libraries like ethers.js or viem: compute the event signature hash, match against topics[0], then ABI-decode the indexed and non-indexed parameters.

Event signatures by protocol

Uniswap V4

event Swap(
  bytes32 indexed id,
  address indexed sender,
  int128 amount0,
  int128 amount1,
  uint160 sqrtPriceX96,
  uint128 liquidity,
  int24 tick,
  uint24 fee
);

PancakeSwap V3

event Swap(
  address indexed sender,
  address indexed recipient,
  int256 amount0,
  int256 amount1,
  uint160 sqrtPriceX96,
  uint128 liquidity,
  int24 tick,
  uint128 protocolFeesToken0,
  uint128 protocolFeesToken1
);

Monorail

event Aggregated(
  address indexed sender,
  address indexed tokenIn,
  address indexed tokenOut,
  uint256 amountIn,
  uint256 amountOut,
  uint256 protocolFeeAmount,
  uint256 referrerFeeAmount,
  uint64 referrer,
  uint64 quote
);

Kuru Flow

event KuruFlowSwap(
  address indexed user,
  address indexed referrer,
  address tokenIn,
  address tokenOut,
  bool isFeeInInput,
  uint256 amountIn,
  uint256 amountOut,
  uint256 referrerFeeBps,
  uint256 totalFeeBps
);

KyberSwap

event Swapped(
  address sender,
  address srcToken,
  address dstToken,
  address dstReceiver,
  uint256 spentAmount,
  uint256 returnAmount
);

OpenOcean

event Swapped(
  address indexed sender,
  address indexed srcToken,
  address indexed dstToken,
  address dstReceiver,
  uint256 amount,
  uint256 spentAmount,
  uint256 returnAmount,
  uint256 minReturnAmount,
  uint256 guaranteedAmount,
  address referrer
);
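Because these are all ordinary EVM logs, routing a TxnLog to the right protocol reduces to a lookup from topics[0] to a protocol name. The sketch below (assuming alloy_primitives for hashing) builds that table from the canonical signatures above.

use std::collections::HashMap;

use alloy_primitives::{keccak256, B256};

/// Build a topics[0] -> protocol lookup table from the canonical event
/// signatures listed above. A TxnLog whose first topic matches one of these
/// hashes is a swap on the corresponding protocol.
fn swap_topic_table() -> HashMap<B256, &'static str> {
    [
        ("Uniswap V4", "Swap(bytes32,address,int128,int128,uint160,uint128,int24,uint24)"),
        ("PancakeSwap V3", "Swap(address,address,int256,int256,uint160,uint128,int24,uint128,uint128)"),
        ("Monorail", "Aggregated(address,address,address,uint256,uint256,uint256,uint256,uint64,uint64)"),
        ("Kuru Flow", "KuruFlowSwap(address,address,address,address,bool,uint256,uint256,uint256,uint256)"),
        ("KyberSwap", "Swapped(address,address,address,address,uint256,uint256)"),
        ("OpenOcean", "Swapped(address,address,address,address,uint256,uint256,uint256,uint256,uint256,address)"),
    ]
    .into_iter()
    .map(|(protocol, signature)| (keccak256(signature.as_bytes()), protocol))
    .collect()
}

For the two pool-specific entries, the log address would additionally be matched against the MON/AUSD pool contracts, so that only swaps in those pools are surfaced.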

Hot Accounts and Hot Storage Slots Bubblemaps

exec-events-app-hot-accounts-and-storage-slots-bubblemaps

The Hot Accounts and Hot Storage Slots Bubblemaps show the most frequently accessed accounts and storage slots across the chain in the last 5 minutes.

Events

For the Hot Accounts Bubblemap, the AccountAccess event is used.

For the Hot Storage Slots Bubblemap, the StorageAccess event is used.

Abstraction from the Client

As chain usage grows, tracking every accessed account and storage slot becomes infeasible for clients due to increasing memory requirements. Furthermore, at Monad's throughput, sending an unfiltered stream of these events would quickly overwhelm client connections. Rather than sending a raw stream of events, the backend employs a probabilistic algorithm which aggregates these events and sends "snapshots" of the highly contested accounts and slots.

Backend Implementation

The backend implements a version of the Misra-Gries summary streaming algorithm to address these constraints. This algorithm enables the backend to approximate the relative frequencies of items (accounts or slots) using a fixed amount of memory.

Frequency Algorithm - Pseudocode

Let N be the maximum number of items to track in memory.

  1. Create an empty map of fixed size N where:

    • The key is one of:
      • An account address (for the accounts bubblemap)
      • A (account_address, storage_slot_position) tuple (for the storage slots bubblemap)
    • The value is a counter for the occurrences of the corresponding key
  2. For each event in the event stream:

    • If the key exists in the map, increment the associated counter
    • Else, if the map contains fewer than N entries, add the item to the map with a counter of one
    • Else, if the map already has N entries, decrement the counter of every existing entry
      • If any counter reaches zero during this process, remove that entry from the map
      • If any entries were removed in the previous step, add the new item to the map with a counter of one

Frequency Algorithm - Rust Implementation

pub struct AccessEntry<T> {
    pub key: T,
    pub count: u64,
}

pub struct TopKTracker<T> {
    /// Maximum number of items to track
    capacity: usize,
    /// Map of item -> count
    counts: HashMap<T, u64>,
}

impl<T: Hash + Eq + Clone> TopKTracker<T> {
    /// Create a new TopKTracker with the given capacity
    pub fn new(capacity: usize) -> Self {
        Self {
            capacity,
            counts: HashMap::new(),
        }
    }

    /// Record an occurrence of an item
    pub fn record(&mut self, item: T) {
        if let Some(count) = self.counts.get_mut(&item) {
            // Item already tracked, increment its count
            *count += 1;
        } else if self.counts.len() < self.capacity {
            // Still have space, add new item
            self.counts.insert(item, 1);
        } else {
            // At capacity - use Misra-Gries algorithm
            // Decrement all counts and remove zeros
            let mut to_remove = Vec::new();
            for (key, count) in self.counts.iter_mut() {
                *count = count.saturating_sub(1);
                if *count == 0 {
                    to_remove.push(key.clone());
                }
            }

            // Remove items with zero count
            for key in to_remove {
                self.counts.remove(&key);
            }

            if self.counts.len() < self.capacity {
                // Add the new item if new space was created
                self.counts.insert(item, 1);
            }
        }
    }

    /// Get the top k items by count
    pub fn top_k(&self, k: usize) -> Vec<AccessEntry<T>> {
        let mut items: Vec<_> = self.counts.iter().map(|(k, v)| AccessEntry {
            key: k.clone(),
            count: *v
        }).collect();

        // Sort by count descending
        items.sort_by(|a, b| b.count.cmp(&a.count));

        // Take top k
        items.truncate(k);
        items
    }

    pub fn reset(&mut self) {
        self.counts.clear();
    }
}

The Rust implementation is templated to generically support both the account and storage slot tuple keys as described above.

At program start, the WebSocket server initializes empty TopKTracker instances for each of these access types and sends aggregated snapshots of the top 10 entries at the end of each block. This approach delivers the desired metrics without overwhelming client connections.

The code is as follows:

pub struct TopAccessesData {
    pub account: Vec<AccessEntry<Address>>,
    pub storage: Vec<AccessEntry<(Address, B256)>>,
}

let mut account_accesses = TopKTracker::<Address>::new(10_000);
let mut storage_accesses = TopKTracker::<(Address, B256)>::new(10_000);

let mut accesses_reset_interval = tokio::time::interval(std::time::Duration::from_secs(5 * 60));

loop {
    tokio::select! {
        // Event received from the Event Listener
        Some(event_data) = event_receiver.recv() => {
            if let EventName::AccountAccess = event_data.event_name {
                // Record occurrence of an account access
                if let ExecEvent::AccountAccess {
                    account_access,
                    ..
                } = event_data.payload {
                    let address = Address::from_slice(&account_access.address.bytes);
                    account_accesses.record(address);
                } else {
                    unreachable!();
                }
            } else if let EventName::StorageAccess = event_data.event_name {
                // Record occurrence of a storage slot access
                if let ExecEvent::StorageAccess {
                    storage_access,
                    ..
                } = event_data.payload {
                    let address = Address::from_slice(&storage_access.address.bytes);
                    let key = B256::from_slice(&storage_access.key.bytes);
                    storage_accesses.record((address, key));
                } else {
                    unreachable!();
                }
            }

            // Send accesses update on BlockEnd events (after all access events are processed)
            let send_accesses_update = matches!(event_data.event_name, EventName::BlockEnd);

            let _ = event_broadcast_sender.send(EventDataOrAccesses::Event(event_data));

            if send_accesses_update {
                let top_accesses_data = TopAccessesData {
                    account: account_accesses.top_k(10),
                    storage: storage_accesses.top_k(10),
                };
                let _ = event_broadcast_sender.send(EventDataOrAccesses::TopAccesses(top_accesses_data));
            }
        },
        // Reset tick - clear access trackers every 5 minutes
        _ = accesses_reset_interval.tick() => {
            account_accesses.reset();
            storage_accesses.reset();
        }
    }
}

Conclusion

The Execution Events SDK enables a new class of real-time applications on Monad—from live dashboards to responsive trading interfaces—by eliminating the data retrieval bottleneck.

To get started:

Questions or feedback? Reach out on Discord or open an issue on GitHub.
