Consensus Clients

| Client     | Version | Date     | DB Size  | RAM      | Notes |
|------------|---------|----------|----------|----------|-------|
| Teku       | 24.8.0  | Sep 2024 | ~84 GiB  | ~10 GiB  |       |
| Lighthouse | 4.5.0   | Jan 2024 | ~130 GiB | ~5 GiB   |       |
| Nimbus     | 24.1.1  | Jan 2024 | ~170 GiB | ~2-3 GiB |       |
| Prysm      | 4.1.1   | Jan 2024 | ~130 GiB | ~5 GiB   |       |
| Lodestar   | 1.13.0  | Jan 2024 | ~130 GiB | ~8 GiB   |       |

Notes on disk usage

  • Teku, Nimbus, Lodestar and Grandine continuously prune
  • Lighthouse and Prysm can be resynced in minutes with ./ethd resync-consensus to bring space usage back down; see the sketch after this list
  • Lighthouse is working on tree-states so that it can continuously prune as well
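
For illustration, a resync is a single command in eth-docker. The consensus client's database is removed and synced again, typically from a recent checkpoint, which is what makes this fast. A minimal sketch, assuming a standard eth-docker installation in your home directory:

```bash
# Sketch, assuming eth-docker is installed at ~/eth-docker.
# Removes the consensus client's DB and syncs it again,
# typically via checkpoint sync.
cd ~/eth-docker
./ethd resync-consensus
```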

Execution clients

For reference, here are disk, RAM and CPU requirements, as well as mainnet initial synchronization times, for different Ethereum execution clients.

Disk, RAM, CPU requirements

SSD, RAM and CPU use is after the initial sync, when keeping up with the head of the chain. 100% CPU is one core.

Please pay attention to the Version and Date. These are snapshots in time of client behavior. Initial state size increases over time, and execution clients are always working on improving their storage engines.

DB Size if not specified is for a full node. If you see two values like "1.2 TiB / 820 GiB", they are for a full node and for a node with pre-merge history expiry, respectively.

| Client     | Version     | Date     | DB Size              | DB Growth      | RAM       | Notes |
|------------|-------------|----------|----------------------|----------------|-----------|-------|
| Geth       | 1.15.11     | May 2025 | ~1.2 TiB / ~830 GiB  | ~7-8 GiB / week | ~8 GiB   |       |
| Nethermind | 1.31.10     | May 2025 | ~1.1 TiB / ~740 GiB  | ~11 GiB / week  | ~7 GiB   | With HalfPath, can automatically prune online at ~350 GiB free |
| Besu       | v25.4.1     | May 2025 | ~1.35 TiB / ~865 GiB | ~7-8 GiB / week | ~10 GiB  |       |
| Reth       | 1.3.12      | May 2025 | ~1.6 TiB / ??        | ~7-8 GiB / week | ~9 GiB   |       |
| Erigon     | 3.0.3       | May 2025 | ~1.0 TiB / ~650 GiB  | ~7-8 GiB / week | See notes | Erigon will have the OS use all available RAM as a DB cache during post-sync operation, but this RAM is free to be used by other programs as needed. During sync, it may run out of memory on machines with 32 GiB or less |
| Nimbus     | 0.1.0-alpha | May 2025 | ?? / 755 GiB         | ??              | ??       | With Era1 import |

Notes on disk usage

  • Reth, Besu, Geth, Erigon and Nimbus continuously prune
  • Nethermind: DB size can be reduced with an online prune once it has grown too large; see the sketch below. Keep an eye on the Paprika and Path work
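
As a rough sketch of what triggering that online prune automatically can look like, Nethermind's full pruning can be tied to free disk space via its Pruning configuration options. The option names below are taken from Nethermind's Pruning config section as an assumption; verify them against your version's documentation before relying on them.

```bash
# Sketch only: start a full (online) prune automatically when free disk
# space drops below ~350 GiB (358400 MB). Verify these option names
# against your Nethermind version's docs.
nethermind \
  --Pruning.Mode Hybrid \
  --Pruning.FullPruningTrigger VolumeFreeSpace \
  --Pruning.FullPruningThresholdMb 358400
```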

Initial sync times

Please pay attention to the Version and Date. Newer versions might sync faster or slower.

These are initial syncs of a full node without history expiry. For clients that support it, snap sync was used; otherwise, full sync.

NB: All execution clients need to download state after getting blocks. If state isn't "in" yet, your sync is not done. This is a heavily disk-latency-dependent operation, which is why an HDD cannot be used for a node.

For Geth, you will see "State heal in progress" after the initial sync, which will persist for a few hours if latency is on the high side.

This should complete in under 4 hours. If it does not, or even goes on for a week or more, your disk latency is too high for Geth to "catch up" with state.
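
One way to watch this, as a sketch: attach to Geth's console and query its sync status. eth.syncing returns false once sync, including state heal, is complete; while syncing, recent Geth versions report progress fields in the returned object. The IPC path below is a placeholder for your own setup.

```bash
# Sketch: query Geth's sync status over IPC. The path is a placeholder.
# Returns false when fully synced; otherwise an object with sync progress.
geth attach --exec 'eth.syncing' /path/to/geth.ipc
```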

Cache sizes were left at their defaults in all tests.

| Client     | Version              | Date       | Test System        | Time Taken       | Notes |
|------------|----------------------|------------|--------------------|------------------|-------|
| Geth       | 1.15.10              | Apr 2025   | OVH Baremetal NVMe | ~5 hours         |       |
| Nethermind | 1.24.0               | Jan 2024   | OVH Baremetal NVMe | ~5 hours         | Ready to attest after ~1 hour |
| Besu       | v25.4.1              | May 2025   | OVH Baremetal NVMe | ~16 hours        | With history expiry |
| Erigon     | 3.0.3 with expiry PR | May 2025   | OVH Baremetal NVMe | ~2 hours         | With history expiry |
| Reth       | beta.1               | March 2024 | OVH Baremetal NVMe | ~2 days 16 hours |       |
| Nimbus     | 0.1.0-alpha          | May 2025   | OVH Baremetal NVMe | ~5.5 days        | With Era1 import |

Test Systems

IOPS is random read/write IOPS as measured by fio with "typical" DB parameters and a 150G test file, with no other processes running.

Specifically:

```bash
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
  --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75
rm test
```

If the test shows it would take hours to complete, feel free to cut it short once the IOPS display for the test looks steady.

150G was chosen to "break through" any caching stratagems the SSD uses for bursty writes. Execution clients write steadily, and the performance of an SSD under heavy write load is more important than its performance with bursty writes.

Read and write latencies can be measured with sudo ioping -D -c 30 /dev/<ssd-device> while the fio test is running.

Servers have been configured with noatime and no swap to improve latency.
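
As an illustration of that configuration, the following is a sketch; the device name and mount point are placeholders for your own system.

```bash
# Sketch only -- device and mount point are placeholders.
# In /etc/fstab, mount the node's data filesystem with noatime:
#   /dev/nvme0n1p1  /var/lib/ethereum  ext4  defaults,noatime  0  2

# Turn swap off now, and remove or comment out any swap entries
# in /etc/fstab so it stays off after a reboot:
sudo swapoff -a
```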

| Name               | RAM    | SSD Size | CPU        | r/w IOPS | r/w latency | Notes |
|--------------------|--------|----------|------------|----------|-------------|-------|
| OVH Baremetal NVMe | 32 GiB | 1.9 TB   | Intel Hexa | 177k/59k |             |       |

Getting better latency

Ethereum execution layer clients need decently low latency, and IOPS can be used as a proxy for that. An HDD will not be sufficient.

For cloud providers, here are some results for syncing Geth.

  • AWS: gp2 and gp3 with provisioned IOPS have both been tested successfully.
  • Linode block storage works; make sure to get NVMe-backed storage.
  • Netcup is sufficient as of late 2021.
  • There are reports that Digital Ocean block storage is too slow, as of late 2021.
  • Strato V-Server is too slow as of late 2021.

Dedicated servers with SATA or NVMe SSDs will always have sufficiently low latency. Do avoid hardware RAID, though; see below. The OVH Advance line is a well-liked dedicated option; Linode, Strato or any other provider will work as well.

For your own hardware, we've seen three causes of high latency:

  • DRAMless or QLC SSDs. Choose a "mainstream" SSD with TLC and DRAM. Enterprise / data center SSDs will always work great; consumer SSDs vary.
  • Overheating of the SSD. Check smartctl -x; see the example after this list. You want the SSD to be at ~40-50 degrees Celsius, so it does not throttle.
  • Hardware RAID, which has no TRIM support. Flash the controller to HBA mode and use software RAID instead.
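
As an example of that temperature check, assuming an NVMe drive at /dev/nvme0 (a placeholder; use your own device name, e.g. from lsblk):

```bash
# Sketch: read the SSD's temperature to check for thermal throttling.
# The device name is a placeholder for your own drive.
sudo smartctl -x /dev/nvme0 | grep -i temperature
```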