
Consensus Clients

| Client | Version | Date | DB Size | RAM | Notes |
|--------|---------|------|---------|-----|-------|
| Teku | 24.8.0 | Sep 2024 | ~84 GiB | ~10 GiB | |
| Lighthouse | 4.5.0 | Jan 2024 | ~130 GiB | ~5 GiB | |
| Nimbus | 24.1.1 | Jan 2024 | ~170 GiB | ~2 to 3 GiB | |
| Prysm | 4.1.1 | Jan 2024 | ~130 GiB | ~5 GiB | |
| Lodestar | 1.13.0 | Jan 2024 | ~130 GiB | ~8 GiB | |

Notes on disk usage

  • Teku, Nimbus, Lodestar, Prysm and Grandine continuously prune
  • Lighthouse can be resynced in minutes to bring space usage back down, with ./ethd resync-consensus
  • Lighthouse is working on tree states to continuously prune

Execution clients

For reference, here are disk, RAM and CPU requirements, as well as mainnet initial synchronization times, for different Ethereum execution clients.

Disk and RAM requirements

SSD and RAM use are measured after initial sync, while keeping up with head.

Please pay attention to the Version and Date. These are snapshots in time of client behavior. Initial state size increases over time, and execution clients are always working on improving their storage engines.

DB Size is shown with values for different types of nodes: Full, and different levels of expiry: Post-Merge history only; Post-Cancun history only; rolling expiry; aggressive expiry. "tbd" means I haven't gathered the data. "n/a" means the client does not support this expiry mode, yet.

| Client | Version | Date | DB Full | DB Post-Merge | DB Post-Cancun | DB Rolling | DB Aggressive | RAM | Notes |
|--------|---------|------|---------|---------------|----------------|------------|---------------|-----|-------|
| Geth | 1.15.11 | May 2025 | ~1.2 TiB | ~830 GiB | n/a | n/a | n/a | ~8 GiB | |
| Nethermind | 1.36.0 | February 2026 | ~1.1 TiB | ~740 GiB | ~600 GiB | ~240 GiB | n/a | ~7 GiB | With HalfPath; can automatically online-prune at ~350 GiB free |
| Besu | v25.8.0 | August 2025 | ~1.35 TiB | ~850 GiB | n/a | tbd | ~290 GiB | ~10 GiB | |
| Reth | 1.5.0 | July 2025 | ~1.6 TiB | ~950 GiB | tbd | tbd | tbd | ~9 GiB | |
| Erigon | 3.0.3 | May 2025 | ~1.0 TiB | ~650 GiB | n/a | tbd | tbd | See Notes | Erigon will have the OS use all available RAM as a DB cache during post-sync operation, but this RAM is free to be used by other programs as needed. During sync, it may run out of memory on machines with 32 GiB or less |
| Nimbus | 0.1.0-alpha | May 2025 | tbd | 755 GiB | n/a | n/a | n/a | | With Era1 import |
| Ethrex | 4.0.0 | October 2025 | n/a | 450 GiB | n/a | n/a | n/a | | |

Notes on disk usage

  • Reth, Besu, Geth, Erigon, Ethrex and Nimbus continuously prune
  • Nethermind: DB size can be reduced by an online prune when it has grown too large. Keep an eye on Paprika and Path work

Initial sync times

Please pay attention to the Version and Date. Newer versions might sync faster, or slower.

These are initial syncs of a node with a stated amount of history expiry. For clients that support it, snap sync was used; otherwise, full sync.

NB: All execution clients need to download state after getting blocks. If state isn't "in" yet, your sync is not done. This is a heavily disk-latency-dependent operation, which is why an HDD cannot be used for a node.

For Geth, you will see "State heal in progress" after initial sync; it can persist for a few hours if disk latency is on the high side.

State heal should complete in under 4 hours. If it does not, or even drags on for a week or more, your disk latency is too high for Geth to "catch up" with state.

Cache size was left at its default in all tests.

| Client | Version | Date | Node Type | Test System | Time Taken | Notes |
|--------|---------|------|-----------|-------------|------------|-------|
| Geth | 1.15.10 | April 2025 | Full | OVH Baremetal NVMe | ~5 hours | |
| Nethermind | 1.24.0 | January 2024 | Full | OVH Baremetal NVMe | ~5 hours | Ready to attest after ~1 hour |
| Nethermind | 1.36.0 | February 2026 | post-Cancun | Netcup RS G11 | ~2 hours | Ready to attest after ~1 hour |
| Besu | v25.8.0 | August 2025 | post-merge | OVH Baremetal NVMe | ~13 hours | |
| Erigon | 3.0.3 | May 2025 | post-merge | OVH Baremetal NVMe | ~2 hours | |
| Reth | beta.1 | March 2024 | Full | OVH Baremetal NVMe | ~2 days 16 hours | |
| Nimbus | 0.1.0-alpha | May 2025 | Full | OVH Baremetal NVMe | ~5 1/2 days | With Era1 import |
| Ethrex | 4.0.0 | October 2025 | post-merge | OVH Baremetal NVMe | ~2 hours | |

Test Systems

Latency is what matters most to Ethereum clients. Measure it with sudo ioping -D -c 30 /dev/<ssd-device> under load: ideally while running a client, but using fio to generate synthetic load will also get you a ballpark figure. You want to be under 300 us (microseconds, not milliseconds) max latency for an Ethereum execution client. High latency negatively impacts attestation performance and is particularly noticeable during sync committee duties.
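As a sketch of that guideline, the max latency reported by ioping can be compared against the 300 us threshold. The check_latency helper and the sample values below are illustrative, not measurements:

```shell
#!/usr/bin/env bash
# Sketch: compare a worst-case SSD latency reading against the 300 us guideline.
# In practice, take the max value (in microseconds) from the summary line of:
#   sudo ioping -D -c 30 /dev/<ssd-device>
check_latency() {
  local max_us="$1"   # max latency in microseconds; may be fractional
  local limit_us=300  # guideline from the text above
  # awk handles fractional values, which bash integer arithmetic would not
  if awk -v m="$max_us" -v l="$limit_us" 'BEGIN { exit !(m <= l) }'; then
    echo "OK: ${max_us} us max is within the ${limit_us} us guideline"
  else
    echo "SLOW: ${max_us} us max exceeds the ${limit_us} us guideline"
  fi
}

# Placeholder sample values, not real measurements:
check_latency 150
check_latency 1100
```

A reading like the first would be in line with a good NVMe drive; one like the second suggests storage that will cost you rewards.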

IOPS is random read-write IOPS measured by fio with "typical" DB parameters and a 150G file, without other processes running.

Specifically: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75; rm test. If the test shows it would take hours to complete, feel free to cut it short once the IOPS readout for the test looks steady.

150G was chosen to "break through" any caching stratagems the SSD uses for bursty writes. Execution clients write steadily, so an SSD's performance under heavy sustained writes matters more than its performance with bursty writes.

Servers have been configured with noatime and no swap to improve latency.
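Since noatime takes effect as a mount option (set in /etc/fstab, then remount), one way to confirm it is active is to inspect /proc/mounts. This is a sketch; the has_noatime helper and the sample mount line are illustrative:

```shell
#!/usr/bin/env bash
# Sketch: check whether a filesystem is mounted with noatime by parsing a
# /proc/mounts-style line. Field 4 holds the comma-separated mount options.
has_noatime() {
  echo "$1" | awk '{ print $4 }' | tr ',' '\n' | grep -qx noatime
}

# Placeholder sample line; on a live system use the matching line from /proc/mounts
sample="/dev/nvme0n1p2 / ext4 rw,noatime,discard 0 0"
if has_noatime "$sample"; then
  echo "noatime is set"
else
  echo "noatime is NOT set - add it in /etc/fstab and remount"
fi

# Swap can be disabled for the current boot with:  sudo swapoff -a
# and permanently by removing/commenting the swap entry in /etc/fstab.
```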

| Name | RAM | SSD Size | CPU | r/w IOPS | r/w latency | Notes |
|------|-----|----------|-----|----------|-------------|-------|
| OVH Baremetal NVMe | 32 GiB | 1.9 TB | Intel Hexa | 177k/59k | 150us max | This is in line with any good NVMe drive |
| Netcup RS G11 | 96 GiB | 3 TB | 20 vCPU on an AMD 84-core | | 400us avg / 1.1ms max | This is an example of a system with storage that is fast enough to attest, but too slow to get best rewards |

Getting better latency

Ethereum execution layer clients need decently low latency; IOPS can be used as a proxy for that. An HDD will not be sufficient.

For cloud providers, here are some results for syncing Geth.

  • AWS: gp2 and gp3 volumes with provisioned IOPS have both been tested successfully.
  • Linode block storage: make sure to get NVMe-backed storage.
  • Netcup RS G11 works, but rewards are not optimal.
  • There are reports that Digital Ocean block storage is too slow, as of late 2021.
  • Strato V-Server is too slow as of late 2021.

Dedicated servers with NVMe SSDs will always have sufficiently low latency. Do avoid hardware RAID though; see below. The OVH Advance line is a well-liked dedicated option; Linode, Strato, or any other provider will work as well.

For your own hardware, we've seen three causes of high latency:

  • DRAMless or QLC SSDs. Choose a "mainstream" SSD with TLC and DRAM. Enterprise / data center SSDs will always work great; consumer SSDs vary.
  • Overheating of the SSD. Check smartctl -x. You want the SSD to be at ~ 40-50 degrees Celsius so it does not throttle.
  • Hardware RAID with no TRIM support. Flash the controller to HBA mode and use software RAID instead.
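As a sketch, the temperature line from smartctl -x can be extracted and compared against that range. The temp_of helper and the sample line are illustrative: NVMe drives report a "Temperature: NN Celsius" line, while SATA drives use a Temperature_Celsius attribute instead, so adjust the pattern for your drive.

```shell
#!/usr/bin/env bash
# Sketch: pull the drive temperature out of smartctl -x output and warn when it
# is above the ~40-50 C comfort range. The pattern matches the NVMe report line.
temp_of() {
  # $1: captured smartctl output; prints the first temperature value found
  echo "$1" | awk '/^Temperature:/ { print $2; exit }'
}

# Placeholder sample, not output from a real run:
sample="Temperature:                        43 Celsius"
t=$(temp_of "$sample")
if [ "$t" -gt 50 ]; then
  echo "WARN: ${t} C - drive may be throttling; improve cooling"
else
  echo "OK: ${t} C"
fi
```

Running this periodically (or watching smartctl directly) catches the overheating case before it shows up as latency.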