Why do signer account balances deplete to zero in a multi-node Clique EVM chain, but not in a single-node setup?
I’m running a Clique-based EVM chain with the following setup:

- Two nodes, each with its own signer account.
- The signer accounts are assigned large balances in the `genesis.json` file.
- The chain starts successfully, and the balances are confirmed to exist.

However, after a few hours, the balances of the signer accounts deplete to zero. This issue does not occur when running a single node with a single signer.
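To confirm the depletion, I poll a signer balance over IPC with a loop like the following sketch (the IPC path uses the same template variable as the config below, and `ADDRESS1` is a placeholder as in the genesis alloc):

```sh
# Poll a signer's balance once a minute over IPC.
# "ADDRESS1" is a placeholder address, as in the genesis alloc below.
while true; do
  geth attach --exec 'web3.fromWei(eth.getBalance("ADDRESS1"), "ether")' \
    "{{ chain_home_dir }}/geth.ipc"
  sleep 60
done
```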
Configuration Files:

`config.toml` (the contents are TOML, matching the `--config /etc/chain/config.toml` flag in the `geth` command below):

```toml
[Eth]
NetworkId = {{ chain_id }}
SyncMode = "full"
StateScheme = "hash"
NoPruning = true
NoPrefetch = false
DatabaseCache = 512
TrieCleanCache = 154
TrieDirtyCache = 256
TrieTimeout = 3600000000000
SnapshotCache = 102
Preimages = false
RPCGasCap = 50000000
RPCEVMTimeout = 5000000000
RPCTxFeeCap = 1e+00
[Eth.Miner]
GasFloor = 0
GasCeil = 30000000
GasPrice = 1000000000
Recommit = 2000000000
NewPayloadTimeout = 2000000000
[Eth.TxPool]
Locals = []
NoLocals = false
Journal = "transactions.rlp"
Rejournal = 3600000000000
PriceLimit = 1
PriceBump = 10
AccountSlots = 16
GlobalSlots = 5120
AccountQueue = 64
GlobalQueue = 1024
Lifetime = 10800000000000
[Eth.GPO]
Blocks = 20
Percentile = 60
MaxHeaderHistory = 1024
MaxBlockHistory = 1024
MaxPrice = 500000000000
IgnorePrice = 2
[Node]
HTTPPort = {{ geth_http_port }}
DataDir = "{{ chain_home_dir }}"
IPCPath = "geth.ipc"
HTTPHost = "0.0.0.0"
HTTPVirtualHosts = ["*"]
HTTPModules = ["net", "web3", "eth", "debug", "txpool"]
AuthAddr = "0.0.0.0"
AuthPort = 8551
AuthVirtualHosts = ["*"]
WSHost = "0.0.0.0"
WSPort = 8546
WSModules = ["net", "web3", "eth", "debug", "txpool"]
GraphQLVirtualHosts = ["*"]
[Node.P2P]
MaxPeers = 10
NoDiscovery = false
BootstrapNodes = [
"{{ enode0_id }}",
"{{ enode1_id }}"
]
BootstrapNodesV5 = []
StaticNodes = [
"{{ enode0_id }}",
"{{ enode1_id }}"
]
TrustedNodes = [
"{{ enode0_id }}",
"{{ enode1_id }}"
]
ListenAddr = ":{{ geth_p2p_port }}"
DiscAddr = ""
EnableMsgEvents = false
[Node.HTTPTimeouts]
ReadTimeout = 30000000000
ReadHeaderTimeout = 30000000000
WriteTimeout = 30000000000
IdleTimeout = 120000000000
[Metrics]
HTTP = "127.0.0.1"
Port = 9091
```
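Both nodes are templated from this config. To rule out a peering problem between the two signers, I check connectivity and the authorized signer set over IPC (the `admin` and `clique` modules should be available there by default, even though they are not listed in `HTTPModules`):

```sh
# Peer count and the authorized Clique signer set, queried over IPC.
geth attach --exec 'admin.peers.length' "{{ chain_home_dir }}/geth.ipc"
geth attach --exec 'clique.getSigners()' "{{ chain_home_dir }}/geth.ipc"
```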
`genesis.json`:

```json
{
"config": {
"chainId": {{ chain_id }},
"homesteadBlock": 0,
"eip150Block": 0,
"eip155Block": 0,
"eip158Block": 0,
"byzantiumBlock": 0,
"constantinopleBlock": 0,
"petersburgBlock": 0,
"istanbulBlock": 0,
"berlinBlock": 0,
"londonBlock": 0,
"clique": {
"period": 5,
"epoch": 30000
}
},
"alloc": {
"ADDRESS1": {
"balance": "0x84595161401484A000000"
},
"ADDRESS2": {
"balance": "0x84595161401484A000000"
},
"ADDRESS3": {
"balance": "0x84595161401484A000000"
},
"ADDRESS4": {
"balance": "0x84595161401484A000000"
},
"ADDRESS5": {
"balance": "0x84595161401484A000000"
}
},
"coinbase": "0x0000000000000000000000000000000000000000",
"difficulty": "0x1",
"extradata": "0x{{ '0' * 64 }}{{ active_signers | replace(',', '') }}{{ '0' * 130 }}",
"gasLimit": "30000000",
"nonce": "0x0000000000000000",
"mixhash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"timestamp": "0x00"
}
```
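For reference, the `extradata` template is intended to produce the standard Clique layout: a 32-byte vanity prefix, the concatenated 20-byte signer addresses, and a 65-byte zeroed seal. A sketch with hypothetical signer addresses:

```sh
# Hypothetical illustration of the extradata layout the template builds:
# 32-byte vanity + N * 20-byte signer addresses + 65-byte zeroed seal.
VANITY=$(printf '0%.0s' {1..64})   # 64 hex chars = 32 bytes
SEAL=$(printf '0%.0s' {1..130})    # 130 hex chars = 65 bytes
SIGNERS="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"
echo "0x${VANITY}${SIGNERS}${SEAL}"
```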
`geth` Command:

```sh
geth --config /etc/chain/config.toml \
--http.vhosts '*' --http.corsdomain '*' \
--http --http.addr 0.0.0.0 --http.api eth,net,web3,debug,txpool \
--ws --ws.addr 0.0.0.0 \
--mine --miner.etherbase {{ etherbase }} \
--allow-insecure-unlock --password /etc/chain/password --unlock {{ etherbase }} \
--log.file /var/lib/chain/logs/chain.log --log.rotate \
--nodekey /opt/chain/nodekey
```
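After startup, I verify that the signer account is unlocked and that the node reports sealing activity, for example:

```sh
# Confirm the signer account is available and check Clique sealing stats.
geth attach --exec 'eth.accounts' "{{ chain_home_dir }}/geth.ipc"
geth attach --exec 'clique.status()' "{{ chain_home_dir }}/geth.ipc"
```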
Observations:
- When running two nodes, the signer balances deplete to zero after a few hours.
- When running a single node, the issue does not occur.
- Logs show frequent block-sealing failures with the error `signed recently, must wait for others` (see the snapshot check below).
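When that error appears, the Clique snapshot shows which signer is currently in the `recents` list and therefore barred from sealing the next block; I dump it with:

```sh
# 'recents' maps recent block numbers to the signer that sealed them;
# a signer listed there must wait before it may seal again.
geth attach --exec 'JSON.stringify(clique.getSnapshot(), null, 2)' \
  "{{ chain_home_dir }}/geth.ipc"
```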
Questions:
- Why does this issue occur only in a multi-node setup and not in a single-node setup?
- How can I prevent the signer balances from depleting while maintaining a multi-node Clique network?
- Are there any configuration changes or best practices to address this issue?
Any insights or suggestions would be greatly appreciated!