When a node has reached max connections, Network will ignore pending incoming connections by:
1. not calling accept()
2. de-registering OP_ACCEPT 'interest op' on the listen socket's channel
When a peer disconnects, Network might re-register the OP_ACCEPT interest op on the listen socket's channel.
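For reference, a minimal sketch of how this interest-op toggling could look with java.nio, assuming the listen channel has already been registered with the selector elsewhere; the class and method names below are illustrative, not the actual Network code:

```java
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class AcceptGateSketch {
    private final Selector selector;
    private final ServerSocketChannel listenChannel;
    private final int maxConnections;

    public AcceptGateSketch(Selector selector, ServerSocketChannel listenChannel, int maxConnections) {
        this.selector = selector;
        this.listenChannel = listenChannel;
        this.maxConnections = maxConnections;
    }

    /** Called whenever the connection count changes. */
    public void onConnectionCountChanged(int currentConnections) {
        SelectionKey key = listenChannel.keyFor(selector);
        if (key == null || !key.isValid())
            return;

        if (currentConnections >= maxConnections) {
            // At capacity: stop expressing interest in OP_ACCEPT so pending
            // incoming connections are simply left un-accepted by the selector loop.
            key.interestOps(key.interestOps() & ~SelectionKey.OP_ACCEPT);
        } else {
            // A slot has freed up: re-register OP_ACCEPT so the selector
            // starts reporting acceptable connections again.
            key.interestOps(key.interestOps() | SelectionKey.OP_ACCEPT);
            selector.wakeup();
        }
    }
}
```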
Slight reworking of EPC (ExecuteProduceConsume) to simplify when the producer can block,
and generally make some of the conditional code more readable.
Improved logging with task class names, and the logging level is now editable at runtime!
Use /peer/enginestats?newLoggingLevel=DEBUG (or TRACE, or back to INFO) to change it.
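Under the hood this presumably just adjusts the logger configuration; a minimal sketch of what such a handler could do, assuming Log4j2 as the logging backend (the logger name below is a placeholder):

```java
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;

public class RuntimeLoggingLevelSketch {
    /** Apply a requested level, e.g. from ?newLoggingLevel=DEBUG; unknown names fall back to INFO. */
    public static Level apply(String loggerName, String requestedLevel) {
        Level level = Level.toLevel(requestedLevel, Level.INFO);
        Configurator.setLevel(loggerName, level);
        return level;
    }

    public static void main(String[] args) {
        apply("org.qortal.network", "DEBUG"); // placeholder logger name
    }
}
```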
We now use GetOnlineAccountsV2Message in all cases, and the response will be either OnlineAccountsV2Message or OnlineAccountsV3Message depending on the version of the requesting peer.
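The decision itself is just a version check; a minimal, self-contained sketch of it (the constant name and threshold value are placeholders, not the project's real values):

```java
public class OnlineAccountsVersioningSketch {
    /** Hypothetical minimum peer version that understands OnlineAccountsV3Message. */
    static final long MIN_V3_PEER_VERSION = 3_002_000L; // placeholder value

    /** Decide which response format a requesting peer should receive. */
    static String responseTypeFor(long peerVersion) {
        return peerVersion >= MIN_V3_PEER_VERSION ? "ONLINE_ACCOUNTS_V3" : "ONLINE_ACCOUNTS_V2";
    }

    public static void main(String[] args) {
        System.out.println(responseTypeFor(3_002_001L)); // ONLINE_ACCOUNTS_V3
        System.out.println(responseTypeFor(3_001_000L)); // ONLINE_ACCOUNTS_V2
    }
}
```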
Right now, two OnlineAccountData objects are considered equal if they have matching timestamps, signatures, and public keys. This reduces the chance of multiple versions of the same online account data being sent around the network. The downside is that an instance containing a nonce value can be ignored because we already hold an inferior OnlineAccountData instance in the list.
The current approach is this:
- Only allow new duplicate onlineAccountData to be added to the import queue if it's superior to the one we already have.
- Remove the existing, inferior data at the time of import (once the new data is considered valid).
This is only a temporary problem, and can be simplified once the additional fields in OnlineAccountsV3Message become required rather than optional.
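A minimal, self-contained sketch of that rule: the field names and the notion of "superior" (carries a nonce when the existing entry does not) are taken from the description above; everything else is illustrative rather than the actual implementation:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.CopyOnWriteArrayList;

public class OnlineAccountsImportQueueSketch {
    static class OnlineAccountData {
        final long timestamp;
        final byte[] signature;
        final byte[] publicKey;
        final Integer nonce; // optional until the V3 fields become required

        OnlineAccountData(long timestamp, byte[] signature, byte[] publicKey, Integer nonce) {
            this.timestamp = timestamp;
            this.signature = signature;
            this.publicKey = publicKey;
            this.nonce = nonce;
        }

        /** Equality deliberately ignores the nonce, matching the behaviour described above. */
        @Override
        public boolean equals(Object other) {
            if (!(other instanceof OnlineAccountData))
                return false;
            OnlineAccountData o = (OnlineAccountData) other;
            return timestamp == o.timestamp
                    && Arrays.equals(signature, o.signature)
                    && Arrays.equals(publicKey, o.publicKey);
        }

        @Override
        public int hashCode() {
            return Objects.hash(timestamp, Arrays.hashCode(signature), Arrays.hashCode(publicKey));
        }

        /** "Superior" here means: carries a nonce when the existing entry does not. */
        boolean isSuperiorTo(OnlineAccountData other) {
            return this.nonce != null && other.nonce == null;
        }
    }

    private final List<OnlineAccountData> importQueue = new CopyOnWriteArrayList<>();

    /** Only enqueue a duplicate if it is superior to the entry we already hold. */
    public boolean offer(OnlineAccountData incoming) {
        for (OnlineAccountData existing : importQueue)
            if (existing.equals(incoming) && !incoming.isSuperiorTo(existing))
                return false;

        importQueue.add(incoming);
        return true;
    }

    /** At import time, once the new data has been validated, drop the now-inferior duplicate. */
    public void removeInferiorDuplicateOf(OnlineAccountData validated) {
        importQueue.removeIf(existing -> existing.equals(validated) && validated.isSuperiorTo(existing));
    }
}
```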
This is currently for name registration transactions only, but can be adapted (or duplicated) for other transaction types when needed.
Note: this switches from a greater-than (>) to a greater-than-or-equal (>=) timestamp comparison, as it makes more sense this way. It shouldn't affect the previous transition since there were no REGISTER_NAME transactions at that exact timestamp.
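For clarity, the comparison change in isolation; the trigger name, its value, and what it gates are placeholders here, only the switch to an inclusive comparison reflects the note above:

```java
public class TimestampTransitionSketch {
    /** Hypothetical trigger timestamp (milliseconds) for the REGISTER_NAME behaviour change. */
    static final long REGISTER_NAME_TRANSITION_TIMESTAMP = 1_645_000_000_000L;

    /**
     * Previously: transactionTimestamp > trigger. Now inclusive, so a transaction
     * stamped exactly at the trigger already follows the new rules.
     */
    static boolean usesNewRules(long transactionTimestamp) {
        return transactionTimestamp >= REGISTER_NAME_TRANSITION_TIMESTAMP;
    }
}
```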
Adapted from code originally written by catbref from before genesis, and essentially prevents syncing backwards. This needs significant testing on testnet.
It is quite likely that existing resources with both metadata and an empty chunks array will need to be republished, because this bug may have led to incorrect file deletions.
Nodes use each 30-minute period to compute the nonce for the next 30-minute period, so this should be prioritized. Once that is calculated, the 'current' timestamp is attempted if there is enough time remaining. Doing it in this order avoids falling behind and then struggling to catch up.
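A minimal sketch of that ordering, assuming a 30-minute online-accounts period as described; the interface and method names are illustrative, not the actual controller code:

```java
public class OnlineAccountsNonceSchedulerSketch {
    static final long ONLINE_TIMESTAMP_MODULUS = 30 * 60 * 1000L; // 30-minute periods

    interface NonceComputer {
        /** Memory-hard PoW for the given online-accounts timestamp; may take a while. */
        void computeNonce(long onlineAccountsTimestamp);
    }

    public static void run(NonceComputer computer) {
        long now = System.currentTimeMillis();
        long currentTimestamp = (now / ONLINE_TIMESTAMP_MODULUS) * ONLINE_TIMESTAMP_MODULUS;
        long nextTimestamp = currentTimestamp + ONLINE_TIMESTAMP_MODULUS;

        // Prioritize the upcoming period so we never fall behind...
        computer.computeNonce(nextTimestamp);

        // ...then attempt the current period if there is still time left in it.
        if (System.currentTimeMillis() < nextTimestamp)
            computer.computeNonce(currentTimestamp);
    }
}
```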
We will need to think about how to handle node restarts, since otherwise an auto update could cause a gap in online accounts due to all nodes computing the 'next' timestamp before the 'current' one.
This doesn't require changes to the transformation of the outer Block components, since the "onlineAccountsSignatures" component is already variable-length. It does, however, affect the encoding of the data within "onlineAccountsSignatures". The new encoding becomes active once the block timestamp reaches onlineAccountsMemoryPoWTimestamp.
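A minimal sketch of the timestamp gate; the exact byte layout and nonce width below are assumptions, and only the conditional switch on blockTimestamp >= onlineAccountsMemoryPoWTimestamp reflects the note above:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.List;

public class OnlineAccountsSignaturesEncodingSketch {
    /** Hypothetical feature-trigger value; the real one lives in the blockchain settings. */
    static final long ONLINE_ACCOUNTS_MEMORY_POW_TIMESTAMP = 1_660_000_000_000L;

    static byte[] encode(long blockTimestamp, List<byte[]> signatures, List<Integer> nonces) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();

        for (int i = 0; i < signatures.size(); i++) {
            bytes.write(signatures.get(i));

            // New encoding: once the block timestamp reaches the trigger,
            // each signature is accompanied by its memory-PoW nonce.
            if (blockTimestamp >= ONLINE_ACCOUNTS_MEMORY_POW_TIMESTAMP)
                bytes.write(ByteBuffer.allocate(Integer.BYTES).putInt(nonces.get(i)).array());
        }

        return bytes.toByteArray();
    }
}
```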