The BLOCK_SUMMARIES message type differentiated between an empty response and a missing/invalid response. In V2, however, a response with empty summaries would throw a BufferUnderflowException and be treated by the caller as a null message.
This caused problems when trying to find a common block with peers that had diverged by more than 8 blocks. With V1 the caller would know to search back further (e.g. 16 blocks), but in V2 it was treated as "no response", so the caller would give up instead of increasing the look-back threshold.
This fix will identify BLOCK_SUMMARIES_V2 messages with no content, and return an empty array of block summaries instead of a null message.
This should be enough to recover any stuck nodes, as long as they haven't diverged by more than 240 blocks from the main chain.
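A minimal sketch of the shape of this fix, using a simplified method and assumed fixed-size entries rather than the actual BLOCK_SUMMARIES_V2 wire format:

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public static List<byte[]> summariesFromByteBuffer(ByteBuffer byteBuffer) {
        // Key change: an empty payload is now a valid "no summaries" response,
        // returned as an empty list instead of surfacing as a null message
        if (!byteBuffer.hasRemaining())
            return Collections.emptyList();

        List<byte[]> summaries = new ArrayList<>();
        while (byteBuffer.hasRemaining()) {
            byte[] entry = new byte[64]; // assumed fixed-size summary entries
            byteBuffer.get(entry);       // throws BufferUnderflowException if truncated
            summaries.add(entry);
        }
        return summaries;
    }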
This should fix a potential issue where a peer's chain tip data becomes contaminated with other summary data, causing incorrect sync decisions.
This allows the first pass to always be served from the peer's cache of 8 summaries. A maximum of 7 summaries can be returned, because the 8th slot is needed for the parent block's signature.
Loading from the cache should speed up sync decisions, particularly when choosing which peer to sync from. The greater the number of connected peers, the more significant this optimization will be. It should also reduce wasted network requests and data usage.
Adding this check prior to making a network request is a simple way to introduce the new cached summaries from BLOCK_SUMMARIES_V2 without having to rewrite a lot of the complex sync / peer comparison logic. Longer term we may want to rewrite that logic to read from the cache directly, but it doesn't make sense to introduce that level of risk at this point in time, especially as the Synchronizer may be rewritten soon to prefer longer chains.
Even so, this is still quite a high risk commit so lots of testing will be needed.
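A minimal sketch of the cache check, with assumed method names (getChainTipSummaries(), requestBlockSummaries()) rather than the exact Synchronizer code:

    import java.util.Arrays;
    import java.util.List;

    private List<BlockSummaryData> getBlockSummaries(Peer peer, byte[] parentSignature, int count) {
        List<BlockSummaryData> cached = peer.getChainTipSummaries();

        // Serve the first pass from the peer's cached summaries when they start
        // at the parent block we're interested in, avoiding a network round trip
        if (cached != null && cached.size() >= count
                && Arrays.equals(cached.get(0).getReference(), parentSignature))
            return cached.subList(0, count);

        // Otherwise fall back to the existing network request path
        return this.requestBlockSummaries(peer, parentSignature, count);
    }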
The main difference here is that we now remove items from the onlineAccountsImportQueue in a batch, _after_ they have been imported. This prevents duplicates from being added to the queue during the gap that previously existed between removal and import.
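A minimal sketch of the reordering; collection and method names follow the description above, and the locking detail is an assumption:

    import java.util.ArrayList;
    import java.util.List;

    List<OnlineAccountData> batch;
    synchronized (this.onlineAccountsImportQueue) {
        batch = new ArrayList<>(this.onlineAccountsImportQueue);
    }

    importOnlineAccounts(batch); // validate and import first...

    synchronized (this.onlineAccountsImportQueue) {
        // ...then remove in one batch, so duplicates can't be re-added during
        // the old gap between removal and import
        this.onlineAccountsImportQueue.removeAll(batch);
    }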
This caused large gaps with no presence data. Entries are removed when they expire, causing the local count to drop to zero, and the node would only start requesting them again once a peer had proactively pushed one or more entries.
Touches quite a few files because:
* Deprecate HEIGHT_V2 because it doesn't contain enough info to be fully useful during sync.
Newer peers will re-use BLOCK_SUMMARIES_V2.
* For newer peers, instead of sending / broadcasting HEIGHT_V2,
send the top N block summaries, to avoid requests for minor reorgs.
* When responding to GET_BLOCK, if we don't actually have the requested block,
we currently send an empty BLOCK_SUMMARIES message instead of not responding,
which would cause a slow timeout in Synchronizer.
This pattern has spread to other network message response code,
so we now introduce a generic 'unknown' message type for all these cases.
* Remove PeerChainTipData class entirely and re-use BlockSummaryData instead.
* Each Peer instance used to hold PeerChainTipData - essentially a single latest block summary - but now holds a List of latest block summaries.
* PeerChainTipData getter/setter methods modified for compatibility at this point in time.
* Repository methods that return BlockSummaryData (or lists of them) now try to fully populate them,
including the newly added block reference field.
* Re-worked Peer.canUseCommonBlockData() to be more readable
* Cherry-picked patch to Message.fromByteBuffer() to pass an empty, read-only ByteBuffer to subclass fromByteBuffer() methods, instead of null.
This allows natural use of BufferUnderflowException if a subclass tries to use get(), hasRemaining(), etc. on an empty data-payload message.
Previously this could have caused an NPE. A minimal sketch follows this list.
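Sketch of the cherry-picked ByteBuffer behaviour (the surrounding variable and method names are assumptions):

    import java.nio.ByteBuffer;

    // Inside Message.fromByteBuffer(): never hand subclasses a null payload
    ByteBuffer dataBuffer = (data != null && data.length > 0)
            ? ByteBuffer.wrap(data).asReadOnlyBuffer()
            : ByteBuffer.allocate(0).asReadOnlyBuffer(); // empty, read-only, never null

    // Subclasses can now call get() / hasRemaining() naturally: an empty payload
    // raises BufferUnderflowException on read instead of an NPE
    return messageType.fromByteBuffer(id, dataBuffer);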
It will now request online accounts every 1 minute instead of every 5 seconds, except for the first 5 minutes following a new online accounts timestamp, during which it will request every 5 seconds (referred to as the "burst" interval). It will also use the burst interval for the first 5 minutes after the node starts.
This is based on the idea that most online accounts arrive soon after a new timestamp begins, and so there is no need to request accounts so frequently after that. This should reduce data usage by a significant amount.
Once mempow is fully rolled out, the "burst" feature can be reduced or removed, since online accounts will be sent ahead of time, generally 15-30 mins prior to the new online accounts timestamp becoming active.
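A minimal sketch of the interval selection; constant and method names are assumptions:

    private static final long BURST_INTERVAL_MS = 5 * 1000L;     // 5 seconds
    private static final long NORMAL_INTERVAL_MS = 60 * 1000L;   // 1 minute
    private static final long BURST_WINDOW_MS = 5 * 60 * 1000L;  // first 5 minutes

    private long nextOnlineAccountsRequestInterval(long now, long timestampStart, long nodeStartTime) {
        boolean newTimestampBurst = now - timestampStart < BURST_WINDOW_MS;
        boolean startupBurst = now - nodeStartTime < BURST_WINDOW_MS;
        return (newTimestampBurst || startupBurst) ? BURST_INTERVAL_MS : NORMAL_INTERVAL_MS;
    }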
This closes a gap where accounts would be moved from onlineAccountsImportQueue to onlineAccountsToAdd, but not yet imported. During this time, there was nothing to stop them from being added to the import queue again, causing duplicate validations.
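A minimal sketch of the closed gap; the collection names are from this change, and the shared lock is an assumption:

    synchronized (this.onlineAccountsImportQueue) {
        boolean alreadyQueued = this.onlineAccountsImportQueue.contains(onlineAccountData);
        boolean pendingImport = this.onlineAccountsToAdd.contains(onlineAccountData);

        // Only queue the account if it is neither awaiting validation nor awaiting import
        if (!alreadyQueued && !pendingImport)
            this.onlineAccountsImportQueue.add(onlineAccountData);
    }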
For regular groups, we require that the owner adds/removes the admins, so group membership is adequately checked. However, for null-owned groups this check is skipped, so we need an additional condition to prevent non-group members from issuing a transaction for approval by the group admins.
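A minimal sketch of the extra condition; NULL_OWNER_ADDRESS and the repository method names are assumptions:

    if (NULL_OWNER_ADDRESS.equals(groupData.getOwner())) {
        // Owner-based checks are skipped for null-owned groups, so additionally
        // require the transaction's creator to already be a group member
        if (!groupRepository.memberExists(groupId, creatorAddress))
            return ValidationResult.NOT_GROUP_MEMBER;
    }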
* The dev group (ID 1) is owned by the null account with public key 11111111111111111111111111111111
* To regain access to otherwise-blocked owner-based rules, different validation logic
* applies to groups with this same null owner.
*
* The main difference is that approval is required for certain transaction types relating to
* null-owned groups. This allows existing admins to approve updates to the group (using group's
* approval threshold) instead of these actions being performed by the owner.
*
* Since these rules apply to all null-owned groups, anyone can update their group to
* the null owner if they want to take advantage of this decentralized approval system.
*
* Currently, the affected transaction types are:
* - AddGroupAdminTransaction
* - RemoveGroupAdminTransaction
*
* This same approach could ultimately be applied to other group transactions too.
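A minimal sketch of how the affected types could be gated; the enum constants mirror the two types listed above, and all names here are assumptions:

    import java.util.EnumSet;
    import java.util.Set;

    private static final Set<TransactionType> NULL_OWNED_GROUP_APPROVAL_TYPES =
            EnumSet.of(TransactionType.ADD_GROUP_ADMIN, TransactionType.REMOVE_GROUP_ADMIN);

    private boolean needsNullOwnedGroupApproval(GroupData groupData, TransactionType type) {
        return NULL_OWNER_ADDRESS.equals(groupData.getOwner())
                && NULL_OWNED_GROUP_APPROVAL_TYPES.contains(type);
    }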
Using the feature trigger timestamp here should be much less error-prone than a whole new block message version. Once mempow has been live for at least 24 hours, the feature trigger can be removed and the code cleaned up, as all online accounts signatures will use the new format from that point onwards (legacy signatures are trimmed after 24 hours).
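A minimal sketch of gating on the feature trigger; the accessor name is an assumption:

    long memPoWTriggerTimestamp = BlockChain.getInstance().getOnlineAccountsMemoryPoWTimestamp();

    if (onlineAccountsTimestamp >= memPoWTriggerTimestamp) {
        // New signature format (includes nonce values); the only path once
        // mempow has been live for 24 hours and legacy signatures are trimmed
    } else {
        // Legacy signature format, kept only until the trigger can be removed
    }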
Due to the large amount of data, it can take some time for the request to be processed and data to be transferred. It may make more sense to load the initial state from the standard API, and just use the websockets for updates.
This should fix a 500 error which could prevent forced refund attempts for LTC and other chains. It wouldn't have affected any normal trade-bot refund functionality.
This allows for a different share bin distribution starting at an as-yet-undecided future block height. This height will correspond with the QORA reduction. The new values were decided in a recent community vote.
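A minimal sketch of selecting the share-bin set by height; all names are assumptions, with the trigger height and new percentages coming from blockchain config and the community vote:

    import java.util.List;

    private List<ShareBinData> shareBinsForHeight(int blockHeight) {
        return blockHeight >= this.qoraReductionTriggerHeight
                ? this.newShareBins       // values from the recent community vote
                : this.originalShareBins;
    }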
This allows the QORA share percentage to be modified at different heights, based on community votes. Added a unit test to simulate a reduction.
# Conflicts:
# src/test/java/org/qortal/test/minting/RewardTests.java
This rewrite may have been causing problems with connections in the network, due to peers being forgotten too easily. Reverting for now to see if it solves the problem.
This reverts commit d81071f254.
# Conflicts:
# src/main/java/org/qortal/network/Network.java
If this proves not to have any significant bad effects on re-orgs, we could consider setting these thresholds even higher, or even disabling the auto-disconnect by default.
This allows users to increase their default birthday if they know that no wallets were created before a certain block, to reduce sync time. It also fixes some failing unit tests that relied on transactions between blocks 1900000 and 2000000.
Using a hardcoded signature ensures that the libraries cannot be swapped out without a core auto-update, which requires the standard dev team approval process.
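A minimal sketch using a SHA-256 digest comparison as a stand-in for the actual hardcoded-signature check; the class, method, and pinned constant are all assumptions:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;

    public final class LibraryCheck {
        // Hypothetical pinned value: changing it means shipping a new core build,
        // which goes through the standard dev team approval process
        private static final String EXPECTED_SHA256_HEX = "replace-with-pinned-digest";

        public static boolean isLibraryApproved(Path jarPath) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(jarPath));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest)
                hex.append(String.format("%02x", b));
            return hex.toString().equalsIgnoreCase(EXPECTED_SHA256_HEX);
        }
    }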