Commit Graph

1187 Commits

Author SHA1 Message Date
CalDescent
a9a0e69ec0 Set go-live block height for share bin fix: block 399000 2021-04-26 17:19:39 +01:00
CalDescent
ea1fed2fd3 Merge branch 'block-reward-distribution-fix' 2021-04-26 17:16:14 +01:00
CalDescent
b37f2c7d7f MAXIMUM_RETRIES set to 2, as 3 retries may have been slightly too many. 2021-04-26 17:08:21 +01:00
CalDescent
0c0c5ff077 Invalidate our block summaries cache for a peer if it fails to respond with signatures when synchronizing. 2021-04-25 12:50:40 +01:00
CalDescent
e12b99d17e Invalidate our common block cache for a peer if we can't find a common block when synchronizing. 2021-04-25 09:37:32 +01:00
CalDescent
d599146c3a Cache peer block summaries to avoid duplicate requests when comparing peers. 2021-04-24 22:10:40 +01:00
CalDescent
476731a2c3 In syncToPeerChain(), only apply a partial set of peer's blocks if they are recent.
If a peer fails to reply with all requested blocks, we will now only apply the blocks we have received so far if at least one of them is recent. This should prevent or greatly reduce the scenario where our chain is taken from a recent to an outdated state due to only partially syncing with a peer. It is best to keep our chain "recent" if possible, as this ensures that the peer selection code always runs, and therefore avoids unnecessarily syncing to a random peer on an inferior chain.
2021-04-24 20:12:11 +01:00
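A minimal sketch of the "only apply if recent" guard described in the commit above, assuming a hypothetical list of received block timestamps and an illustrative recency window; none of these names are the actual Qortal code.

import java.util.List;

// Illustrative only: decide whether a partial batch of blocks received from a peer
// is worth applying. "Recent" means the block timestamp falls inside some configured
// window; the 30-minute value below is an assumption, not the real setting.
class PartialSyncGuardSketch {
    private static final long RECENT_BLOCK_WINDOW_MS = 30 * 60 * 1000L;

    static boolean shouldApplyPartialBlocks(List<Long> receivedBlockTimestamps) {
        long cutoff = System.currentTimeMillis() - RECENT_BLOCK_WINDOW_MS;
        // Apply the partial batch only if at least one received block is recent, so a
        // failed sync can't drag our chain from a recent state to an outdated one.
        return receivedBlockTimestamps.stream().anyMatch(timestamp -> timestamp >= cutoff);
    }
}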
CalDescent
1e491dd8fb MAXIMUM_RETRIES increased from 1 to 3.
Now that we are spending a lot of time to carefully select a peer to sync with, it makes sense to retry a couple more times before giving up and starting the peer selection process all over again.
2021-04-24 19:45:53 +01:00
CalDescent
ba6397b963 Improved logging, to give a clearer picture of the peer selection decisions. 2021-04-24 19:23:09 +01:00
CalDescent
3146da6aec Don't add to the inferior chain signatures list when comparing peers against each other.
In these comparisons it's easy to incorrectly identify a bad chain, as we aren't comparing the same number of blocks. It's quite common for one peer to fail to return all blocks and be marked as an inferior chain, yet we have other "good" peers on that exact same chain. In those cases we wouldn't have talked to the good peers again until they received another block.

Instead of complicating the logic and keeping track of the various good chain tip signatures, it is simpler to just remove the inferior peers from this round of syncing, and re-test them in the next round, in case they are in fact superior or equal.
2021-04-24 16:43:29 +01:00
CalDescent
5643e57ede Fixed string formatting error. 2021-04-24 16:21:04 +01:00
CalDescent
f532dbe7b4 Optimized code in Synchronizer.uniqueCommonBlocks() 2021-04-24 15:22:29 +01:00
CalDescent
ec2af62b4d Fix for bug which failed to remove peers without block summaries.
The iterator was removing the peer from the "peersSharingCommonBlock" array, when it should have been removing it from the "peers" array. The result was that the bad peer would end up in the final list of good peers, and we could then sync with it when we shouldn't have.
2021-04-24 15:21:30 +01:00
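A simplified illustration of the bug and fix described above; the Peer type and method name are stand-ins rather than the real Synchronizer code.

import java.util.Iterator;
import java.util.List;

class PeerFilterFixSketch {
    static class Peer {
        boolean hasBlockSummaries;
    }

    // Corrected behaviour: peers without block summaries must be dropped from the outer
    // "peers" list (the one the final sync candidate is picked from), not merely from the
    // temporary "peersSharingCommonBlock" list the iterator happens to be walking.
    static void removePeersWithoutSummaries(List<Peer> peers, List<Peer> peersSharingCommonBlock) {
        Iterator<Peer> iterator = peersSharingCommonBlock.iterator();
        while (iterator.hasNext()) {
            Peer peer = iterator.next();
            if (!peer.hasBlockSummaries)
                peers.remove(peer); // previously this was iterator.remove(), which left the bad peer in "peers"
        }
    }
}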
CalDescent
423142d730 Tidied up RECOVERY_MODE_TIMEOUT constant, and made checkRecoveryModeForPeers() private. 2021-04-24 10:35:01 +01:00
CalDescent
bdddb526da Added recovery mode, which is designed to automatically bring back a stalled network.
The existing system was unable to resume without manual intervention if it stalled for more than 7.5 minutes. After this time, no peers would have "recent" blocks, which are a prerequisite for synchronization and minting.

This new code monitors for such a situation, and enters "recovery mode" if there are no peers with recent blocks for at least 10 minutes. It also requires that there is at least one connected peer, to reduce false positives due to bad network connectivity.

Once in recovery mode, peers with no recent blocks are added back into the pool of available peers to sync with, and restrictions on minting are lifted. This should allow for peers to collaborate to bring the chain back to a "recent" block height. Once we have a peer with a recent block, the node will exit recovery mode and sync as normal.

Previously, lifting minting restrictions could have increased the risk of extra forks; however, it is much less risky now that nodes no longer mint multiple blocks in a row.

In all cases, minBlockchainPeers is used, so a minimum number of connected peers is required for syncing and minting in recovery mode, too.
2021-04-23 09:21:15 +01:00
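A sketch of the recovery-mode trigger described above. RECOVERY_MODE_TIMEOUT and checkRecoveryModeForPeers() are named elsewhere in this log, but the surrounding structure, the PeerInfo type and the way a "recent block" is tested are assumptions.

import java.util.List;

class RecoveryModeSketch {
    private static final long RECOVERY_MODE_TIMEOUT = 10 * 60 * 1000L; // "at least 10 minutes", in milliseconds

    static boolean recoveryMode = false;

    static void checkRecoveryModeForPeers(List<PeerInfo> connectedPeers, long timeOfLastRecentBlockMs) {
        boolean anyPeerHasRecentBlock = connectedPeers.stream().anyMatch(PeerInfo::hasRecentBlock);
        boolean timedOut = System.currentTimeMillis() - timeOfLastRecentBlockMs >= RECOVERY_MODE_TIMEOUT;

        if (!recoveryMode) {
            // Require at least one connected peer, to avoid false positives caused by plain
            // loss of connectivity rather than a genuinely stalled network.
            if (!connectedPeers.isEmpty() && !anyPeerHasRecentBlock && timedOut)
                recoveryMode = true; // peers without recent blocks become syncable again; minting restrictions are lifted
        } else if (anyPeerHasRecentBlock) {
            recoveryMode = false; // a recent block is visible again, so return to normal rules
        }
    }

    interface PeerInfo {
        boolean hasRecentBlock();
    }
}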
CalDescent
dbf1ed40b3 Log the parent block's signature when minting a new block, to help us keep track of the chain it's being minted on. 2021-04-19 09:33:24 +01:00
CalDescent
02ace06526 Revert "When syncing to a peer on a different fork, ensure that all blocks are obtained before applying them."
This reverts commit c919797553.
2021-04-18 13:03:04 +01:00
CalDescent
2d2bfc0a4c Log the number of common blocks found in each search. 2021-04-18 13:02:38 +01:00
CalDescent
3c22a12cbb Experimental idea to prevent a single node signing more than one block in a row.
This could drastically reduce the number of forks being created. Currently, if a node is having problems syncing, it will continue adding to its own fork, which adds confusion to the network. With this new idea, the node would be prevented from adding to its own chain and is instead forced to wait until it has retrieved the next block from the network.

We will need to test this on the testnet very carefully. My worry is that, because all minters submit blocks, it could create a situation where the first block is submitted by everyone, and the second block is submitted by no-one, until a different candidate for the first block has been obtained from a peer. This may not be a problem at all, and could actually improve stability in a huge way, but at the same time it has the potential to introduce serious network problems if we are not careful.
2021-04-18 10:26:36 +01:00
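A minimal sketch of the "no two blocks in a row" rule described above; the method and parameter names are illustrative, not the actual BlockMinter code.

import java.util.Arrays;

class SingleSignerRuleSketch {
    // If we minted the parent block ourselves, refuse to mint again and instead wait
    // for the next block to arrive from the network.
    static boolean canMintOnTopOf(byte[] parentBlockMinterPublicKey, byte[] ourMintingPublicKey) {
        return !Arrays.equals(parentBlockMinterPublicKey, ourMintingPublicKey);
    }
}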
CalDescent
3071ef2f36 Removed redundant uiLocalServers 2021-04-17 20:55:30 +01:00
CalDescent
3022cb22d6 Merge branch 'master' into prioritize-peers 2021-04-17 20:51:35 +01:00
CalDescent
e9b4a3f6b3 Automatically backup trade bot data when starting a new trade (from either side). 2021-04-17 20:45:35 +01:00
CalDescent
4312ebfcc3 Adapted the HSQLDBRepository.exportNodeLocalData() method
It now has a new parameter - keepArchivedCopy - which when set to true will cause it to rename an existing TradeBotStates.script to TradeBotStates-archive-<timestamp>.script before creating a new backup. This should avoid keys being lost if a new backup is taken after replacing the db.

In a future version we can improve this in such a way that it combines existing and new backups into a single file. This is just a "quick fix" to increase the chances of keys being recoverable after accidentally bootstrapping without a backup.
2021-04-17 20:44:57 +01:00
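A sketch of the keepArchivedCopy behaviour described above, using java.nio file operations; the backup directory handling and exact timestamp format are assumptions rather than the real HSQLDBRepository code.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class TradeBotBackupArchiveSketch {
    static void archiveExistingBackup(Path backupDirectory) throws IOException {
        Path existing = backupDirectory.resolve("TradeBotStates.script");
        if (Files.exists(existing)) {
            // Keep the previous backup instead of overwriting it, so trade keys can't be
            // lost when a fresh backup is taken after the database has been replaced.
            Path archived = backupDirectory.resolve("TradeBotStates-archive-" + System.currentTimeMillis() + ".script");
            Files.move(existing, archived);
        }
    }
}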
CalDescent
2c0e099d1c Removed wildcard import that was automatically introduced by Intellij. 2021-04-17 14:36:24 +01:00
CalDescent
b1eb02eb1d
Merge pull request #33 from QuickMythril/version-on-tooltip
add version on tooltip
2021-04-17 13:21:20 +01:00
CalDescent
c919797553 When syncing to a peer on a different fork, ensure that all blocks are obtained before applying them.
In version 1.4.6, we would still sync with a peer even if we only received a partial number of the requested blocks/summaries. This could create a new problem, because the BlockMinter would often try and make up the difference by minting a new fork of up to 5 blocks in quick succession. This could have added to network confusion.

Longer term we may want to adjust the BlockMinter code to prevent this from taking place altogether, but in the short term I will revert this change from 1.4.6 until we have a better way.
2021-04-17 13:09:52 +01:00
CalDescent
08dacab05c Make sure to give up if we are requesting block summaries when the core needs to shut down. 2021-04-17 12:57:28 +01:00
CalDescent
2efc9218df Improved the process of selecting the next peer to sync with
Added a new step, which attempts to filter out peers that are on inferior chains, by comparing them against each other and our chain. The basic logic is as follows:

1. Take the list of peers that we'd previously have chosen from randomly.
2. Figure out our common block with each of those peers (if it's within 240 blocks), using cached data if possible.
3. Remove peers with no common block.
4. Find the earliest common block, and compare all peers with that common block against each other (and against our chain) using the chain weight method. This involves fetching (up to 200) summaries from each peer after the common block, and (up to 200) summaries from our own chain after the common block.
5. If our chain was superior, remove all peers with this common block, then move up to the next common block (in ascending order), and repeat from step 4.
6. If our chain was inferior, remove any peers with lower weights, then remove all peers with higher common blocks.
7. We end up with a reduced list of peers that should in theory be on superior or equal chains to ours. Pick one of those at random and sync to it.

This is a high risk feature - we don't yet know the impact on network load. Nor do we know whether it will cause issues due to prioritising longer chains, since the chain weight algorithm currently prefers them.
2021-04-17 12:52:19 +01:00
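A structural sketch of steps 2 to 7 above, placed after this entry so the numbered list stays intact. The Peer interface, common-block lookup and chain-weight calculation are placeholders; only the filtering flow is being illustrated, not the real Synchronizer implementation.

import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntFunction;
import java.util.stream.Collectors;

class PeerFilterPassSketch {

    interface Peer {
        Integer commonBlockHeight();              // null if no common block within the last 240 blocks
        BigInteger chainWeightAfter(int height);  // weight of (up to 200) summaries after the common block
    }

    static List<Peer> filterInferiorPeers(List<Peer> candidates, IntFunction<BigInteger> ourChainWeightAfter) {
        // Step 3: remove peers with no common block.
        List<Peer> peers = new ArrayList<>(candidates);
        peers.removeIf(peer -> peer.commonBlockHeight() == null);

        // Steps 4-6: work through the distinct common block heights in ascending order.
        List<Integer> heights = peers.stream()
                .map(Peer::commonBlockHeight)
                .distinct()
                .sorted()
                .collect(Collectors.toList());

        for (int height : heights) {
            BigInteger ourWeight = ourChainWeightAfter.apply(height);

            List<Peer> peersAtHeight = peers.stream()
                    .filter(peer -> peer.commonBlockHeight() == height)
                    .collect(Collectors.toList());

            boolean ourChainInferior = peersAtHeight.stream()
                    .anyMatch(peer -> peer.chainWeightAfter(height).compareTo(ourWeight) > 0);

            if (!ourChainInferior) {
                // Step 5: our chain is superior here - discard these peers and move to the next height.
                peers.removeAll(peersAtHeight);
            } else {
                // Step 6: our chain is inferior - keep only the heavier peers at this height
                // and drop everything with a higher (more recent) common block.
                peers.removeIf(peer -> peer.commonBlockHeight() == height
                        && peer.chainWeightAfter(height).compareTo(ourWeight) <= 0);
                peers.removeIf(peer -> peer.commonBlockHeight() > height);
                break;
            }
        }

        // Step 7: whatever remains should be on chains superior or equal to ours.
        return peers;
    }
}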
CalDescent
41505dae11 Treat two block summaries as equal if they have matching signatures 2021-04-16 09:40:22 +01:00
CalDescent
45efe7cd56 Slight reordering of vars. 2021-04-10 18:24:33 +01:00
CalDescent
78cac7f0e6 Updated usage info to reflect the fact that the "count" parameter is optional.
Usage:

block-timings.sh <startheight> [count] [target] [deviation] [power]
2021-04-10 18:12:09 +01:00
CalDescent
a1a1b8e94a Added tools/block-timings.sh, which can be used to test out new block timings (specified in blockchain.json).
The script will fetch a set of blocks and then backtest the specified blockTimings settings (target, deviation, and power) against those real life blocks. This allows configurations to be fine tuned to tighten up block times, and to adjust the timestamp variance between levels.

Usage:
block-timings.sh <startheight> <count> [target] [deviation] [power]

startheight: a block height, preferably within the untrimmed range, to avoid data gaps
count: the number of blocks to request and analyse after the start height. Default: 100
target: the target block time in milliseconds. Originates from blockchain.json. Default: 60000
deviation: the allowed block time deviation in milliseconds. Originates from blockchain.json. Default: 30000
power: used when transforming key distance to a time offset. Originates from blockchain.json. Default: 0.2
2021-04-10 17:57:28 +01:00
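For example, backtesting 1000 blocks from a hypothetical start height of 500000 against the default target, deviation and power listed above:

block-timings.sh 500000 1000 60000 30000 0.2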
CalDescent
641a658059 Added /blocks/byheight/{height}/mintinginfo API, which returns info on the minter level, key distance, and block timings. 2021-04-10 17:49:04 +01:00
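For example, assuming a local node on the default API port of 12391 and a hypothetical height:

curl http://localhost:12391/blocks/byheight/123456/mintinginfo

The response describes the minter level, key distance and block timing values for that block, as noted in the commit above.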
CalDescent
44ec447014 Show an error in publish-auto-update.pl if both sha256sum and sha256 aren't found in PATH. 2021-04-01 08:27:56 +01:00
CalDescent
98308ecf98 Bump version to 1.4.6 2021-04-01 08:09:50 +01:00
CalDescent
8d613a6472 MAXIMUM_RETRIES reduced from 3 to 1 2021-03-30 13:07:34 +01:00
CalDescent
c3e5298ecd Added a few checks for Controller.isStopping() in synchronizer loops, to try and speed up the shutdown time. 2021-03-30 13:05:43 +01:00
CalDescent
e89d31eb5a Rewrite of Synchronizer.syncToPeerChain(), this time borrowing ideas from Synchronizer.applyNewBlocks().
Main differences / improvements:
- Only request a single batch of signatures upfront, instead of the entire peer's chain. There is no point in requesting them all, as the later ones may not be valid by the time we have finished requesting all the blocks before them.
- If we fail to fetch a block, clear any queued signatures that are in memory and re-fetch signatures after the last block received. This allows us to cope with peers that re-org whilst we are syncing with them.
- If we can't find any more block signatures, or the peer fails to respond to a block, apply our progress anyway. This should reduce wasted work and network congestion, and helps cope with larger peer re-orgs.
- The retry mechanism remains in place, but instead of fetching the same incorrect block over and over, it will attempt to locate a new block signature each time, as described above. To help reduce code complexity, block signature requests are no longer retried.
2021-03-30 12:29:27 +01:00
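A structural sketch of the reworked fetch loop described above, placed after this entry to keep the list intact. The Peer and Block interfaces, constants and control flow are illustrative; the real Synchronizer.syncToPeerChain() handles validation and many more edge cases.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class SyncToPeerChainSketch {

    interface Peer {
        List<byte[]> requestSignaturesAfter(byte[] signature, int batchSize); // may return fewer than asked for
        Block requestBlock(byte[] signature);                                 // may return null on failure
    }

    interface Block { byte[] getSignature(); }

    static final int SYNC_BATCH_SIZE = 200; // illustrative; the real constant lives in the Synchronizer
    static final int MAXIMUM_RETRIES = 2;   // most recent value in this log (commit b37f2c7d7f)

    static List<Block> fetchPeerBlocks(Peer peer, byte[] commonBlockSignature) {
        byte[] lastReceivedSignature = commonBlockSignature;
        Deque<byte[]> queuedSignatures = new ArrayDeque<>();
        List<Block> receivedBlocks = new ArrayList<>();
        int retryCount = 0;

        while (true) {
            if (queuedSignatures.isEmpty()) {
                // Request a single batch of signatures at a time; later ones may no longer
                // be valid by the time the blocks before them have been fetched.
                List<byte[]> signatures = peer.requestSignaturesAfter(lastReceivedSignature, SYNC_BATCH_SIZE);
                if (signatures == null || signatures.isEmpty())
                    break; // no more signatures - fall through and apply the progress made so far
                queuedSignatures.addAll(signatures);
            }

            Block block = peer.requestBlock(queuedSignatures.peekFirst());

            if (block == null) {
                // Block fetch failed: drop the queued signatures and, on retry, re-request
                // signatures after the last block received, to cope with mid-sync re-orgs.
                queuedSignatures.clear();
                if (++retryCount > MAXIMUM_RETRIES)
                    break; // give up; the caller still applies whatever blocks were received
                continue;
            }

            queuedSignatures.removeFirst();
            receivedBlocks.add(block);
            lastReceivedSignature = block.getSignature();
            retryCount = 0;
        }

        return receivedBlocks;
    }
}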
CalDescent
30160e2843 Fixes to allow publish-auto-update.sh to work with sha256sum versions that add trailing characters. 2021-03-21 18:15:29 +00:00
catbref
503d22e4d0 Updated Qortal.aip for WindowsInstaller for v1.4.5 2021-03-21 18:05:38 +00:00
CalDescent
b9a0d489d7 Bump version to 1.4.5 2021-03-21 17:06:10 +00:00
catbref
d9d4c4c302 Bump Peer response timeout from 2s to 3s 2021-03-21 16:17:40 +00:00
catbref
81c6d75d62 Adjust Synchronizer.MAXIMUM_BLOCK_STEP to 128, which means the final summaries request will have enough to cover MAXIMUM_COMMON_DELTA (8+16+32+64+128 = 248, which is >240) 2021-03-21 16:12:41 +00:00
catbref
d1419bdfbd Minor comments, adjust max step size when searching for common block 2021-03-21 15:57:00 +00:00
CalDescent
8566d9b7e5 Merge branch 'master' into synchronization-improvements 2021-03-21 15:04:43 +00:00
catbref
b319d6db6b Rework BlockMessage caching with new pseudo outgoing-only message that only caches raw bytes 2021-03-21 14:14:15 +00:00
CalDescent
35fd1d8455 Base58 encode signatures in recently added logs. 2021-03-21 14:12:04 +00:00
CalDescent
be21771e49 Use SYNC_BATCH_SIZE instead of MAXIMUM_BLOCK_SIGNATURES_PER_REQUEST. 2021-03-21 13:58:42 +00:00
catbref
745528a9b1 Peer.sendMessage() should return false when it can't send because it can't build the message 2021-03-21 13:19:59 +00:00
CalDescent
f1422af95b Added retry mechanisms in Synchronizer.syncToPeerChain()
Until now, we required a perfect success rate when syncing with a peer via Synchronizer.syncToPeerChain(). Blocks were requested individually, but the node would give up and lose all progress if a single request failed. In practice, this happened very regularly, and it was difficult to succeed when there were a large number of blocks (e.g. 20+) that needed to be requested.

This commit adds two retry mechanisms, causing each of the two request types (block sigs and blocks) to retry 3 times before giving up, potentially avoiding a lot of wasted work. The number of retries is configurable in the MAXIMUM_RETRIES constant, which we could move to settings at some point if this feature proves useful.

The original issue seemed to result in a few side effects:

1. Nodes would spend a large amount of time requesting blocks from peers, only to throw it all away afterwards. This potentially added to network congestion, as nodes were using unnecessary network time to unproductively serve peers.

2. A large number of sync attempts were failing, particularly when a fork emerged with a significant number of divergent blocks (20+). This issue reduced the ability of nodes to sync to the correct chain while they still had time to do so. With every block that passed, it became more and more difficult to switch to the correct chain. Eventually, the correct chain would become TOO_DIVERGENT, at which point there is no way to automatically switch without manual intervention. I hope that this retry mechanism will increase the chances of nodes automatically moving onto the right chain quickly, avoiding the need for a user to intervene.

3. The POST /admin/forcesync API was unlikely to succeed when the peer's chain had started to diverge from the user's chain. This should increase the success rate.

Also included in this commit is a MAXIMUM_BLOCK_SIGNATURES_PER_REQUEST constant. This limits the number of block sigs requested in each batch (default 200). Without this, we are unable to increase MAXIMUM_COMMON_DELTA because it can try and request thousands of block sigs at once, which unsurprisingly doesn't succeed.
2021-03-21 09:41:36 +00:00
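A minimal sketch of the retry idea introduced by this commit; the generic wrapper below is illustrative, since the real code retries the two specific request types (block signatures and blocks) inside Synchronizer.syncToPeerChain().

import java.util.function.Supplier;

class RetrySketch {
    static final int MAXIMUM_RETRIES = 3; // value at the time of this commit; tuned several times later in this log

    // Attempt a request once, then retry up to MAXIMUM_RETRIES times before giving up.
    static <T> T requestWithRetries(Supplier<T> request) {
        for (int attempt = 0; attempt <= MAXIMUM_RETRIES; attempt++) {
            T result = request.get();
            if (result != null)
                return result; // success
        }
        return null; // all attempts failed - the caller gives up on this peer
    }
}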