Commit Graph

935 Commits

CalDescent
c8897ecf9b Rewrite of HSQLDBATRepository.getBlockATStatesAtHeight() SQL query
The previous query was taking almost half a second to run each time, whereas the new version runs 10-100x faster. This was the main bottleneck with block serialization and should therefore allow for much faster syncing once rolled out to the network. Tested several thousand blocks to ensure that the results returned by the new query match those returned by the old one.
2021-05-24 19:52:20 +01:00
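For illustration only, a minimal sketch of the kind of per-height AT-states lookup this commit describes: one parameterized, indexed query per block height instead of repeated per-AT queries. The table and column names (`ATStates`, `AT_address`, `state_hash`, `height`) and the surrounding types are assumptions, not the actual Qortal schema or repository API.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class ATStatesByHeightExample {

    /** Simple value holder for one AT state row (illustrative only). */
    public record ATStateInfo(String atAddress, byte[] stateHash) {}

    /**
     * Fetch all AT states for a single block height with one indexed query,
     * rather than querying per-AT. Table and column names are assumed.
     */
    public static List<ATStateInfo> getBlockATStatesAtHeight(Connection connection, int height) throws SQLException {
        String sql = "SELECT AT_address, state_hash FROM ATStates WHERE height = ? ORDER BY AT_address";

        List<ATStateInfo> states = new ArrayList<>();
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setInt(1, height);
            try (ResultSet resultSet = stmt.executeQuery()) {
                while (resultSet.next())
                    states.add(new ATStateInfo(resultSet.getString(1), resultSet.getBytes(2)));
            }
        }
        return states;
    }
}
```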
CalDescent
2c8b94d469 Always use the org.qortal.utils.Base58 implementation
A couple of classes were using the bitcoinj alternative, which is twice as slow. This mostly affected the API on port 12392, as byte arrays were automatically encoded as base58 strings via the Base58TypeAdapter / JAXB package-info.
2021-05-24 19:38:01 +01:00
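A tiny sketch of the call-site change described above, assuming the usual `byte[] -> String` shape for `org.qortal.utils.Base58.encode()`; the bitcoinj alternative is shown only as the path being avoided.

```java
public class Base58Example {
    public static String encodeForApi(byte[] bytes) {
        // Preferred: project implementation (assumed signature: encode(byte[]) -> String)
        return org.qortal.utils.Base58.encode(bytes);

        // Avoided: bitcoinj alternative, reportedly about twice as slow
        // return org.bitcoinj.core.Base58.encode(bytes);
    }
}
```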
CalDescent
36c1cfae51 Log the P2SH address when redeeming or refunding LTC via the API. 2021-05-24 19:00:04 +01:00
CalDescent
41ad78750e Don't allow QORT addresses to be used as the receiving address when redeeming LTC
This is probably more validation than is actually needed, but given that we use the same field for LTC and QORT receiving addresses in the database, it is best to be extra careful.
2021-05-24 18:59:41 +01:00
CalDescent
3eaa4d5b38 Added /crosschain/htlc/refund/LITECOIN/{ataddress}/{receivingAddress} API
This is the same as the /crosschain/htlc/refund/LITECOIN/{ataddress} API, but allows a custom destination address to be specified.
2021-05-23 18:52:03 +01:00
CalDescent
80311355ae Added /blocks/signature/{signature}/data API
This returns serialized, base58 encoded data for the entire block. It is the same format as the data sent between nodes when synchronizing, with base58 encoding added so that it can be outputted cleanly in the API response.
2021-05-23 13:10:47 +01:00
CalDescent
39d1590ace Improved descriptions of the new API endpoints. 2021-05-22 14:16:14 +01:00
CalDescent
0b36b650a4 Added /redeem/LITECOIN/{ataddress} API
This is the equivalent of the refund API, but can be used by the seller to redeem LTC from a stuck transaction by supplying the associated AT address. There are no lockTime requirements; it is redeemable as soon as the buyer has redeemed the QORT and sent the secret to the seller.
2021-05-22 13:59:00 +01:00
CalDescent
39575e8542 Added /refund/LITECOIN/{ataddress} API
This is designed to be called by the buyer, and will force a refund of their P2SH transaction associated with the supplied AT. The tradebot responsible for this trade must be present in the user's db for this API to access the necessary data. It must be called after lockTime has passed, which for LTC is currently 60 minutes from the time that the P2SH was funded. Trying to refund before this time will result in a FOREIGN_BLOCKCHAIN_TOO_SOON error.
2021-05-22 10:09:28 +01:00
CalDescent
326ef498b0 Added /crosschain/htlc/redeem/LITECOIN/{ataddress}/{tradePrivateKey}/{secret}/{receivingAddress} API
This can currently be used by either the buyer or the seller, but it requires the seller's trade private key & receiving address to be specified, along with the buyer's secret. Currently hardcoded to LITECOIN but I will aim to make this generic as we start adding more coins.
2021-05-22 09:51:57 +01:00
CalDescent
5148bad82e /crosschain/htlc APIs now take base58 encoded params instead of hex.
This makes them more compatible with the output of the /crosschain/tradebot and /crosschain/trade/{ataddress} APIs, which are likely where most people will be retrieving data from, rather than from the database itself.
2021-05-20 09:20:14 +01:00
CalDescent
518f02472f Added POST /crosschain/LitecoinACCTv1/redeemmessage API
This is similar to the BTC equivalent, but removes secretB as an input parameter. It also signs and broadcasts the transaction, because the wallet isn't needed for this. These transactions have to be signed using the tradePrivateKey from the tradebot data rather than any of the wallet's keys.

There are two other LitecoinACCTv1 APIs still to implement, but I will leave these until they are needed.
2021-05-20 07:59:19 +01:00
CalDescent
27aeb4f05f Merge branch 'master' into sync-multiple-blocks
# Conflicts:
#	src/main/java/org/qortal/controller/Synchronizer.java
2021-05-16 11:21:34 +01:00
CalDescent
13dcf7f72a Added/updated some comments relating to a possible future optimization. 2021-05-16 11:03:11 +01:00
CalDescent
65c26f17df Reduced "Error while trying to find common block with peer" log from INFO to DEBUG when determining which peer to sync with. When performing the actual synchronization, use INFO logging as this is a more serious error. 2021-05-16 10:45:40 +01:00
CalDescent
3bedba71d5 Reduced frequency and level of some synchronizer logs. 2021-05-16 10:36:41 +01:00
CalDescent
84bf570243 Added optional "maxtrades" parameter to /crosschain/price/{blockchain} API
This specifies the maximum number of trades to be used when calculating the price. Default: 10
2021-05-16 09:51:11 +01:00
CalDescent
28d50bccf9 Exclude peers if we don't have a complete set of their block summaries.
This tightens up the decision making by adding two requirements:

1. The peer must return the same number of summaries as were requested.
2. The peer must return a summary that matches its latest reported signature.

This ensures we are always making sync decisions based on accurate data, and removes peers that are currently mid re-org. This is probably more validation than is actually necessary, but it's best to be really thorough here so it is as optimized as possible.
2021-05-16 09:15:37 +01:00
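A hedged sketch of the two checks described in this commit, written against simplified stand-in types; `BlockSummary` and the method name are placeholders, not Qortal's actual classes.

```java
import java.util.Arrays;
import java.util.List;

public class PeerSummaryValidation {

    /** Minimal stand-in for a block summary: only the signature matters here. */
    public record BlockSummary(byte[] signature) {}

    /**
     * Returns true only if the peer supplied a complete, self-consistent set of summaries:
     * 1. the peer returned exactly as many summaries as were requested, and
     * 2. one of the summaries matches the chain tip signature the peer last reported.
     * Peers failing either check are skipped for this round of syncing.
     */
    public static boolean hasCompleteSummaries(List<BlockSummary> summaries, int requestedCount,
                                               byte[] peerReportedTipSignature) {
        if (summaries == null || summaries.size() != requestedCount)
            return false; // incomplete response - peer may be mid re-org

        return summaries.stream()
                .anyMatch(summary -> Arrays.equals(summary.signature(), peerReportedTipSignature));
    }
}
```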
CalDescent
66711c2e9d Require a complete sync in syncToPeerChain()
We have gone backwards and forwards on this one a lot recently, but now that stability has returned, it is best to tighten this up. Previously it was loosened to help reduce network load, but that is no longer a problem. With this stricter approach, it should prevent a node ending up in an incomplete state after syncing, which is the main cause of the shorter re-orgs we are seeing.
2021-05-16 08:45:23 +01:00
CalDescent
92d8c37d7d Added AT count to block debug logs. 2021-05-15 12:54:46 +01:00
CalDescent
5824f75669 Rework of the repository export and import functions.
The existing HSQL export/import (PERFORM EXPORT SCRIPT and PERFORM IMPORT SCRIPT) have been replaced with a custom JSON import and export. Whilst this is less generic, it has some significant advantages:

- When exporting data, it is now able to combine the exported data with any data that already exists in the backup file. This prevents a backup after a bootstrap from overwriting data from before the bootstrap, and removes the need for all of the "archive" files that we currently create.
- Adds support for partial imports, and updates. Previously an import would fail if any of the data being imported already existed in the db. It will now add new rows and update existing ones.
- The format and contents of the exported trade bot data now matches the output of the /crosschain/tradebot API.
- Data is retrieved without the need for a database lock, and therefore the export process is much faster and less invasive. This should prevent the lockups and other problems seen when using the trade portal.

For now, there are a couple of trade-offs to using this new approach:
- The minting key import/export has been temporarily removed until there is more time to transition it to this new format.
- Existing .script backups can no longer be imported using versions higher than 1.5.1.

Both of these can be solved by temporarily running version 1.5.1, performing the necessary imports/exports, then returning to the latest version. Longer term the minting keys export/import will be reimplemented using the JSON format.
2021-05-15 12:19:15 +01:00
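For illustration, a minimal sketch of the "combine exported data with the existing backup" idea, using org.json types. The backup layout and the `atAddress` key field are assumptions; the real export format follows the /crosschain/tradebot API output as described above.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import org.json.JSONArray;
import org.json.JSONObject;

public class TradeBotBackupMerge {

    /**
     * Merge newly exported trade bot entries into an existing JSON backup file,
     * keyed on "atAddress" (field name assumed), so a backup taken after a
     * bootstrap does not overwrite entries captured before it.
     */
    public static void mergeIntoBackup(Path backupFile, JSONArray newEntries) throws Exception {
        JSONArray combined = Files.exists(backupFile)
                ? new JSONArray(Files.readString(backupFile))
                : new JSONArray();

        for (int i = 0; i < newEntries.length(); i++) {
            JSONObject entry = newEntries.getJSONObject(i);
            String atAddress = entry.optString("atAddress");

            // Update an existing row with the same key, otherwise append it
            int existingIndex = indexOf(combined, atAddress);
            if (existingIndex >= 0)
                combined.put(existingIndex, entry);
            else
                combined.put(entry);
        }

        Files.writeString(backupFile, combined.toString(2));
    }

    private static int indexOf(JSONArray array, String atAddress) {
        for (int i = 0; i < array.length(); i++)
            if (atAddress.equals(array.getJSONObject(i).optString("atAddress")))
                return i;
        return -1;
    }
}
```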
CalDescent
d2649b237c Moved chain weight calculation log from DEBUG to TRACE. 2021-05-11 19:01:23 +01:00
CalDescent
255233fe38 Reduced log spam. 2021-05-10 09:10:51 +01:00
CalDescent
6532c258f6 Reduced log spam. 2021-05-10 09:10:14 +01:00
CalDescent
4ac3984b7c Merge branch 'master' into sync-multiple-blocks
# Conflicts:
#	src/main/java/org/qortal/settings/Settings.java
2021-05-10 09:09:41 +01:00
CalDescent
83e2b10904 Merge branch 'ignore-old-versions' 2021-05-10 09:01:04 +01:00
CalDescent
26c1793d85 Added "allowConnectionsWithOlderPeerVersions" setting (default: true)
This controls whether to allow connections with peers below minPeerVersion.

If true, we won't sync with them but they can still sync with us, and will show in the peers list. This is the default, which allows older nodes to continue functioning, but prevents them from interfering with the sync behaviour of updated nodes.

If false, sync will be blocked both ways, and they will not appear in the peers list at all.
2021-05-10 09:00:42 +01:00
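A hedged sketch of how this setting might be applied; the method names and version representation are placeholders rather than Qortal's actual Settings/Controller API, but the decision logic mirrors the commit text.

```java
public class PeerVersionPolicy {

    public enum Action { DISCONNECT, CONNECT_BUT_DO_NOT_SYNC, CONNECT_AND_SYNC }

    /**
     * Decide what to do with a connected peer based on its version.
     * Below minPeerVersion: either keep the connection (but never sync from them),
     * or refuse the connection entirely, depending on the setting.
     */
    public static Action actionFor(long peerVersion, long minPeerVersion,
                                   boolean allowConnectionsWithOlderPeerVersions) {
        if (peerVersion >= minPeerVersion)
            return Action.CONNECT_AND_SYNC;

        return allowConnectionsWithOlderPeerVersions
                ? Action.CONNECT_BUT_DO_NOT_SYNC
                : Action.DISCONNECT;
    }
}
```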
CalDescent
23a9eea26b Merge branch 'ignore-old-versions' 2021-05-09 23:02:35 +01:00
CalDescent
af9b536dd9 Moved version check above getMinBlockchainPeers() check, so that nodes with old versions aren't counted. 2021-05-09 23:00:51 +01:00
CalDescent
e300a957e4 Added online accounts count to /blocks/byheight/{height}/mintinginfo API and block-timings.sh script. 2021-05-09 19:25:05 +01:00
CalDescent
f6ba5f5d51 Added /blocks/byheight/{height}/mintinginfo API, which returns info on the minter level, key distance, and block timings. 2021-05-09 19:24:25 +01:00
CalDescent
c4cbb64643 Added "minPeerVersion" setting, and avoid syncing with peers on lower versions. 2021-05-09 17:38:07 +01:00
CalDescent
8260cec713 Added "maximumCount" parameter to HSQLDBATRepository.getMatchingFinalATStatesQuorum() and use it to limit the number of ATs being returned in the query.
Initially set to 10 when used by the /crosschain/price/{blockchain} API, so that the price is based on the last 10 trades rather than every trade that has ever taken place.
2021-05-09 15:56:15 +01:00
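For illustration, a sketch of basing the reported price on only the most recent trades, as described above. The trade fields and names are assumptions; only the "aggregate over at most N recent trades" idea comes from the commit.

```java
import java.util.List;

public class CrossChainPriceExample {

    /** Minimal stand-in for a completed trade: amounts in integer units. */
    public record CompletedTrade(long qortAmount, long foreignAmount) {}

    /**
     * Aggregate price over at most maxTrades of the most recent trades
     * (list assumed newest-first), mirroring the "last 10 trades" behaviour.
     * Returns foreign units per QORT unit.
     */
    public static double calculatePrice(List<CompletedTrade> recentTrades, int maxTrades) {
        long totalQort = 0;
        long totalForeign = 0;

        for (CompletedTrade trade : recentTrades.subList(0, Math.min(maxTrades, recentTrades.size()))) {
            totalQort += trade.qortAmount();
            totalForeign += trade.foreignAmount();
        }

        if (totalQort == 0)
            return 0.0; // no trade data to derive a price from

        return (double) totalForeign / (double) totalQort;
    }
}
```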
CalDescent
428af3c0e8 Only use fast sync on trimmed blocks, as others are too large.
This could probably be improved to make sure that all blocks in the next request are within the trimmed time range, but it's enough for now.
2021-05-09 13:57:47 +01:00
CalDescent
68544715bf Skip Block.logDebugInfo() altogether if the log level is more specific than DEBUG, to avoid wasting resources. 2021-05-09 09:05:51 +01:00
CalDescent
d2ea5633fb Fixed divide by zero exception.
Block.calcKeyDistance() cannot be called on some trimmed blocks, because the minter level cannot be inferred in some cases. This generally hasn't been an issue, but the new Block.logDebugInfo() method is invoking it for all blocks. For now I am adding defensiveness to the debug method, but longer term we might want to add defensiveness to Block.calcKeyDistance() itself, if we ever encounter this issue again. I will leave it alone for now, to reduce risk.
2021-05-09 09:05:42 +01:00
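A hedged sketch of the defensiveness described above: skip the key-distance calculation when the minter level cannot be determined, instead of letting a divide-by-zero surface. The method bodies are placeholders standing in for Block.logDebugInfo() / Block.calcKeyDistance().

```java
public class KeyDistanceLoggingSketch {

    /**
     * Illustrative stand-in for the key distance calculation: it divides by the
     * minter level, so a level of zero (possible for some trimmed blocks where
     * the level cannot be inferred) would throw ArithmeticException.
     */
    static long calcKeyDistance(long rawDistance, int minterLevel) {
        return rawDistance / minterLevel;
    }

    /** Defensive debug logging: only compute the key distance when it is safe to. */
    static String describeBlock(int height, long rawDistance, int minterLevel) {
        String keyDistance = (minterLevel > 0)
                ? String.valueOf(calcKeyDistance(rawDistance, minterLevel))
                : "unknown (minter level could not be inferred)";

        return String.format("Block %d: minter level %d, key distance %s", height, minterLevel, keyDistance);
    }
}
```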
CalDescent
f4520e2752 Skip Block.logDebugInfo() altogether if the log level is more specific than DEBUG, to avoid wasting resources. 2021-05-09 09:00:53 +01:00
CalDescent
475802afbc Fixed divide by zero exception.
Block.calcKeyDistance() cannot be called on some trimmed blocks, because the minter level cannot be inferred in some cases. This generally hasn't been an issue, but the new Block.logDebugInfo() method is invoking it for all blocks. For now I am adding defensiveness to the debug method, but longer term we might want to add defensiveness to Block.calcKeyDistance() itself, if we ever encounter this issue again. I will leave it alone for now, to reduce risk.
2021-05-09 08:25:24 +01:00
CalDescent
3aa9b5f0b6 Increased timeout when syncing multiple blocks from 4s to 10s. 2021-05-08 22:36:41 +01:00
CalDescent
6c5dbf7bd0 Preliminary version for fast sync set to 1.6.0, because 1.5.x is already released. 2021-05-08 16:36:42 +01:00
CalDescent
3b3dc5032b Merge branch 'master' into sync-multiple-blocks
# Conflicts:
#	pom.xml
#	src/main/java/org/qortal/controller/Synchronizer.java

Removed all fast sync code from Controller.syncToPeerChain(), so it is now the same as `master`.
2021-05-08 16:35:09 +01:00
CalDescent
fe4ae61552 Added "maxRetries" setting.
This controls the maximum number of retry attempts if a peer fails to respond with the requested data.
2021-05-06 17:49:45 +01:00
CalDescent
6e9a61c4e5 Fixed logging issue where it would underreport the number of common blocks found when loading some from the cache. 2021-05-02 20:51:53 +01:00
CalDescent
8e244fd956 Fixed yet another bug with minChainLength. 2021-05-02 20:45:20 +01:00
CalDescent
2eb6771963 Adapted logging in comparePeers() to report correct values for both chain weight algorithms. 2021-05-02 20:26:51 +01:00
CalDescent
db77108054 Log the number of blocks used in Block.calcChainWeight()
This makes it easier to check that the new consensus code is being used, and that it is working correctly.
2021-05-02 19:59:32 +01:00
CalDescent
241e2bef85 Merge branch 'master' into chain-weight-consensus
# Conflicts:
#	src/main/java/org/qortal/block/BlockChain.java
#	src/main/resources/blockchain.json
#	src/test/resources/test-chain-v2-founder-rewards.json
#	src/test/resources/test-chain-v2-leftover-reward.json
#	src/test/resources/test-chain-v2-minting.json
#	src/test/resources/test-chain-v2-qora-holder-extremes.json
#	src/test/resources/test-chain-v2-qora-holder.json
#	src/test/resources/test-chain-v2-reward-scaling.json
#	src/test/resources/test-chain-v2.json
2021-05-02 18:18:20 +01:00
CalDescent
fac02dbc7d Fixed bug in maxHeight parameter passed to Block.calcChainWeight()
Like the others, this one is only relevant after switching to same-length chain weight comparisons.
2021-05-02 15:56:13 +01:00
CalDescent
9ebcd55ff5 Fixed calculation error in existing chain weight code, which would have caused the last block to be missed out of the comparison after switching to same-length chain comparisons. 2021-05-01 13:34:13 +01:00
CalDescent
50244c1c40 Fixed bug which would cause other peers to not be compared against each other, if we had no blocks ourselves.
Again, this wouldn't have affected anything in 1.5.0 or before, but it will become more significant if we switch to same-length chain weight comparisons.
2021-05-01 13:32:16 +01:00
CalDescent
b4395fdad1 Fixed bug which could cause minChainLength to report a higher value than it should.
This wouldn't have affected anything in 1.5.0, but it will become more significant if we switch to same-length chain weight comparisons.
2021-05-01 10:57:24 +01:00
CalDescent
1da8994be7 Log the block timestamp, minter level, online accounts, key distance, and weight, when orphaning or processing.
This gives an insight into the contents of each chain when doing a re-org. To enable this logging, add the following to log4j2.properties:

logger.block.name = org.qortal.block.Block
logger.block.level = debug
2021-05-01 10:24:50 +01:00
55ff1e2bb1
updated and tested BTC electrum servers (#36)
* updated electrum servers

mainnet list: https://1209k.com/bitcoin-eye/ele.php?chain=btc
testnet list: https://1209k.com/bitcoin-eye/ele.php?chain=tbtc

* removed servers

tested each mainnet server individually and removed those that did not respond
2021-05-01 09:18:46 +01:00
CalDescent
5fd8528c49 Small refactor for code readability, and added some defensiveness to avoid possible NPEs. 2021-04-29 09:04:59 +01:00
CalDescent
26d8ed783a Same as commit c0c5bf1, but for blocks as well as block summaries. 2021-04-29 08:55:16 +01:00
CalDescent
c0c5bf1591 Apply blocks in syncToPeerChain() if the latest received block is newer than our latest, and we started from an out of date chain.
This solves a common problem that is mostly seen when starting a node that has been switched off for some time, or when starting from a bootstrap. In these cases, it can be difficult to get synced to the latest chain if you are starting from a small fork. This is because it required the node to be brought up to date via a single peer, and there wasn't much room for error if it failed to retrieve a block a couple of times. This generally caused the blocks to be thrown away and it would try the same process over and over.

The solution is to apply new blocks if the most recently received block is newer than our current latest block. This gets the node back on to the main fork where it can then sync using the regular applyNewBlocks() method.
2021-04-28 22:03:13 +01:00
CalDescent
ea1fed2fd3 Merge branch 'block-reward-distribution-fix' 2021-04-26 17:16:14 +01:00
CalDescent
b37f2c7d7f MAXIMUM_RETRIES set to 2, as 3 retries may have been slightly too many. 2021-04-26 17:08:21 +01:00
CalDescent
0c0c5ff077 Invalidate our block summaries cache for a peer if it fails to respond with signatures when synchronizing. 2021-04-25 12:50:40 +01:00
CalDescent
e12b99d17e Invalidate our common block cache for a peer if we can't find a common block when synchronizing. 2021-04-25 09:37:32 +01:00
CalDescent
d599146c3a Cache peer block summaries to avoid duplicate requests when comparing peers. 2021-04-24 22:10:40 +01:00
CalDescent
476731a2c3 In syncToPeerChain(), only apply a partial set of peer's blocks if they are recent.
If a peer fails to reply with all requested blocks, we will now only apply the blocks we have received so far if at least one of them is recent. This should prevent or greatly reduce the scenario where our chain is taken from a recent to an outdated state due to only partially syncing with a peer. It is best to keep our chain "recent" if possible, as this ensures that the peer selection code always runs, and therefore avoids unnecessarily syncing to a random peer on an inferior chain.
2021-04-24 20:12:11 +01:00
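A minimal sketch of the recency check described above, under the assumption that "recent" is expressed as a timestamp threshold; the real threshold and data types in Qortal may differ.

```java
import java.util.List;

public class PartialSyncDecision {

    /** Stand-in for a received block: only the timestamp matters for this check. */
    public record ReceivedBlock(long timestamp) {}

    /**
     * Only apply a partial set of a peer's blocks if at least one of them is
     * "recent", so a failed sync cannot drag the chain from a recent state
     * back to an outdated one. The threshold is supplied by the caller.
     */
    public static boolean shouldApplyPartialBlocks(List<ReceivedBlock> receivedBlocks,
                                                   long now, long recencyThresholdMillis) {
        return receivedBlocks.stream()
                .anyMatch(block -> now - block.timestamp() <= recencyThresholdMillis);
    }
}
```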
CalDescent
1e491dd8fb MAXIMUM_RETRIES increased from 1 to 3.
Now that we are spending a lot of time to carefully select a peer to sync with, it makes sense to retry a couple more times before giving up and starting the peer selection process all over again.
2021-04-24 19:45:53 +01:00
CalDescent
ba6397b963 Improved logging, to give a clearer picture of the peer selection decisions. 2021-04-24 19:23:09 +01:00
CalDescent
3146da6aec Don't add to the inferior chain signatures list when comparing peers against each other.
In these comparisons it's easy to incorrectly identify a bad chain, as we aren't comparing the same number of blocks. It's quite common for one peer to fail to return all blocks and be marked as an inferior chain, yet we have other "good" peers on that exact same chain. In those cases we would have stopped talking to the good peers again until they received another block.

Instead of complicating the logic and keeping track of the various good chain tip signatures, it is simpler to just remove the inferior peers from this round of syncing, and re-test them in the next round, in case they are in fact superior or equal.
2021-04-24 16:43:29 +01:00
CalDescent
5643e57ede Fixed string formatting error. 2021-04-24 16:21:04 +01:00
CalDescent
f532dbe7b4 Optimized code in Synchronizer.uniqueCommonBlocks() 2021-04-24 15:22:29 +01:00
CalDescent
ec2af62b4d Fix for bug which failed to remove peers without block summaries.
The iterator was removing the peer from the "peersSharingCommonBlock" array, when it should have been removing it from the "peers" array. The result was that the bad peer would end up in the final list of good peers, and we could then sync with it when we shouldn't have.
2021-04-24 15:21:30 +01:00
CalDescent
423142d730 Tidied up RECOVERY_MODE_TIMEOUT constant, and made checkRecoveryModeForPeers() private. 2021-04-24 10:35:01 +01:00
CalDescent
bdddb526da Added recovery mode, which is designed to automatically bring back a stalled network.
The existing system was unable to resume without manual intervention if it stalled for more than 7.5 minutes. After this time, no peers would have "recent" blocks, which are prerequisites for synchronization and minting.

This new code monitors for such a situation, and enters "recovery mode" if there are no peers with recent blocks for at least 10 minutes. It also requires that there is at least one connected peer, to reduce false positives due to bad network connectivity.

Once in recovery mode, peers with no recent blocks are added back into the pool of available peers to sync with, and restrictions on minting are lifted. This should allow for peers to collaborate to bring the chain back to a "recent" block height. Once we have a peer with a recent block, the node will exit recovery mode and sync as normal.

Previously, lifting minting restrictions could have increased the risk of extra forks, however it is much less risky now that nodes no longer mint multiple blocks in a row.

In all cases, minBlockchainPeers is used, so a minimum number of connected peers is required for syncing and minting in recovery mode, too.
2021-04-23 09:21:15 +01:00
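A hedged sketch of the recovery-mode trigger described above: no peers with recent blocks for at least 10 minutes, and at least one connected peer. The field and method names are illustrative, not the actual Controller code.

```java
public class RecoveryModeSketch {

    private static final long RECOVERY_MODE_TIMEOUT_MILLIS = 10 * 60 * 1000L; // 10 minutes, per the commit

    private boolean recoveryMode = false;
    private Long noRecentBlocksSince = null; // when we first saw zero peers with recent blocks

    /**
     * Re-evaluate recovery mode. Enter it once there have been no peers with
     * recent blocks for the timeout period (and we have at least one connection,
     * to reduce false positives from bad connectivity); leave it as soon as any
     * peer reports a recent block again.
     */
    public synchronized boolean updateRecoveryMode(int connectedPeerCount, int peersWithRecentBlock, long now) {
        if (peersWithRecentBlock > 0) {
            noRecentBlocksSince = null;
            recoveryMode = false;
            return recoveryMode;
        }

        if (connectedPeerCount == 0)
            return recoveryMode; // can't tell anything useful while disconnected

        if (noRecentBlocksSince == null)
            noRecentBlocksSince = now;
        else if (now - noRecentBlocksSince >= RECOVERY_MODE_TIMEOUT_MILLIS)
            recoveryMode = true; // allow syncing/minting with "non-recent" peers again

        return recoveryMode;
    }
}
```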
CalDescent
dbf1ed40b3 Log the parent block's signature when minting a new block, to help us keep track of the chain it's being minted on. 2021-04-19 09:33:24 +01:00
CalDescent
02ace06526 Revert "When syncing to a peer on a different fork, ensure that all blocks are obtained before applying them."
This reverts commit c919797553.
2021-04-18 13:03:04 +01:00
CalDescent
2d2bfc0a4c Log the number of common blocks found in each search. 2021-04-18 13:02:38 +01:00
CalDescent
3c22a12cbb Experimental idea to prevent a single node signing more than one block in a row.
This could drastically reduce the number of forks being created. Currently, if a node is having problems syncing, it will continue adding to its own fork, which adds confusion to the network. With this new idea, the node would be prevented from adding to its own chain and is instead forced to wait until it has retrieved the next block from the network.

We will need to test this on the testnet very carefully. My worry is that, because all minters submit blocks, it could create a situation where the first block is submitted by everyone, and the second block is submitted by no-one, until a different candidate for the first block has been obtained from a peer. This may not be a problem at all, and could actually improve stability in a huge way, but at the same time it has the potential to introduce serious network problems if we are not careful.
2021-04-18 10:26:36 +01:00
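A minimal sketch of the "no two blocks in a row" rule being trialled here: before minting, compare the previous block's minter against our own minting account. The parameter names are assumptions.

```java
import java.util.Arrays;

public class ConsecutiveMintCheck {

    /**
     * Returns true if this node is allowed to mint on top of the given parent block,
     * i.e. the parent was NOT minted by the same account. A node that signed the
     * previous block must wait for another candidate from the network instead of
     * extending its own fork.
     */
    public static boolean canMintNextBlock(byte[] parentMinterPublicKey, byte[] ourMintingPublicKey) {
        return !Arrays.equals(parentMinterPublicKey, ourMintingPublicKey);
    }
}
```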
CalDescent
3071ef2f36 Removed redundant uiLocalServers 2021-04-17 20:55:30 +01:00
CalDescent
3022cb22d6 Merge branch 'master' into prioritize-peers 2021-04-17 20:51:35 +01:00
CalDescent
e9b4a3f6b3 Automatically backup trade bot data when starting a new trade (from either side). 2021-04-17 20:45:35 +01:00
CalDescent
4312ebfcc3 Adapted the HSQLDBRepository.exportNodeLocalData() method
It now has a new parameter - keepArchivedCopy - which when set to true will cause it to rename an existing TradeBotStates.script to TradeBotStates-archive-<timestamp>.script before creating a new backup. This should avoid keys being lost if a new backup is taken after replacing the db.

In a future version we can improve this in such a way that it combines existing and new backups into a single file. This is just a "quick fix" to increase the chances of keys being recoverable after accidentally bootstrapping without a backup.
2021-04-17 20:44:57 +01:00
CalDescent
2c0e099d1c Removed wildcard import that was automatically introduced by Intellij. 2021-04-17 14:36:24 +01:00
CalDescent
b1eb02eb1d
Merge pull request #33 from QuickMythril/version-on-tooltip
add version on tooltip
2021-04-17 13:21:20 +01:00
CalDescent
c919797553 When syncing to a peer on a different fork, ensure that all blocks are obtained before applying them.
In version 1.4.6, we would still sync with a peer even if we only received a partial number of the requested blocks/summaries. This could create a new problem, because the BlockMinter would often try to make up the difference by minting a new fork of up to 5 blocks in quick succession. This could have added to network confusion.

Longer term we may want to adjust the BlockMinter code to prevent this from taking place altogether, but in the short term I will revert this change from 1.4.6 until we have a better way.
2021-04-17 13:09:52 +01:00
CalDescent
08dacab05c Make sure to give up if we are requesting block summaries when the core needs to shut down. 2021-04-17 12:57:28 +01:00
CalDescent
2efc9218df Improved the process of selecting the next peer to sync with
Added a new step, which attempts to filter out peers that are on inferior chains, by comparing them against each other and our chain. The basic logic is as follows:

1. Take the list of peers that we'd previously have chosen from randomly.
2. Figure out our common block with each of those peers (if it's within 240 blocks), using cached data if possible.
3. Remove peers with no common block.
4. Find the earliest common block, and compare all peers with that common block against each other (and against our chain) using the chain weight method. This involves fetching (up to 200) summaries from each peer after the common block, and (up to 200) summaries from our own chain after the common block.
5. If our chain was superior, remove all peers with this common block, then move up to the next common block (in ascending order), and repeat from step 4.
6. If our chain was inferior, remove any peers with lower weights, then remove all peers with higher common blocks.
7. We end up with a reduced list of peers, that should in theory be on superior or equal chains to us. Pick one of those at random and sync to it.

This is a high risk feature - we don't yet know the impact on network load. Nor do we know whether it will cause issues due to prioritising longer chains, since the chain weight algorithm currently prefers them.
2021-04-17 12:52:19 +01:00
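A heavily simplified sketch of the filtering idea in the numbered steps above: measure each peer's chain weight from its common block with us over the same span, and drop peers whose chains weigh less than ours. All types and the weight source are placeholders; the real implementation groups peers by common block, caches summaries, and handles many more edge cases.

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntFunction;

public class PeerChainFilterSketch {

    /** Stand-in for a candidate peer: its common block height with us, and its chain weight after it. */
    public record CandidatePeer(String address, int commonBlockHeight, BigInteger chainWeightAfterCommonBlock) {}

    /**
     * Keep only peers whose chain weight (measured from the common block, over the
     * same number of summaries) is at least as good as ours over that span.
     * ourWeightFromHeight stands in for "our chain's weight after that common block".
     */
    public static List<CandidatePeer> filterSuperiorOrEqualPeers(List<CandidatePeer> peers,
                                                                 IntFunction<BigInteger> ourWeightFromHeight) {
        List<CandidatePeer> remaining = new ArrayList<>();

        for (CandidatePeer peer : peers) {
            BigInteger ourWeight = ourWeightFromHeight.apply(peer.commonBlockHeight());

            // Drop peers on chains that weigh less than ours over the same span
            if (peer.chainWeightAfterCommonBlock().compareTo(ourWeight) >= 0)
                remaining.add(peer);
        }

        return remaining; // pick one of these at random to sync with
    }
}
```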
CalDescent
41505dae11 Treat two block summaries as equal if they have matching signatures 2021-04-16 09:40:22 +01:00
CalDescent
8d613a6472 MAXIMUM_RETRIES reduced from 3 to 1 2021-03-30 13:07:34 +01:00
CalDescent
c3e5298ecd Added a few checks for Controller.isStopping() in synchronizer loops, to try and speed up the shutdown time. 2021-03-30 13:05:43 +01:00
CalDescent
e89d31eb5a Rewrite of Synchronizer.syncToPeerChain(), this time borrowing ideas from Synchronizer.applyNewBlocks().
Main differences / improvements:
- Only request a single batch of signatures upfront, instead of the entire peer's chain. There is no point in requesting them all, as the later ones may not be valid by the time we have finished requesting all the blocks before them.
- If we fail to fetch a block, clear any queued signatures that are in memory and re-fetch signatures after the last block received. This allows us to cope with peers that re-org whilst we are syncing with them.
- If we can't find any more block signatures, or the peer fails to respond to a block, apply our progress anyway. This should reduce wasted work and network congestion, and helps cope with larger peer re-orgs.
- The retry mechanism remains in place, but instead of fetching the same incorrect block over and over, it will attempt to locate a new block signature each time, as described above. To help reduce code complexity, block signature requests are no longer retried.
2021-03-30 12:29:27 +01:00
CalDescent
08f3d653cc Added new settings "maxBlocksPerRequest" and "maxBlocksPerResponse", to control the number of blocks requested and returned by nodes when using GetBlocksMessage and BlocksMessage. 2021-03-28 17:27:04 +01:00
CalDescent
f2bbafe6c2 Added missing break statement if a peer fails to respond with blocks when resolving a fork via fast sync. 2021-03-28 16:52:15 +01:00
CalDescent
cb80280eaf Bump Peer response timeout from 3s to 4s 2021-03-28 14:47:57 +01:00
CalDescent
f22f954ae3 Use MAXIMUM_BLOCKS_REQUEST_SIZE for GetBlocksMessage and BlockMessage, instead of MAXIMUM_REQUEST_SIZE.
Currently set to 1, as serialization of the BlocksMessage data on mainnet is too slow to use this for any significant number of blocks right now. Hopefully we can find a way to optimise this process, which will allow us to use this for multiple block syncing.

Until then, sticking with single blocks should still be enough to help solve the network congestion and re-orgs we are seeing, because it gives us the ability to request the next block based on the previous block's signature, which was unavailable using GET_BLOCK. This removes the requirement to fetch all block signatures upfront, and therefore it shouldn't matter if the peer does a partial re-org whilst a node is syncing to it.
2021-03-28 14:35:47 +01:00
CalDescent
2556855bd7 Added missing return statement if a peer fails to respond with blocks when fast syncing. 2021-03-28 14:19:53 +01:00
CalDescent
365662a2af MAXIMUM_RETRIES reduced from 3 to 1. It will now only retry once, which should save around 6 seconds of wasted synchronization time if a node is unable to respond with the requested block (due to a re-org, etc). 2021-03-27 19:25:27 +00:00
CalDescent
3e0ff7f43f Split Synchronizer.applyNewBlocks() into two methods.
If fast syncing is enabled in the settings (by default it's disabled) AND the peer is running at least v1.5.0, then it will route through to a new method which fetches multiple blocks at a time, in a very similar way to Synchronizer.syncToPeerChain().
If fast syncing is disabled in the settings, or we are communicating with a peer on an older version, it will continue to sync blocks individually.
2021-03-27 18:04:47 +00:00
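A hedged sketch of the routing described above; the settings and version check mirror the commit text, but the interface and method names are placeholders rather than the real Synchronizer API.

```java
public class FastSyncDispatchSketch {

    interface BlockFetcher {
        void syncMultipleBlocksAtOnce();  // fast-sync path (placeholder)
        void syncBlocksIndividually();    // original single-block path (placeholder)
    }

    /**
     * Route to the multi-block sync path only when fast sync is enabled in settings
     * AND the peer is running a version that supports multi-block requests (v1.5.0+).
     * Otherwise fall back to syncing blocks one at a time.
     */
    public static void applyNewBlocks(BlockFetcher fetcher, boolean fastSyncEnabled,
                                      long peerVersion, long fastSyncMinPeerVersion) {
        if (fastSyncEnabled && peerVersion >= fastSyncMinPeerVersion)
            fetcher.syncMultipleBlocksAtOnce();
        else
            fetcher.syncBlocksIndividually();
    }
}
```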
CalDescent
8c3753326f Check "isFastSyncEnabledWhenResolvingFork" setting before requesting multiple blocks from a peer. This allows users to opt out of this functionality, by setting it to false in their settings. 2021-03-27 18:00:39 +00:00
CalDescent
dbcf6de2d5 Added new settings "fastSyncEnabled" (default: false) and "fastSyncEnabledWhenResolvingFork" (default: true). 2021-03-27 17:59:23 +00:00
CalDescent
270ac88b51 Added GetBlocksMessage and BlocksMessage, which allow multiple blocks to be transferred between peers in a single request.
When communicating with a peer that is running at least version 1.5.0, it will now sync multiple blocks at once in Synchronizer.syncToPeerChain(). This allows us to bypass the single block syncing (and retry mechanism), which has proven to be unviable when there are multiple active forks with several blocks in each chain.

For peers below v1.5.0, the logic should remain unaffected and it will continue to sync blocks individually.
2021-03-27 15:45:15 +00:00
catbref
d9d4c4c302 Bump Peer response timeout from 2s to 3s 2021-03-21 16:17:40 +00:00
catbref
81c6d75d62 Adjust Synchronizer.MAXIMUM_BLOCK_STEP to 128, which means final summaries request will have enough to cover MAXIMUM_COMMON_DELTA (8+16+32+64+128 = 248, which is >240) 2021-03-21 16:12:41 +00:00
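For illustration, the step sequence behind the 8+16+32+64+128 = 248 figure quoted above: each summaries request doubles the step size, capped at MAXIMUM_BLOCK_STEP, so the cumulative span exceeds MAXIMUM_COMMON_DELTA (240). The initial step of 8 is taken from the sum in the commit message.

```java
public class CommonBlockStepSketch {

    static final int MAXIMUM_BLOCK_STEP = 128;   // value from the commit above
    static final int MAXIMUM_COMMON_DELTA = 240; // value from the commit above

    public static void main(String[] args) {
        int step = 8; // initial step size, matching the 8+16+32+64+128 sum quoted above
        int covered = 0;

        // Doubling the step each round: 8 + 16 + 32 + 64 + 128 = 248 blocks covered
        while (covered < MAXIMUM_COMMON_DELTA) {
            covered += step;
            System.out.printf("requested %d summaries, total span %d%n", step, covered);
            step = Math.min(step * 2, MAXIMUM_BLOCK_STEP);
        }
        // Final span of 248 exceeds MAXIMUM_COMMON_DELTA (240), so the search
        // can always reach the maximum common-block depth.
    }
}
```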
catbref
d1419bdfbd Minor comments, adjust max step size when searching for common block 2021-03-21 15:57:00 +00:00