Compare commits

...

154 Commits

Author SHA1 Message Date
CalDescent
cc6ac4c9d9 Bump version to 1.5.3 2021-05-31 17:36:21 +01:00
CalDescent
815934ff5c Added GET /crosschain/htlc/redeemAll/LITECOIN API
This loops through all sell orders and attempts to redeem the LTC from each one. It will return true if at least one was redeemed, or false if none are available to be redeemed. Details are logged to the log.txt file rather than returned in the API response.
2021-05-29 19:43:08 +01:00
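A minimal sketch of the loop described in the commit above, with a hypothetical redeemAttempt callback standing in for the real per-AT redemption logic:

import java.util.List;
import java.util.function.Predicate;

public class RedeemAllSketch {
    /** Attempts to redeem LTC from each sell order's AT; returns true if at least one succeeds. */
    static boolean redeemAll(List<String> atAddresses, Predicate<String> redeemAttempt) {
        boolean anyRedeemed = false;
        for (String atAddress : atAddresses) {
            try {
                if (redeemAttempt.test(atAddress)) {
                    System.out.println("Redeemed LTC from AT " + atAddress); // the real API logs details to log.txt
                    anyRedeemed = true;
                }
            } catch (RuntimeException e) {
                // A failed redemption is skipped; the loop carries on with the next order
                System.err.println("Could not redeem AT " + atAddress + ": " + e.getMessage());
            }
        }
        return anyRedeemed;
    }
}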
CalDescent
5a84016a91 Merge pull request #39 from szisti/networking
Code formatting, connection age, and logging changes for networking
2021-05-28 10:09:07 +01:00
Istvan Szabo
bb0269f484 Converted time format 2021-05-28 08:53:01 +01:00
Istvan Szabo
1adc9349fc Added connection age to connected peers dto 2021-05-28 08:04:57 +01:00
Istvan Szabo
06215c83f2 Reduced log levels 2021-05-27 10:48:17 +01:00
Istvan Szabo
8a828137ee Removed code coverage report as it seems to conflict with tests randomly 2021-05-27 09:52:33 +01:00
Istvan Szabo
de4b1c8f09 Removed missed functional change 2021-05-27 09:15:32 +01:00
Istvan Szabo
265d40f04a Code formatting and logging changes for networking 2021-05-27 09:03:18 +01:00
szisti
b64e52c0c0 Automated testing (#38)
* added basic workflow

* Testing workflow

* renamed workflow file

* Disabled extremely slow test

* Disabled currently failing tests

* Added jacoco and updated workflow

* We cannot run gui tests headless

* Fixed jacoco configuration

* Updated job name in the workflow

* Adjusting workflow

* Testing maven caching

* Added logging for one of the jacoco related issues

* Updated coverage logging

Co-authored-by: Istvan Szabo <istvan.szabo@betvictor.com>
2021-05-26 11:27:46 +01:00
CalDescent
ac02e5c0a6 Merge pull request #37 from szisti/fix-cross-transaction-display
Fix cross chain transaction display
2021-05-26 08:39:51 +01:00
Istvan Szabo
427a415fbf Adjusted bitcoiny to convert transaction info into the new DTO 2021-05-25 23:57:54 +01:00
Istvan Szabo
9a3414aaa7 Added new DTO to store the data 2021-05-25 23:55:12 +01:00
CalDescent
c8897ecf9b Rewrite of HSQLDBATRepository.getBlockATStatesAtHeight() SQL query
The previous query was taking almost half a second to run each time, whereas the new version runs 10-100x faster. This was the main bottleneck with block serialization and should therefore allow for much faster syncing once rolled out to the network. Tested several thousand blocks to ensure that the results returned by the new query match those returned by the old one.
2021-05-24 19:52:20 +01:00
CalDescent
2c8b94d469 Always use the org.qortal.utils.Base58 implementation
A couple of classes were using the bitcoinj alternative, which is twice as slow. This mostly affected the API on port 12392, as byte arrays were automatically encoded as base58 strings via the Base58TypeAdapter / JAXB package-info.
2021-05-24 19:38:01 +01:00
CalDescent
36c1cfae51 Log the P2SH address when redeeming or refunding LTC via the API. 2021-05-24 19:00:04 +01:00
CalDescent
41ad78750e Don't allow QORT addresses to be used as the receiving address when redeeming LTC
This is probably more validation than is actually needed, but given that we use the same field for LTC and QORT receiving addresses in the database, it is best to be extra careful.
2021-05-24 18:59:41 +01:00
CalDescent
3eaa4d5b38 Added /crosschain/htlc/refund/LITECOIN/{ataddress}/{receivingAddress} API
This is the same as the /crosschain/htlc/refund/LITECOIN/{ataddress} API, but allows a custom destination address to be specified.
2021-05-23 18:52:03 +01:00
CalDescent
35176f9550 Added other files to .gitignore 2021-05-23 16:57:09 +01:00
CalDescent
eb2c7268ea Removed .DS_Store files. 2021-05-23 15:31:26 +01:00
CalDescent
80311355ae Added /blocks/signature/{signature}/data API
This returns serialized, base58 encoded data for the entire block. It is the same format as the data sent between nodes when synchronizing, with base58 encoding added so that it can be outputted cleanly in the API response.
2021-05-23 13:10:47 +01:00
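A minimal client-side sketch of unpacking the response described above, once the base58 string has been decoded to bytes (for example with org.qortal.utils.Base58.decode). The payload layout (4-byte big-endian height followed by the serialized block) is taken from the endpoint's implementation shown further down this page; the method name here is hypothetical:

import java.nio.ByteBuffer;
import java.util.Arrays;

public class BlockDataDecodeSketch {
    /** Splits the decoded payload into its height prefix and the serialized block bytes. */
    static void describe(byte[] payload) {
        int height = ByteBuffer.wrap(payload, 0, 4).getInt(); // written server-side with Ints.toByteArray()
        byte[] serializedBlock = Arrays.copyOfRange(payload, 4, payload.length);
        System.out.println("Block height: " + height + ", serialized block size: " + serializedBlock.length + " bytes");
    }
}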
CalDescent
39d1590ace Improved descriptions of the new API endpoints. 2021-05-22 14:16:14 +01:00
CalDescent
0b36b650a4 Added /redeem/LITECOIN/{ataddress} API
This is the equivalent of the refund API but can be used by the seller to redeem LTC from a stuck transaction, by supplying the associated AT address. There are no lockTime requirements; it is redeemable as soon as the buyer has redeemed the QORT and sent the secret to the seller.
2021-05-22 13:59:00 +01:00
CalDescent
39575e8542 Added /refund/LITECOIN/{ataddress} API
This is designed to be called by the buyer, and will force refund their P2SH transaction associated with the supplied AT. The tradebot responsible for this trade must be present in the user's db for this API to access the necessary data. It must be called after lockTime has passed, which for LTC is currently 60 minutes from the time that the P2SH was funded. Trying to refund before this time will result in a FOREIGN_BLOCKCHAIN_TOO_SOON error.
2021-05-22 10:09:28 +01:00
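A minimal sketch of the timing rule described above; the method name and the use of wall-clock seconds are assumptions, but the behaviour (reject refunds until lockTime has passed, reporting FOREIGN_BLOCKCHAIN_TOO_SOON) follows the commit message:

public class RefundTimingSketch {
    /** Rejects a refund attempt until the P2SH lockTime (in seconds) has passed. */
    static void checkRefundAllowed(long lockTimeSeconds, long nowSeconds) {
        if (nowSeconds <= lockTimeSeconds)
            throw new IllegalStateException("FOREIGN_BLOCKCHAIN_TOO_SOON: refundable only after lockTime " + lockTimeSeconds);
    }
}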
CalDescent
326ef498b0 Added /crosschain/htlc/redeem/LITECOIN/{ataddress}/{tradePrivateKey}/{secret}/{receivingAddress} API
This can currently be used by either the buyer or the seller, but it requires the seller's trade private key & receiving address to be specified, along with the buyer's secret. Currently hardcoded to LITECOIN but I will aim to make this generic as we start adding more coins.
2021-05-22 09:51:57 +01:00
CalDescent
5148bad82e /crosschain/htlc APIs now take base58 encoded params instead of hex.
This makes them more compatible with the output of the /crosschain/tradebot and /crosschain/trade/{ataddress} APIs, which are likely where most people will be retrieving data from, rather than the database itself.
2021-05-20 09:20:14 +01:00
CalDescent
518f02472f Added POST /crosschain/LitecoinACCTv1/redeemmessage API
This is similar to the BTC equivalent, but removes secretB as an input parameter. It also signs and broadcasts the transaction, because the wallet isn't needed for this. These transactions have to be signed using the tradePrivateKey from the tradebot data rather than any of the wallet's keys.

There are two other LitecoinACCTv1 APIs still to implement, but I will leave these until they are needed.
2021-05-20 07:59:19 +01:00
CalDescent
ee5a132eb2 Updated AdvancedInstaller project for v1.5.2 2021-05-17 20:31:28 +01:00
CalDescent
654dc5bff3 Bump version to 1.5.2 2021-05-17 17:02:38 +01:00
CalDescent
13dcf7f72a Added/updated some comments relating to a possible future optimization. 2021-05-16 11:03:11 +01:00
CalDescent
65c26f17df Reduced "Error while trying to find common block with peer" log from INFO to DEBUG when determining which peer to sync with. When performing the actual synchronization, use INFO logging as this is a more serious error. 2021-05-16 10:45:40 +01:00
CalDescent
3bedba71d5 Reduced frequency and level of some synchronizer logs. 2021-05-16 10:36:41 +01:00
CalDescent
1ba64d9745 Bumped bitcoinj version from 0.15.6 to 0.15.10 2021-05-16 10:00:28 +01:00
CalDescent
84bf570243 Added optional "maxtrades" parameter to /crosschain/price/{blockchain} API
This specifies the maximum number of trades to be used when calculating the price. Default: 10
2021-05-16 09:51:11 +01:00
CalDescent
28d50bccf9 Exclude peers if we don't have a complete set of their block summaries.
This tightens up the decision making by adding two requirements:

1. The peer must return the same number of summaries to the ones requested.
2. The peer must return a summary that matches its latest reported signature.

This ensures we are always making sync decisions based on accurate data, and removes peers that are currently mid re-org. This is probably more validation than is actually necessary, but it's best to be really thorough here so it is as optimized as possible.
2021-05-16 09:15:37 +01:00
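A minimal sketch of the two requirements listed above, with hypothetical parameter names; the real checks live in the synchronizer's peer-comparison code:

import java.util.Arrays;
import java.util.List;

public class PeerSummaryValidationSketch {
    /** Returns true only if the peer supplied a complete, self-consistent set of block summaries. */
    static boolean hasCompleteSummaries(List<byte[]> summarySignatures, int requestedCount, byte[] peersLatestBlockSignature) {
        // 1. The peer must return the same number of summaries as were requested
        if (summarySignatures.size() != requestedCount)
            return false;

        // 2. One of the summaries must match the peer's latest reported signature,
        //    otherwise the peer is probably mid re-org and should be excluded this round
        return summarySignatures.stream().anyMatch(sig -> Arrays.equals(sig, peersLatestBlockSignature));
    }
}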
CalDescent
66711c2e9d Require a complete sync in syncToPeerChain()
We have gone backwards and forwards on this one a lot recently, but now that stability has returned, it is best to tighten this up. Previously it was loosened to help reduce network load, but that is no longer a problem. With this stricter approach, it should prevent a node ending up in an incomplete state after syncing, which is the main cause of the shorter re-orgs we are seeing.
2021-05-16 08:45:23 +01:00
CalDescent
92d8c37d7d Added AT count to block debug logs. 2021-05-15 12:54:46 +01:00
CalDescent
5824f75669 Rework of the repository export and import functions.
The existing HSQL export/import (PERFORM EXPORT SCRIPT and PERFORM IMPORT SCRIPT) have been replaced with a custom JSON import and export. Whilst this is less generic, it has some significant advantages:

- When exporting data, it is now able to combine the exported data with any data that already exists in the backup file. This prevents a backup after a bootstrap from overwriting data from before the bootstrap, and removes the need for all of the "archive" files that we currently create.
- Adds support for partial imports, and updates. Previously an import would fail if any of the data being imported already existed in the db. It will now add new rows and update existing ones.
- The format and contents of the exported trade bot data now matches the output of the /crosschain/tradebot API.
- Data is retrieved without the need for a database lock, and therefore the export process is much faster and less invasive. This should prevent the lockups and other problems seen when using the trade portal.

For now, there are a couple of trade-offs to using this new approach:
- The minting key import/export has been temporarily removed until there is more time to transition it to this new format.
- Existing .script backups can no longer be imported using versions higher than 1.5.1.

Both of these can be solved by temporarily running version 1.5.1, performing the necessary imports/exports, then returning to the latest version. Longer term the minting keys export/import will be reimplemented using the JSON format.
2021-05-15 12:19:15 +01:00
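A minimal sketch of the "combine with existing backup" behaviour described above, using the org.json dependency added in the next commit; the key field and method names are assumptions, not the actual repository code:

import org.json.JSONArray;
import org.json.JSONObject;

public class BackupMergeSketch {
    /**
     * Merges freshly exported entries into an existing backup array, updating entries that
     * share the same key and adding new ones, so older data is never overwritten or lost.
     */
    static JSONArray merge(JSONArray existingBackup, JSONArray currentExport, String keyField) {
        JSONArray combined = new JSONArray();
        // Start from everything already in the backup file
        for (int i = 0; i < existingBackup.length(); i++)
            combined.put(existingBackup.getJSONObject(i));

        for (int i = 0; i < currentExport.length(); i++) {
            JSONObject exported = currentExport.getJSONObject(i);
            boolean replaced = false;
            for (int j = 0; j < combined.length(); j++) {
                if (combined.getJSONObject(j).optString(keyField).equals(exported.optString(keyField))) {
                    combined.put(j, exported); // update existing row
                    replaced = true;
                    break;
                }
            }
            if (!replaced)
                combined.put(exported); // add new row
        }
        return combined;
    }
}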
CalDescent
deb8adafc9 Added org.json dependency.
The com.googlecode.json-simple dependency we use in other parts of the project isn't ideal for some of the more complex parsing.
2021-05-15 09:15:29 +01:00
CalDescent
d2649b237c Moved chain weight calculation log from DEBUG to TRACE. 2021-05-11 19:01:23 +01:00
CalDescent
6532c258f6 Reduced log spam. 2021-05-10 09:10:14 +01:00
CalDescent
83e2b10904 Merge branch 'ignore-old-versions' 2021-05-10 09:01:04 +01:00
CalDescent
26c1793d85 Added "allowConnectionsWithOlderPeerVersions" setting (default: true)
This controls whether to allow connections with peers below minPeerVersion.

If true, we won't sync with them but they can still sync with us, and will show in the peers list. This is the default, which allows older nodes to continue functioning, but prevents them from interfering with the sync behaviour of updated nodes.

If false, sync will be blocked both ways, and they will not appear in the peers list at all.
2021-05-10 09:00:42 +01:00
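A minimal sketch of how the setting could gate connections, with hypothetical method and parameter names:

public class PeerVersionGateSketch {
    /** Decides whether a connection with an older peer should be dropped outright. */
    static boolean shouldDisconnect(long peerVersion, long minPeerVersion, boolean allowConnectionsWithOlderPeerVersions) {
        // Peers at or above minPeerVersion are always acceptable
        if (peerVersion >= minPeerVersion)
            return false;
        // Older peers stay connected (but are never chosen for syncing) unless the setting is false
        return !allowConnectionsWithOlderPeerVersions;
    }
}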
CalDescent
23a9eea26b Merge branch 'ignore-old-versions' 2021-05-09 23:02:35 +01:00
CalDescent
af9b536dd9 Moved version check above getMinBlockchainPeers() check, so that nodes with old versions aren't counted. 2021-05-09 23:00:51 +01:00
CalDescent
e4874f86f9 Merge branch 'block-timings' of github.com:Qortal/qortal into block-timings
# Conflicts:
#	src/main/java/org/qortal/api/model/BlockMintingInfo.java
#	src/main/java/org/qortal/api/resource/BlocksResource.java
#	tools/block-timings.sh
2021-05-09 19:25:33 +01:00
CalDescent
e300a957e4 Added online accounts count to /blocks/byheight/{height}/mintinginfo API and block-timings.sh script. 2021-05-09 19:25:05 +01:00
CalDescent
1c38afcd25 Slight reordering of vars. 2021-05-09 19:24:25 +01:00
CalDescent
a06faa7685 Updated usage info to reflect the fact that the "count" parameter is optional.
Usage:

block-timings.sh <startheight> [count] [target] [deviation] [power]
2021-05-09 19:24:25 +01:00
CalDescent
019ab2b21d Added tools/block-timings.sh which can be used to test out new block timings (specified in blockchain.json).
The script will fetch a set of blocks and then backtest the specified blockTimings settings (target, deviation, and power) against those real life blocks. This allows configurations to be fine tuned to tighten up block times, and to adjust the timestamp variance between levels.

Usage:
block-timings.sh <startheight> <count> [target] [deviation] [power]

startheight: a block height, preferably within the untrimmed range, to avoid data gaps
count: the number of blocks to request and analyse after the start height. Default: 100
target: the target block time in milliseconds. Originates from blockchain.json. Default: 60000
deviation: the allowed block time deviation in milliseconds. Originates from blockchain.json. Default: 30000
power: used when transforming key distance to a time offset. Originates from blockchain.json. Default: 0.2
2021-05-09 19:24:25 +01:00
CalDescent
f6ba5f5d51 Added /blocks/byheight/{height}/mintinginfo API, which returns info on the minter level, key distance, and block timings. 2021-05-09 19:24:25 +01:00
CalDescent
c4cbb64643 Added "minPeerVersion" setting, and avoid syncing with peers on lower versions. 2021-05-09 17:38:07 +01:00
CalDescent
8260cec713 Added "maximumCount" parameter to HSQLDBATRepository.getMatchingFinalATStatesQuorum() and use it to limit the number of ATs being returned in the query.
Initially set to 10 when used by the /crosschain/price/{blockchain} API, so that the price is based on the last 10 trades rather than every trade that has ever taken place.
2021-05-09 15:56:15 +01:00
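One plausible aggregation over the capped trade list, shown only as a sketch; the actual calculation behind /crosschain/price/{blockchain} is not spelled out here, and the field layout below is hypothetical:

import java.util.List;

public class PriceAggregationSketch {
    /** Total foreign amount divided by total QORT amount over at most maximumCount recent trades. */
    static double price(List<long[]> recentTrades /* [qortAmount, foreignAmount] */, int maximumCount) {
        long totalQort = 0, totalForeign = 0;
        for (int i = 0; i < Math.min(maximumCount, recentTrades.size()); i++) {
            totalQort += recentTrades.get(i)[0];
            totalForeign += recentTrades.get(i)[1];
        }
        return totalQort == 0 ? 0 : (double) totalForeign / totalQort;
    }
}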
CalDescent
f4520e2752 Skip Block.logDebugInfo() altogether if the log level is more specific than DEBUG, to avoid wasting resources. 2021-05-09 09:00:53 +01:00
CalDescent
475802afbc Fixed divide by zero exception.
Block.calcKeyDistance() cannot be called on some trimmed blocks, because the minter level cannot be inferred in some cases. This generally hasn't been an issue, but the new Block.logDebugInfo() method is invoking it for all blocks. For now I am adding defensiveness to the debug method, but longer term we might want to add defensiveness to Block.calcKeyDistance() itself, if we ever encounter this issue again. I will leave it alone for now, to reduce risk.
2021-05-09 08:25:24 +01:00
Tom
a170668d9d Updated AdvancedInstaller project for v1.5.1 2021-05-07 09:58:15 +01:00
Tom
f8dac39076 Updated AdvancedInstaller project for v1.5.0
This includes updating AdoptOpenJDK to version 11.0.11.9, because 11.0.6.10 is no longer recommended or available in their archive. It also looks like I am using a newer version of AdvancedInstaller itself.
2021-05-07 09:40:38 +01:00
CalDescent
fe4ae61552 Added "maxRetries" setting.
This controls the maximum number of retry attempts if a peer fails to respond with the requested data.
2021-05-06 17:49:45 +01:00
CalDescent
0c3597f757 Bump version to 1.5.1 2021-05-05 18:41:05 +01:00
CalDescent
6109bdeafe Set go-live timestamp for same-length chain weight consensus: 1620579600000 2021-05-05 18:40:07 +01:00
CalDescent
6e9a61c4e5 Fixed logging issue where it would underreport the number of common blocks found when loading some from the cache. 2021-05-02 20:51:53 +01:00
CalDescent
8e244fd956 Fixed yet another bug with minChainLength. 2021-05-02 20:45:20 +01:00
CalDescent
2eb6771963 Adapted logging in comparePeers() to report correct values for both chain weight algorithms. 2021-05-02 20:26:51 +01:00
CalDescent
db77108054 Log the number of blocks used in Block.calcChainWeight()
This makes it easier to check that the new consensus code is being used, and that it is working correctly.
2021-05-02 19:59:32 +01:00
CalDescent
241e2bef85 Merge branch 'master' into chain-weight-consensus
# Conflicts:
#	src/main/java/org/qortal/block/BlockChain.java
#	src/main/resources/blockchain.json
#	src/test/resources/test-chain-v2-founder-rewards.json
#	src/test/resources/test-chain-v2-leftover-reward.json
#	src/test/resources/test-chain-v2-minting.json
#	src/test/resources/test-chain-v2-qora-holder-extremes.json
#	src/test/resources/test-chain-v2-qora-holder.json
#	src/test/resources/test-chain-v2-reward-scaling.json
#	src/test/resources/test-chain-v2.json
2021-05-02 18:18:20 +01:00
CalDescent
fac02dbc7d Fixed bug in maxHeight parameter passed to Block.calcChainWeight()
Like the others, this one is only relevant after switching to same-length chain weight comparisons.
2021-05-02 15:56:13 +01:00
CalDescent
9ebcd55ff5 Fixed calculation error in existing chain weight code, which would have caused the last block to be missed out of the comparison after switching to same-length chain comparisons. 2021-05-01 13:34:13 +01:00
CalDescent
50244c1c40 Fixed bug which would cause other peers to not be compared against each other, if we had no blocks ourselves.
Again, this wouldn't have affected anything in 1.5.0 or before, but it will become more significant if we switch to same-length chain weight comparisons.
2021-05-01 13:32:16 +01:00
CalDescent
b4395fdad1 Fixed bug which could cause minChainLength to report a higher value.
This wouldn't have affected anything in 1.5.0, but it will become more significant if we switch to same-length chain weight comparisons.
2021-05-01 10:57:24 +01:00
CalDescent
1da8994be7 Log the block timestamp, minter level, online accounts, key distance, and weight, when orphaning or processing.
This gives an insight into the contents of each chain when doing a re-org. To enable this logging, add the following to log4j2.properties:

logger.block.name = org.qortal.block.Block
logger.block.level = debug
2021-05-01 10:24:50 +01:00
QuickMythril
55ff1e2bb1 updated and tested BTC electrum servers (#36)
* updated electrum servers

mainnet list: https://1209k.com/bitcoin-eye/ele.php?chain=btc
testnet list: https://1209k.com/bitcoin-eye/ele.php?chain=tbtc

* removed servers

tested each mainnet server individually and removed those that did not respond
2021-05-01 09:18:46 +01:00
CalDescent
5fd8528c49 Small refactor for code readability, and added some defensiveness to avoid possible NPEs. 2021-04-29 09:04:59 +01:00
CalDescent
26d8ed783a Same as commit c0c5bf1, but for blocks as well as block summaries. 2021-04-29 08:55:16 +01:00
CalDescent
c0c5bf1591 Apply blocks in syncToPeerChain() if the latest received block is newer than our latest, and we started from an out of date chain.
This solves a common problem that is mostly seen when starting a node that has been switched off for some time, or when starting from a bootstrap. In these cases, it can be difficult to get synced to the latest block if you are starting from a small fork. This is because it required that the node was brought up to date via a single peer, and there wasn't much room for error if it failed to retrieve a block a couple of times. This generally caused the blocks to be thrown away and it would try the same process over and over.

The solution is to apply new blocks if the most recently received block is newer than our current latest block. This gets the node back on to the main fork where it can then sync using the regular applyNewBlocks() method.
2021-04-28 22:03:13 +01:00
CalDescent
c17a481b74 Bump version to 1.5.0 2021-04-26 18:34:01 +01:00
CalDescent
a9a0e69ec0 Set go-live block height for share bin fix: block 399000 2021-04-26 17:19:39 +01:00
CalDescent
ea1fed2fd3 Merge branch 'block-reward-distribution-fix' 2021-04-26 17:16:14 +01:00
CalDescent
b37f2c7d7f MAXIMUM_RETRIES set to 2, as 3 retries may have been slightly too many. 2021-04-26 17:08:21 +01:00
CalDescent
0c0c5ff077 Invalidate our block summaries cache for a peer if it fails to respond with signatures when synchronizing. 2021-04-25 12:50:40 +01:00
CalDescent
e12b99d17e Invalidate our common block cache for a peer if we can't find a common block when synchronizing. 2021-04-25 09:37:32 +01:00
CalDescent
d599146c3a Cache peer block summaries to avoid duplicate requests when comparing peers. 2021-04-24 22:10:40 +01:00
CalDescent
476731a2c3 In syncToPeerChain(), only apply a partial set of peer's blocks if they are recent.
If a peer fails to reply with all requested blocks, we will now only apply the blocks we have received so far if at least one of them is recent. This should prevent or greatly reduce the scenario where our chain is taken from a recent to an outdated state due to only partially syncing with a peer. It is best to keep our chain "recent" if possible, as this ensures that the peer selection code always runs, and therefore avoids unnecessarily syncing to a random peer on an inferior chain.
2021-04-24 20:12:11 +01:00
CalDescent
1e491dd8fb MAXIMUM_RETRIES increased from 1 to 3.
Now that we are spending a lot of time to carefully select a peer to sync with, it makes sense to retry a couple more times before giving up and starting the peer selection process all over again.
2021-04-24 19:45:53 +01:00
CalDescent
ba6397b963 Improved logging, to give a clearer picture of the peer selection decisions. 2021-04-24 19:23:09 +01:00
CalDescent
3146da6aec Don't add to the inferior chain signatures list when comparing peers against each other.
In these comparisons it's easy to incorrectly identify a bad chain, as we aren't comparing the same number of blocks. It's quite common for one peer to fail to return all blocks and be marked as an inferior chain, yet we have other "good" peers on that exact same chain. In those cases we would have stopped talking to the good peers again until they received another block.

Instead of complicating the logic and keeping track of the various good chain tip signatures, it is simpler to just remove the inferior peers from this round of syncing, and re-test them in the next round, in case they are in fact superior or equal.
2021-04-24 16:43:29 +01:00
CalDescent
5643e57ede Fixed string formatting error. 2021-04-24 16:21:04 +01:00
CalDescent
f532dbe7b4 Optimized code in Synchronizer.uniqueCommonBlocks() 2021-04-24 15:22:29 +01:00
CalDescent
ec2af62b4d Fix for bug which failed to remove peers without block summaries.
The iterator was removing the peer from the "peersSharingCommonBlock" array, when it should have been removing it from the "peers" array. The result was that the bad peer would end up in the final list of good peers, and we could then sync with it when we shouldn't have.
2021-04-24 15:21:30 +01:00
CalDescent
423142d730 Tidied up RECOVERY_MODE_TIMEOUT constant, and made checkRecoveryModeForPeers() private. 2021-04-24 10:35:01 +01:00
CalDescent
bdddb526da Added recovery mode, which is designed to automatically bring back a stalled network.
The existing system was unable to resume without manual intervention if it stalled for more than 7.5 minutes. After this time, no peers would have "recent" blocks, which are prerequisites for synchronization and minting.

This new code monitors for such a situation, and enters "recovery mode" if there are no peers with recent blocks for at least 10 minutes. It also requires that there is at least one connected peer, to reduce false positives due to bad network connectivity.

Once in recovery mode, peers with no recent blocks are added back into the pool of available peers to sync with, and restrictions on minting are lifted. This should allow for peers to collaborate to bring the chain back to a "recent" block height. Once we have a peer with a recent block, the node will exit recovery mode and sync as normal.

Previously, lifting minting restrictions could have increased the risk of extra forks, however it is much less risky now that nodes no longer mint multiple blocks in a row.

In all cases, minBlockchainPeers is used, so a minimum number of connected peers is required for syncing and minting in recovery mode, too.
2021-04-23 09:21:15 +01:00
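A minimal sketch of the entry condition described above, assuming the caller tracks how long it has been since any peer reported a recent block:

public class RecoveryModeSketch {
    private static final long RECOVERY_MODE_TIMEOUT = 10 * 60 * 1000L; // 10 minutes, in milliseconds

    /** Hypothetical check mirroring the conditions described in the commit above. */
    static boolean shouldEnterRecoveryMode(int connectedPeerCount, long millisSinceAnyPeerHadRecentBlock) {
        // Require at least one connected peer, to reduce false positives caused by bad connectivity,
        // then only enter recovery mode after 10 minutes without any peer reporting a recent block
        return connectedPeerCount > 0 && millisSinceAnyPeerHadRecentBlock >= RECOVERY_MODE_TIMEOUT;
    }
}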
CalDescent
dbf1ed40b3 Log the parent block's signature when minting a new block, to help us keep track of the chain it's being minted on. 2021-04-19 09:33:24 +01:00
CalDescent
02ace06526 Revert "When syncing to a peer on a different fork, ensure that all blocks are obtained before applying them."
This reverts commit c919797553.
2021-04-18 13:03:04 +01:00
CalDescent
2d2bfc0a4c Log the number of common blocks found in each search. 2021-04-18 13:02:38 +01:00
CalDescent
3c22a12cbb Experimental idea to prevent a single node signing more than one block in a row.
This could drastically reduce the number of forks being created. Currently, if a node is having problems syncing, it will continue adding to its own fork, which adds confusion to the network. With this new idea, the node would be prevented from adding to its own chain and is instead forced to wait until it has retrieved the next block from the network.

We will need to test this on the testnet very carefully. My worry is that, because all minters submit blocks, it could create a situation where the first block is submitted by everyone, and the second block is submitted by no-one, until a different candidate for the first block has been obtained from a peer. This may not be a problem at all, and could actually improve stability in a huge way, but at the same time it has the potential to introduce serious network problems if we are not careful.
2021-04-18 10:26:36 +01:00
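A minimal sketch of the guard described above, assuming the node can enumerate the public keys it mints with:

import java.util.Arrays;

public class SingleBlockRunSketch {
    /**
     * Hypothetical check: skip minting if the parent block was minted with one of our own keys,
     * forcing the node to wait for the next block to arrive from the network instead.
     */
    static boolean canMintOnTopOf(byte[] parentMinterPublicKey, Iterable<byte[]> ourMintingPublicKeys) {
        for (byte[] ourKey : ourMintingPublicKeys)
            if (Arrays.equals(ourKey, parentMinterPublicKey))
                return false; // we minted the previous block, so don't add another on top of it
        return true;
    }
}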
CalDescent
3071ef2f36 Removed redundant uiLocalServers 2021-04-17 20:55:30 +01:00
CalDescent
3022cb22d6 Merge branch 'master' into prioritize-peers 2021-04-17 20:51:35 +01:00
CalDescent
e9b4a3f6b3 Automatically backup trade bot data when starting a new trade (from either side). 2021-04-17 20:45:35 +01:00
CalDescent
4312ebfcc3 Adapted the HSQLDBRepository.exportNodeLocalData() method
It now has a new parameter - keepArchivedCopy - which when set to true will cause it to rename an existing TradeBotStates.script to TradeBotStates-archive-<timestamp>.script before creating a new backup. This should avoid keys being lost if a new backup is taken after replacing the db.

In a future version we can improve this in such a way that it combines existing and new backups into a single file. This is just a "quick fix" to increase the chances of keys being recoverable after accidentally bootstrapping without a backup.
2021-04-17 20:44:57 +01:00
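A minimal sketch of the keepArchivedCopy behaviour described above; the backup directory name is an assumption:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class BackupArchiveSketch {
    /** Archives any existing backup before a new one is written, so old keys aren't overwritten. */
    static void archiveExistingBackup(boolean keepArchivedCopy) throws IOException {
        Path existing = Paths.get("qortal-backup", "TradeBotStates.script"); // directory is hypothetical
        if (keepArchivedCopy && Files.exists(existing)) {
            Path archived = existing.resolveSibling("TradeBotStates-archive-" + System.currentTimeMillis() + ".script");
            Files.move(existing, archived);
        }
    }
}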
CalDescent
2c0e099d1c Removed wildcard import that was automatically introduced by Intellij. 2021-04-17 14:36:24 +01:00
CalDescent
b1eb02eb1d Merge pull request #33 from QuickMythril/version-on-tooltip
add version on tooltip
2021-04-17 13:21:20 +01:00
CalDescent
c919797553 When syncing to a peer on a different fork, ensure that all blocks are obtained before applying them.
In version 1.4.6, we would still sync with a peer even if we only received a partial number of the requested blocks/summaries. This could create a new problem, because the BlockMinter would often try and make up the difference by minting a new fork of up to 5 blocks in quick succession. This could have added to network confusion.

Longer term we may want to adjust the BlockMinter code to prevent this from taking place altogether, but in the short term I will revert this change from 1.4.6 until we have a better way.
2021-04-17 13:09:52 +01:00
CalDescent
08dacab05c Make sure to give up if we are requesting block summaries when the core needs to shut down. 2021-04-17 12:57:28 +01:00
CalDescent
2efc9218df Improved the process of selecting the next peer to sync with
Added a new step, which attempts to filter out peers that are on inferior chains, by comparing them against each other and our chain. The basic logic is as follows:

1. Take the list of peers that we'd previously have chosen from randomly.
2. Figure out our common block with each of those peers (if it's within 240 blocks), using cached data if possible.
3. Remove peers with no common block.
4. Find the earliest common block, and compare all peers with that common block against each other (and against our chain) using the chain weight method. This involves fetching (up to 200) summaries from each peer after the common block, and (up to 200) summaries from our own chain after the common block.
5. If our chain was superior, remove all peers with this common block, then move up to the next common block (in ascending order), and repeat from step 4.
6. If our chain was inferior, remove any peers with lower weights, then remove all peers with higher common blocks.
7. We end up with a reduced list of peers, that should in theory be on superior or equal chains to us. Pick one of those at random and sync to it.

This is a high risk feature - we don't yet know the impact on network load. Nor do we know whether it will cause issues due to prioritising longer chains, since the chain weight algorithm currently prefers them.
2021-04-17 12:52:19 +01:00
CalDescent
41505dae11 Treat two block summaries as equal if they have matching signatures 2021-04-16 09:40:22 +01:00
CalDescent
45efe7cd56 Slight reordering of vars. 2021-04-10 18:24:33 +01:00
CalDescent
78cac7f0e6 Updated usage info to reflect the fact that the "count" parameter is optional.
Usage:

block-timings.sh <startheight> [count] [target] [deviation] [power]
2021-04-10 18:12:09 +01:00
CalDescent
a1a1b8e94a Added tools/block-timings.sh which can be used to test out new block timings (specified in blockchain.json).
The script will fetch a set of blocks and then backtest the specified blockTimings settings (target, deviation, and power) against those real life blocks. This allows configurations to be fine tuned to tighten up block times, and to adjust the timestamp variance between levels.

Usage:
block-timings.sh <startheight> <count> [target] [deviation] [power]

startheight: a block height, preferably within the untrimmed range, to avoid data gaps
count: the number of blocks to request and analyse after the start height. Default: 100
target: the target block time in milliseconds. Originates from blockchain.json. Default: 60000
deviation: the allowed block time deviation in milliseconds. Originates from blockchain.json. Default: 30000
power: used when transforming key distance to a time offset. Originates from blockchain.json. Default: 0.2
2021-04-10 17:57:28 +01:00
CalDescent
641a658059 Added /blocks/byheight/{height}/mintinginfo API, which returns info on the minter level, key distance, and block timings. 2021-04-10 17:49:04 +01:00
CalDescent
44ec447014 Show an error in publish-auto-update.pl if both sha256sum and sha256 aren't found in PATH. 2021-04-01 08:27:56 +01:00
CalDescent
98308ecf98 Bump version to 1.4.6 2021-04-01 08:09:50 +01:00
CalDescent
8d613a6472 MAXIMUM_RETRIES reduced from 3 to 1 2021-03-30 13:07:34 +01:00
CalDescent
c3e5298ecd Added a few checks for Controller.isStopping() in synchronizer loops, to try and speed up the shutdown time. 2021-03-30 13:05:43 +01:00
CalDescent
e89d31eb5a Rewrite of Synchronizer.syncToPeerChain(), this time borrowing ideas from Synchronizer.applyNewBlocks().
Main differences / improvements:
- Only request a single batch of signatures upfront, instead of the entire peer's chain. There is no point in requesting them all, as the later ones may not be valid by the time we have finished requesting all the blocks before them.
- If we fail to fetch a block, clear any queued signatures that are in memory and re-fetch signatures after the last block received. This allows us to cope with peers that re-org whilst we are syncing with them.
- If we can't find any more block signatures, or the peer fails to respond to a block, apply our progress anyway. This should reduce wasted work and network congestion, and helps cope with larger peer re-orgs.
- The retry mechanism remains in place, but instead of fetching the same incorrect block over and over, it will attempt to locate a new block signature each time, as described above. To help reduce code complexity, block signature requests are no longer retried.
2021-03-30 12:29:27 +01:00
CalDescent
30160e2843 Fixes to allow publish-auto-update.sh to work with sha256sum versions that add trailing characters. 2021-03-21 18:15:29 +00:00
catbref
503d22e4d0 Updated Qortal.aip for WindowsInstaller for v1.4.5 2021-03-21 18:05:38 +00:00
CalDescent
b9a0d489d7 Bump version to 1.4.5 2021-03-21 17:06:10 +00:00
catbref
d9d4c4c302 Bump Peer response timeout from 2s to 3s 2021-03-21 16:17:40 +00:00
catbref
81c6d75d62 Adjust Synchronizer.MAXIMUM_BLOCK_STEP to 128, which means final summaries request will have enough to cover MAXIMUM_COMMON_DELTA (8+16+32+64+128 = 248, which is >240) 2021-03-21 16:12:41 +00:00
catbref
d1419bdfbd Minor comments, adjust max step size when searching for common block 2021-03-21 15:57:00 +00:00
CalDescent
8566d9b7e5 Merge branch 'master' into synchronization-improvements 2021-03-21 15:04:43 +00:00
catbref
b319d6db6b Rework BlockMessage caching with new pseudo outgoing-only message that only caches raw bytes 2021-03-21 14:14:15 +00:00
CalDescent
35fd1d8455 Base58 encode signatures in recently added logs. 2021-03-21 14:12:04 +00:00
CalDescent
be21771e49 Use SYNC_BATCH_SIZE instead of MAXIMUM_BLOCK_SIGNATURES_PER_REQUEST. 2021-03-21 13:58:42 +00:00
catbref
745528a9b1 Peer.sendMessage() should return false when it can't send because it can't build the message 2021-03-21 13:19:59 +00:00
CalDescent
f1422af95b Added retry mechanisms in Synchronizer.syncToPeerChain()
Until now, we required a perfect success rate when syncing with a peer via Synchronizer.syncToPeerChain(). Blocks were requested individually, but the node would give up and lose all progress if a single request failed. In practice, this happened very regularly, and it was difficult to succeed when there were a large number of blocks (e.g. 20+) that needed to be requested.

This commit adds two retry mechanisms, causing each of the two request types (block sigs and blocks) to retry 3 times before giving up, potentially avoiding a lot of wasted work. The number of retries is configurable in the MAXIMUM_RETRIES constant, which we could move to settings at some point if this feature proves useful.

The original issue seemed to result in a few side effects:

1. Nodes would spend a large amount of time requesting blocks from peers, only to throw it all away afterwards. This potentially added to network congestion, as nodes were using unnecessary network time to unproductively serve peers.

2. A large number of sync attempts were failing, particularly when a fork emerged with a significant number of divergent blocks (20+). This issue reduced the ability for nodes to sync to the correct chain while they still had time to do so. With every block that passed, it became more and more difficult to switch to the correct chain. Eventually, the correct chain would become TOO_DIVERGENT, at which point there is no way to automatically switch without manual intervention. I hope that this retry mechanism will increase the chances of nodes automatically moving onto the right chain quickly, avoiding the need for a user to intervene.

3. The POST /admin/forcesync API was unlikely to succeed when the peer's chain had started to diverge from the user's chain. This should increase the success rate.

Also included in this commit is a MAXIMUM_BLOCK_SIGNATURES_PER_REQUEST constant. This limits the number of block sigs requested in each batch (default 200). Without this, we are unable to increase MAXIMUM_COMMON_DELTA because it can try and request thousands of block sigs at once, which unsurprisingly doesn't succeed.
2021-03-21 09:41:36 +00:00
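A minimal sketch of the retry shape described above, with a hypothetical fetcher interface standing in for the real block and signature requests:

public class BlockRequestRetrySketch {
    private static final int MAXIMUM_RETRIES = 3;

    /** Hypothetical stand-in for a single request to the peer; returns null on failure. */
    interface BlockFetcher {
        byte[] fetch() throws InterruptedException;
    }

    static byte[] fetchWithRetries(BlockFetcher fetcher) throws InterruptedException {
        for (int attempt = 0; attempt <= MAXIMUM_RETRIES; attempt++) {
            byte[] block = fetcher.fetch();
            if (block != null)
                return block; // success - no need to retry
            // otherwise the peer failed to respond; try again, up to MAXIMUM_RETRIES extra attempts
        }
        return null; // give up and let the caller restart peer selection
    }
}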
CalDescent
f92f4dc1e2 Fixed some log entries in Controller.syncToPeerChain() which were incorrectly reporting our height instead of the height of block(s) being requested from the peer. Now reporting the height of the block (or block sigs) being retrieved, which should make it easier to interpret the logs. 2021-03-20 16:18:25 +00:00
catbref
019cfdc1db Minor comment re-org 2021-03-20 11:45:11 +00:00
CalDescent
e694a51cdd Fix for "numberSignaturesRequired" calculation error in Synchronizer.syncToPeerChain()
This bug often prevented the correct amount of block signatures (and blocks) from being requested from a peer, when trying to sync to it.

It could result in quite serious consequences, as it would trigger orphaning back to the common block without first requesting all of the necessary blocks from the peer's chain. Rather than applying a complete copy of the peer's chain, it could orphan back to the common block and then only apply a few blocks beyond that, leaving the node in an unexpected state, potentially hundreds of blocks behind the peer's current height, which it then has to try and obtain from other peers.

When there are forks present, this could result in it hopping from chain to chain, each time being unable to fully synchronise with the peer. Given that we currently discard our chain if it is deemed that our latest block isn't "recent", it is very important that nodes are brought up to the latest block when synchronising with a peer, to avoid constantly triggering discards.

The severity of this bug increased when there was a large disparity between the peer's latest block and the common block height, and prevented us from being able to increase MAXIMUM_COMMON_DELTA.
2021-03-20 10:33:23 +00:00
CalDescent
16453ed602 Added unit tests for level 3+4, 5+6, 7+8, and 9+10 rewards.
These are simpler than the level 1+2 tests; they only test that the rewards are correct for each level post-shareBinFix. I don't think we need multiple instances of the pre-shareBinFix or block orphaning tests. There are a few subtle differences between each test, such as the online status of Bob, in order to make the tests slightly more comprehensive.
2021-03-17 08:50:53 +00:00
CalDescent
fde68dc598 Added unit test to test level 1 and 2 rewards.
1. Assign 3 minters (one founder, one level 1, one level 2)
2. Mint a block after the shareBinFix, ensuring that level 1 and 2 are being rewarded evenly from the same share bin.
3. Orphan the block and ensure the rewards are reversed.
4. Orphan two more blocks, each time checking that the balances are being reduced in accordance with the pre-shareBinFix mapping.
2021-03-16 09:11:49 +00:00
QuickMythril
22e3140ff0 add version on tooltip
Adds the version number to the Qortal Core tooltip.

https://i.imgur.com/eLnLnQ5.png
2021-03-16 03:00:55 -04:00
catbref
4824c4198b Bump version to 1.4.4 2021-03-15 11:00:20 +00:00
catbref
ec7d4f4498 Changed "too busy" logging from debug to trace 2021-03-13 18:30:43 +00:00
catbref
d635de44a8 Added TODO in HSQLDBRepository about deadlock log spam 2021-03-13 18:29:31 +00:00
catbref
bce66bf57f Move HSQLDBRepositoryFactory.POOL_SIZE into Settings as "repositoryConnectionPoolSize" 2021-03-13 18:14:11 +00:00
catbref
0fc5153f9b Merge 'trade-bot-timeout-fix' into master 2021-03-13 17:13:40 +00:00
catbref
0398c2fae1 Try to avoid clogging up network threads by discarding incoming TRANSACTION messages if we're too busy
As importing a transaction requires blockchain lock, all the network threads
can be used up blocking for that lock, especially if Synchronizer is active.

So we simply discard incoming TRANSACTION messages if we can't immediately
obtain the blockchain lock. Some other peer will probably attempt to
send the transaction again soon anyway.

Plus we swap transaction lists after connection handshake.
2021-03-13 17:03:38 +00:00
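A minimal sketch of the "discard if too busy" behaviour described above, using ReentrantLock.tryLock() as the non-blocking check; the handler shape is hypothetical:

import java.util.concurrent.locks.ReentrantLock;

public class TransactionMessageSketch {
    /**
     * If the blockchain lock can't be taken immediately, drop the incoming TRANSACTION
     * message rather than tying up a network thread waiting for the lock.
     */
    static void onTransactionMessage(ReentrantLock blockchainLock, Runnable importTransaction) {
        if (!blockchainLock.tryLock())
            return; // too busy (e.g. Synchronizer holds the lock); a peer will re-send the transaction later
        try {
            importTransaction.run();
        } finally {
            blockchainLock.unlock();
        }
    }
}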
CalDescent
5fc495eb6a Fix for possible logic bug introduced in commit 33a8f31. 2021-03-12 22:05:38 +00:00
CalDescent
847e81e95c Fixed a mapping issue in Block->getShareBins(), to take effect at some future (undecided) height.
Post trigger, account levels will map correctly to share bins, subtracting 1 to account for the 0th element of the shareBinsByLevel array.
Pre-trigger, the legacy mapping will remain in effect.
2021-03-12 19:48:49 +00:00
CalDescent
7918622e2e Merge pull request #31 from sakumatto/master
Initial Italian translation by Pabs 2021
2021-03-11 11:06:03 +00:00
CalDescent
427fa1816d "blockCacheSize" can now be configured via settings.json. 2021-03-07 10:00:49 +00:00
catbref
0c7e388463 Bump to v1.4.3 2021-02-27 18:24:09 +00:00
catbref
be3af53011 Set new block sig go-live block height: block 320000 2021-02-27 18:23:49 +00:00
catbref
414399b2a0 Merge branch 'blocksig' into master 2021-02-27 18:20:13 +00:00
catbref
c592051a80 Speed up BlockMinter by filtering out 'unconfirmable' transaction types like CHAT & PRESENCE 2021-02-27 17:29:19 +00:00
catbref
33a8f311e5 Reduce logging noise from lost trade-bot ATs and self-clean if AT does not exist after 24 hours 2021-02-24 21:00:52 +00:00
catbref
018c3cdcd4 Allow users to delete trade-bot entries in any state if corresponding AT does not exist 2021-02-24 20:46:47 +00:00
sakumatto
384dffbf9a Initial Italian translation by Pabs 2021
UI localized to Italian by @Pabs
2021-02-22 20:03:11 +02:00
catbref
0306ecb03d AdvancedInstaller updates for v1.4.2 2021-02-21 17:26:32 +00:00
catbref
e5ce732557 More detail in AutoUpdates.md 2021-02-21 17:12:02 +00:00
catbref
91925cf931 Change block "minter" signature code, to take effect at some future (undecided) height.
Post trigger, this change will use all 128 bytes of previous block's signature when
calculating/validating next block's "minter" signature (itself the first 64 bytes of a block signature).

Prior to trigger, current behaviour is to only use first 64 bytes of previous block's
signature, which doesn't encompass transactions signature.

New block sig code should help reduce forking and help improve transactional
security.

Added "newBlockSigHeight" to blockchain.json but initially set to block 999999
pending decision on when to merge, auto-update, go-live, etc.
2021-02-20 12:08:51 +00:00
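A minimal sketch of the before/after difference described above, assuming 128-byte block signatures and a trigger-height parameter named after the blockchain.json setting; the helper itself is hypothetical:

import java.util.Arrays;

public class MinterSignatureDataSketch {
    /** Which slice of the parent block's signature feeds the next block's "minter" signature. */
    static byte[] parentSignatureBytesForMinterSig(byte[] parentBlockSignature, int height, int newBlockSigHeight) {
        if (height >= newBlockSigHeight)
            return parentBlockSignature; // all 128 bytes, so the transactions signature is covered too
        return Arrays.copyOf(parentBlockSignature, 64); // legacy behaviour: minter signature portion only
    }
}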
catbref
1e6e5e66da Fix trailing comma on blockchain.json! 2021-02-06 12:09:24 +00:00
catbref
9b0e88ca87 Only compare same number of blocks when comparing peer chains 2021-02-06 11:40:29 +00:00
catbref
3acc0babb7 More chain-weight tests 2021-02-06 11:19:39 +00:00
81 changed files with 6260 additions and 2651 deletions

.DS_Store (binary file; not shown)

.github/workflows/pr-testing.yml (new file, 33 lines)

@@ -0,0 +1,33 @@
name: PR testing

on:
  pull_request:
    branches: [ master ]

jobs:
  mavenTesting:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Cache local Maven repository
        uses: actions/cache@v2
        with:
          path: ~/.m2/repository
          key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
          restore-keys: |
            ${{ runner.os }}-maven-
      - name: Set up the Java JDK
        uses: actions/setup-java@v2
        with:
          java-version: '11'
          distribution: 'adopt'
      - name: Run all tests
        run: |
          mvn -B clean test -DskipTests=false --file pom.xml
          if [ -f "target/site/jacoco/index.html" ]; then echo "Total coverage: $(cat target/site/jacoco/index.html | grep -o 'Total[^%]*%' | grep -o '[0-9]*%')"; fi
      - name: Log coverage percentage
        run: |
          if [ ! -f "target/site/jacoco/index.html" ]; then echo "No coverage information available"; fi
          if [ -f "target/site/jacoco/index.html" ]; then echo "Total coverage: $(cat target/site/jacoco/index.html | grep -o 'Total[^%]*%' | grep -o '[0-9]*%')"; fi

.gitignore (7 lines changed)

@@ -18,4 +18,9 @@
/run-testnet.sh
/.idea
/qortal.iml
*.DS_Store
.DS_Store
/src/main/resources/resources
/src/main/resources/log*.properties
/*.jar
/run.pid
/run.log

View File

@@ -2,6 +2,7 @@
## TL;DR: how-to
* Prepare new release version (see way below for details)
* Assuming you are in git 'master' branch, at HEAD
* Shutdown local node if running
* Build auto-update download: `tools/build-auto-update.sh` - uploads auto-update file into new git branch
@@ -59,4 +60,12 @@ $ java -cp qortal.jar org.qortal.XorUpdate
usage: XorUpdate <input-file> <output-file>
$ java -cp qortal.jar org.qortal.XorUpdate qortal.jar qortal.update
$
```
```
## Preparing new release version
* Shutdown local node
* Modify `pom.xml` and increase version inside `<version>` tag
* Commit new `pom.xml` and push to github, e.g. `git commit -m 'Bumped to v1.4.2' -- pom.xml; git push`
* Tag this new commit with same version: `git tag v1.4.2`
* Push tag up to github: `git push origin v1.4.2`

File diff suppressed because it is too large

View File

@@ -3,12 +3,12 @@
 	<modelVersion>4.0.0</modelVersion>
 	<groupId>org.qortal</groupId>
 	<artifactId>qortal</artifactId>
-	<version>1.4.2</version>
+	<version>1.5.3</version>
 	<packaging>jar</packaging>
 	<properties>
 		<skipTests>true</skipTests>
 		<altcoinj.version>bf9fb80</altcoinj.version>
-		<bitcoinj.version>0.15.6</bitcoinj.version>
+		<bitcoinj.version>0.15.10</bitcoinj.version>
 		<bouncycastle.version>1.64</bouncycastle.version>
 		<build.timestamp>${maven.build.timestamp}</build.timestamp>
 		<ciyam-at.version>1.3.8</ciyam-at.version>
@@ -439,6 +439,11 @@
 			<artifactId>json-simple</artifactId>
 			<version>1.1.1</version>
 		</dependency>
+		<dependency>
+			<groupId>org.json</groupId>
+			<artifactId>json</artifactId>
+			<version>20210307</version>
+		</dependency>
 		<dependency>
 			<groupId>org.apache.commons</groupId>
 			<artifactId>commons-text</artifactId>

src/.DS_Store (binary file; not shown)

src/main/.DS_Store (binary file; not shown)

View File

@@ -2,7 +2,7 @@ package org.qortal.api;
 import javax.xml.bind.annotation.adapters.XmlAdapter;
-import org.bitcoinj.core.Base58;
+import org.qortal.utils.Base58;
 public class Base58TypeAdapter extends XmlAdapter<String, byte[]> {

View File

@@ -0,0 +1,23 @@
package org.qortal.api.model;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.math.BigDecimal;
import java.math.BigInteger;
@XmlAccessorType(XmlAccessType.FIELD)
public class BlockMintingInfo {
public byte[] minterPublicKey;
public int minterLevel;
public int onlineAccountsCount;
public BigDecimal maxDistance;
public BigInteger keyDistance;
public double keyDistanceRatio;
public long timestamp;
public long timeDelta;
public BlockMintingInfo() {
}
}

View File

@@ -1,61 +1,74 @@
package org.qortal.api.model;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import io.swagger.v3.oas.annotations.media.Schema;
import org.qortal.data.network.PeerChainTipData;
import org.qortal.data.network.PeerData;
import org.qortal.network.Handshake;
import org.qortal.network.Peer;
import io.swagger.v3.oas.annotations.media.Schema;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.util.UUID;
import java.util.concurrent.TimeUnit;
@XmlAccessorType(XmlAccessType.FIELD)
public class ConnectedPeer {
public enum Direction {
INBOUND,
OUTBOUND;
}
public Direction direction;
public Handshake handshakeStatus;
public Long lastPing;
public Long connectedWhen;
public Long peersConnectedWhen;
public enum Direction {
INBOUND,
OUTBOUND;
}
public String address;
public String version;
public Direction direction;
public Handshake handshakeStatus;
public Long lastPing;
public Long connectedWhen;
public Long peersConnectedWhen;
public String nodeId;
public String address;
public String version;
public Integer lastHeight;
@Schema(example = "base58")
public byte[] lastBlockSignature;
public Long lastBlockTimestamp;
public String nodeId;
protected ConnectedPeer() {
}
public Integer lastHeight;
@Schema(example = "base58")
public byte[] lastBlockSignature;
public Long lastBlockTimestamp;
public UUID connectionId;
public String age;
public ConnectedPeer(Peer peer) {
this.direction = peer.isOutbound() ? Direction.OUTBOUND : Direction.INBOUND;
this.handshakeStatus = peer.getHandshakeStatus();
this.lastPing = peer.getLastPing();
protected ConnectedPeer() {
}
PeerData peerData = peer.getPeerData();
this.connectedWhen = peer.getConnectionTimestamp();
this.peersConnectedWhen = peer.getPeersConnectionTimestamp();
public ConnectedPeer(Peer peer) {
this.direction = peer.isOutbound() ? Direction.OUTBOUND : Direction.INBOUND;
this.handshakeStatus = peer.getHandshakeStatus();
this.lastPing = peer.getLastPing();
this.address = peerData.getAddress().toString();
PeerData peerData = peer.getPeerData();
this.connectedWhen = peer.getConnectionTimestamp();
this.peersConnectedWhen = peer.getPeersConnectionTimestamp();
this.version = peer.getPeersVersionString();
this.nodeId = peer.getPeersNodeId();
this.address = peerData.getAddress().toString();
PeerChainTipData peerChainTipData = peer.getChainTipData();
if (peerChainTipData != null) {
this.lastHeight = peerChainTipData.getLastHeight();
this.lastBlockSignature = peerChainTipData.getLastBlockSignature();
this.lastBlockTimestamp = peerChainTipData.getLastBlockTimestamp();
}
}
this.version = peer.getPeersVersionString();
this.nodeId = peer.getPeersNodeId();
this.connectionId = peer.getPeerConnectionId();
if (peer.getConnectionEstablishedTime() > 0) {
long age = (System.currentTimeMillis() - peer.getConnectionEstablishedTime());
long minutes = TimeUnit.MILLISECONDS.toMinutes(age);
long seconds = TimeUnit.MILLISECONDS.toSeconds(age) - TimeUnit.MINUTES.toSeconds(minutes);
this.age = String.format("%dm %ds", minutes, seconds);
} else {
this.age = "connecting...";
}
PeerChainTipData peerChainTipData = peer.getChainTipData();
if (peerChainTipData != null) {
this.lastHeight = peerChainTipData.getLastHeight();
this.lastBlockSignature = peerChainTipData.getLastBlockSignature();
this.lastBlockTimestamp = peerChainTipData.getLastBlockTimestamp();
}
}
}

View File

@@ -0,0 +1,29 @@
package org.qortal.api.model;
import io.swagger.v3.oas.annotations.media.Schema;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
@XmlAccessorType(XmlAccessType.FIELD)
public class CrossChainDualSecretRequest {
@Schema(description = "Public key to match AT's trade 'partner'", example = "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry")
public byte[] partnerPublicKey;
@Schema(description = "Qortal AT address")
public String atAddress;
@Schema(description = "secret-A (32 bytes)", example = "FHMzten4he9jZ4HGb4297Utj6F5g2w7serjq2EnAg2s1")
public byte[] secretA;
@Schema(description = "secret-B (32 bytes)", example = "EN2Bgx3BcEMtxFCewmCVSMkfZjVKYhx3KEXC5A21KBGx")
public byte[] secretB;
@Schema(description = "Qortal address for receiving QORT from AT")
public String receivingAddress;
public CrossChainDualSecretRequest() {
}
}

View File

@@ -8,17 +8,14 @@ import io.swagger.v3.oas.annotations.media.Schema;
@XmlAccessorType(XmlAccessType.FIELD)
public class CrossChainSecretRequest {
@Schema(description = "Public key to match AT's trade 'partner'", example = "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry")
public byte[] partnerPublicKey;
@Schema(description = "Private key to match AT's trade 'partner'", example = "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry")
public byte[] partnerPrivateKey;
@Schema(description = "Qortal AT address")
public String atAddress;
@Schema(description = "secret-A (32 bytes)", example = "FHMzten4he9jZ4HGb4297Utj6F5g2w7serjq2EnAg2s1")
public byte[] secretA;
@Schema(description = "secret-B (32 bytes)", example = "EN2Bgx3BcEMtxFCewmCVSMkfZjVKYhx3KEXC5A21KBGx")
public byte[] secretB;
@Schema(description = "Secret (32 bytes)", example = "FHMzten4he9jZ4HGb4297Utj6F5g2w7serjq2EnAg2s1")
public byte[] secret;
@Schema(description = "Qortal address for receiving QORT from AT")
public String receivingAddress;

View File

@@ -542,19 +542,8 @@ public class AdminResource {
Security.checkApiCallAllowed(request);
try (final Repository repository = RepositoryManager.getRepository()) {
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
blockchainLock.lockInterruptibly();
try {
repository.exportNodeLocalData();
return "true";
} finally {
blockchainLock.unlock();
}
} catch (InterruptedException e) {
// We couldn't lock blockchain to perform export
return "false";
repository.exportNodeLocalData();
return "true";
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -564,7 +553,7 @@ public class AdminResource {
@Path("/repository/data")
@Operation(
summary = "Import data into repository.",
description = "Imports data from file on local machine. Filename is forced to 'import.script' if apiKey is not set.",
description = "Imports data from file on local machine. Filename is forced to 'import.json' if apiKey is not set.",
requestBody = @RequestBody(
required = true,
content = @Content(
@@ -588,7 +577,7 @@ public class AdminResource {
// Hard-coded because it's too dangerous to allow user-supplied filenames in weaker security contexts
if (Settings.getInstance().getApiKey() == null)
filename = "import.script";
filename = "import.json";
try (final Repository repository = RepositoryManager.getRepository()) {
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();

View File

@@ -1,5 +1,6 @@
package org.qortal.api.resource;
import com.google.common.primitives.Ints;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import io.swagger.v3.oas.annotations.media.ArraySchema;
@@ -8,6 +9,11 @@ import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.RoundingMode;
import java.util.ArrayList;
import java.util.List;
@@ -20,10 +26,13 @@ import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import org.qortal.account.Account;
import org.qortal.api.ApiError;
import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.model.BlockMintingInfo;
import org.qortal.api.model.BlockSignerSummary;
import org.qortal.block.Block;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.AccountData;
import org.qortal.data.block.BlockData;
@@ -32,6 +41,8 @@ import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformer;
import org.qortal.utils.Base58;
@Path("/blocks")
@@ -80,6 +91,48 @@ public class BlocksResource {
}
}
@GET
@Path("/signature/{signature}/data")
@Operation(
summary = "Fetch serialized, base58 encoded block data using base58 signature",
description = "Returns serialized data for the block that matches the given signature",
responses = {
@ApiResponse(
description = "the block data",
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "string"))
)
}
)
@ApiErrors({
ApiError.INVALID_SIGNATURE, ApiError.BLOCK_UNKNOWN, ApiError.INVALID_DATA, ApiError.REPOSITORY_ISSUE
})
public String getSerializedBlockData(@PathParam("signature") String signature58) {
// Decode signature
byte[] signature;
try {
signature = Base58.decode(signature58);
} catch (NumberFormatException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_SIGNATURE, e);
}
try (final Repository repository = RepositoryManager.getRepository()) {
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
Block block = new Block(repository, blockData);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bytes.write(Ints.toByteArray(block.getBlockData().getHeight()));
bytes.write(BlockTransformer.toBytes(block));
return Base58.encode(bytes.toByteArray());
} catch (TransformationException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA, e);
} catch (DataException | IOException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
@GET
@Path("/signature/{signature}/transactions")
@Operation(
@@ -328,6 +381,59 @@ public class BlocksResource {
}
}
@GET
@Path("/byheight/{height}/mintinginfo")
@Operation(
summary = "Fetch block minter info using block height",
description = "Returns the minter info for the block with given height",
responses = {
@ApiResponse(
description = "the block",
content = @Content(
schema = @Schema(
implementation = BlockData.class
)
)
)
}
)
@ApiErrors({
ApiError.BLOCK_UNKNOWN, ApiError.REPOSITORY_ISSUE
})
public BlockMintingInfo getBlockMintingInfoByHeight(@PathParam("height") int height) {
try (final Repository repository = RepositoryManager.getRepository()) {
BlockData blockData = repository.getBlockRepository().fromHeight(height);
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
Block block = new Block(repository, blockData);
BlockData parentBlockData = repository.getBlockRepository().fromSignature(blockData.getReference());
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, blockData.getMinterPublicKey());
if (minterLevel == 0)
// This may be unavailable when requesting a trimmed block
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA);
BigInteger distance = block.calcKeyDistance(parentBlockData.getHeight(), parentBlockData.getSignature(), blockData.getMinterPublicKey(), minterLevel);
double ratio = new BigDecimal(distance).divide(new BigDecimal(block.MAX_DISTANCE), 40, RoundingMode.DOWN).doubleValue();
long timestamp = block.calcTimestamp(parentBlockData, blockData.getMinterPublicKey(), minterLevel);
long timeDelta = timestamp - parentBlockData.getTimestamp();
BlockMintingInfo blockMintingInfo = new BlockMintingInfo();
blockMintingInfo.minterPublicKey = blockData.getMinterPublicKey();
blockMintingInfo.minterLevel = minterLevel;
blockMintingInfo.onlineAccountsCount = blockData.getOnlineAccountsCount();
blockMintingInfo.maxDistance = new BigDecimal(block.MAX_DISTANCE);
blockMintingInfo.keyDistance = distance;
blockMintingInfo.keyDistanceRatio = ratio;
blockMintingInfo.timestamp = timestamp;
blockMintingInfo.timeDelta = timeDelta;
return blockMintingInfo;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
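Editor's note: for clarity, a self-contained sketch (standard library only, placeholder numbers) of how keyDistanceRatio above is computed from keyDistance and Block.MAX_DISTANCE.
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.RoundingMode;

public class KeyDistanceRatioExample {
    public static void main(String[] args) {
        // MAX_DISTANCE is 32 bytes of 0xFF, i.e. 2^256 - 1
        BigInteger maxDistance = BigInteger.ONE.shiftLeft(256).subtract(BigInteger.ONE);
        // Placeholder key distance - real values come from Block.calcKeyDistance()
        BigInteger keyDistance = maxDistance.shiftRight(3);
        double ratio = new BigDecimal(keyDistance)
                .divide(new BigDecimal(maxDistance), 40, RoundingMode.DOWN)
                .doubleValue();
        System.out.println(ratio); // ~0.125; a smaller ratio means the minter's key is "closer" to the parent block
    }
}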
@GET
@Path("/timestamp/{timestamp}")
@Operation(

View File

@@ -22,7 +22,7 @@ import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.CrossChainBuildRequest;
-import org.qortal.api.model.CrossChainSecretRequest;
+import org.qortal.api.model.CrossChainDualSecretRequest;
import org.qortal.api.model.CrossChainTradeRequest;
import org.qortal.asset.Asset;
import org.qortal.crosschain.BitcoinACCTv1;
@@ -242,7 +242,7 @@ public class CrossChainBitcoinACCTv1Resource {
content = @Content(
mediaType = MediaType.APPLICATION_JSON,
schema = @Schema(
-implementation = CrossChainSecretRequest.class
+implementation = CrossChainDualSecretRequest.class
)
)
),
@@ -257,7 +257,7 @@ public class CrossChainBitcoinACCTv1Resource {
}
)
@ApiErrors({ApiError.INVALID_PUBLIC_KEY, ApiError.INVALID_ADDRESS, ApiError.INVALID_DATA, ApiError.INVALID_CRITERIA, ApiError.REPOSITORY_ISSUE})
-public String buildRedeemMessage(CrossChainSecretRequest secretRequest) {
+public String buildRedeemMessage(CrossChainDualSecretRequest secretRequest) {
Security.checkApiCallAllowed(request);
byte[] partnerPublicKey = secretRequest.partnerPublicKey;

View File

@@ -23,8 +23,8 @@ import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.crosschain.BitcoinSendRequest;
import org.qortal.crosschain.Bitcoin;
-import org.qortal.crosschain.BitcoinyTransaction;
import org.qortal.crosschain.ForeignBlockchainException;
+import org.qortal.crosschain.SimpleTransaction;
@Path("/crosschain/btc")
@Tag(name = "Cross-Chain (Bitcoin)")
@@ -89,12 +89,12 @@ public class CrossChainBitcoinResource {
),
responses = {
@ApiResponse(
-content = @Content(array = @ArraySchema( schema = @Schema( implementation = BitcoinyTransaction.class ) ) )
+content = @Content(array = @ArraySchema( schema = @Schema( implementation = SimpleTransaction.class ) ) )
)
}
)
@ApiErrors({ApiError.INVALID_PRIVATE_KEY, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE})
-public List<BitcoinyTransaction> getBitcoinWalletTransactions(String key58) {
+public List<SimpleTransaction> getBitcoinWalletTransactions(String key58) {
Security.checkApiCallAllowed(request);
Bitcoin bitcoin = Bitcoin.getInstance();

View File

@@ -16,24 +16,29 @@ import javax.ws.rs.PathParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import org.bitcoinj.core.TransactionOutput;
import org.qortal.api.ApiError;
import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.bitcoinj.core.*;
import org.bitcoinj.script.Script;
import org.qortal.api.*;
import org.qortal.api.model.CrossChainBitcoinyHTLCStatus;
import org.qortal.crosschain.Bitcoiny;
import org.qortal.crosschain.ForeignBlockchainException;
import org.qortal.crosschain.SupportedBlockchain;
import org.qortal.crosschain.BitcoinyHTLC;
import org.qortal.crosschain.*;
import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATData;
import org.qortal.data.crosschain.CrossChainTradeData;
import org.qortal.data.crosschain.TradeBotData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import com.google.common.hash.HashCode;
@Path("/crosschain/htlc")
@Tag(name = "Cross-Chain (Hash time-locked contracts)")
public class CrossChainHtlcResource {
private static final Logger LOGGER = LogManager.getLogger(CrossChainHtlcResource.class);
@Context
HttpServletRequest request;
@@ -41,7 +46,7 @@ public class CrossChainHtlcResource {
@Path("/address/{blockchain}/{refundPKH}/{locktime}/{redeemPKH}/{hashOfSecret}")
@Operation(
summary = "Returns HTLC address based on trade info",
description = "Blockchain can be BITCOIN or LITECOIN. Public key hashes (PKH) and hash of secret should be 20 bytes (hex). Locktime is seconds since epoch.",
description = "Blockchain can be BITCOIN or LITECOIN. Public key hashes (PKH) and hash of secret should be 20 bytes (base58 encoded). Locktime is seconds since epoch.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "string"))
@@ -50,21 +55,21 @@ public class CrossChainHtlcResource {
)
@ApiErrors({ApiError.INVALID_PUBLIC_KEY, ApiError.INVALID_CRITERIA})
public String deriveHtlcAddress(@PathParam("blockchain") String blockchainName,
@PathParam("refundPKH") String refundHex,
@PathParam("refundPKH") String refundPKH,
@PathParam("locktime") int lockTime,
@PathParam("redeemPKH") String redeemHex,
@PathParam("hashOfSecret") String hashOfSecretHex) {
@PathParam("redeemPKH") String redeemPKH,
@PathParam("hashOfSecret") String hashOfSecret) {
SupportedBlockchain blockchain = SupportedBlockchain.valueOf(blockchainName);
if (blockchain == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
byte[] refunderPubKeyHash;
byte[] redeemerPubKeyHash;
-byte[] hashOfSecret;
+byte[] decodedHashOfSecret;
try {
-refunderPubKeyHash = HashCode.fromString(refundHex).asBytes();
-redeemerPubKeyHash = HashCode.fromString(redeemHex).asBytes();
+refunderPubKeyHash = Base58.decode(refundPKH);
+redeemerPubKeyHash = Base58.decode(redeemPKH);
if (refunderPubKeyHash.length != 20 || redeemerPubKeyHash.length != 20)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PUBLIC_KEY);
@@ -73,14 +78,14 @@ public class CrossChainHtlcResource {
}
try {
-hashOfSecret = HashCode.fromString(hashOfSecretHex).asBytes();
-if (hashOfSecret.length != 20)
+decodedHashOfSecret = Base58.decode(hashOfSecret);
+if (decodedHashOfSecret.length != 20)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
} catch (IllegalArgumentException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
-byte[] redeemScript = BitcoinyHTLC.buildScript(refunderPubKeyHash, lockTime, redeemerPubKeyHash, hashOfSecret);
+byte[] redeemScript = BitcoinyHTLC.buildScript(refunderPubKeyHash, lockTime, redeemerPubKeyHash, decodedHashOfSecret);
Bitcoiny bitcoiny = (Bitcoiny) blockchain.getInstance();
@@ -91,7 +96,7 @@ public class CrossChainHtlcResource {
@Path("/status/{blockchain}/{refundPKH}/{locktime}/{redeemPKH}/{hashOfSecret}")
@Operation(
summary = "Checks HTLC status",
description = "Blockchain can be BITCOIN or LITECOIN. Public key hashes (PKH) and hash of secret should be 20 bytes (hex). Locktime is seconds since epoch.",
description = "Blockchain can be BITCOIN or LITECOIN. Public key hashes (PKH) and hash of secret should be 20 bytes (base58 encoded). Locktime is seconds since epoch.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.APPLICATION_JSON, schema = @Schema(implementation = CrossChainBitcoinyHTLCStatus.class))
@@ -100,10 +105,10 @@ public class CrossChainHtlcResource {
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN})
public CrossChainBitcoinyHTLCStatus checkHtlcStatus(@PathParam("blockchain") String blockchainName,
@PathParam("refundPKH") String refundHex,
@PathParam("refundPKH") String refundPKH,
@PathParam("locktime") int lockTime,
@PathParam("redeemPKH") String redeemHex,
@PathParam("hashOfSecret") String hashOfSecretHex) {
@PathParam("redeemPKH") String redeemPKH,
@PathParam("hashOfSecret") String hashOfSecret) {
Security.checkApiCallAllowed(request);
SupportedBlockchain blockchain = SupportedBlockchain.valueOf(blockchainName);
@@ -112,11 +117,11 @@ public class CrossChainHtlcResource {
byte[] refunderPubKeyHash;
byte[] redeemerPubKeyHash;
-byte[] hashOfSecret;
+byte[] decodedHashOfSecret;
try {
-refunderPubKeyHash = HashCode.fromString(refundHex).asBytes();
-redeemerPubKeyHash = HashCode.fromString(redeemHex).asBytes();
+refunderPubKeyHash = Base58.decode(refundPKH);
+redeemerPubKeyHash = Base58.decode(redeemPKH);
if (refunderPubKeyHash.length != 20 || redeemerPubKeyHash.length != 20)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PUBLIC_KEY);
@@ -125,14 +130,14 @@ public class CrossChainHtlcResource {
}
try {
-hashOfSecret = HashCode.fromString(hashOfSecretHex).asBytes();
-if (hashOfSecret.length != 20)
+decodedHashOfSecret = Base58.decode(hashOfSecret);
+if (decodedHashOfSecret.length != 20)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
} catch (IllegalArgumentException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
-byte[] redeemScript = BitcoinyHTLC.buildScript(refunderPubKeyHash, lockTime, redeemerPubKeyHash, hashOfSecret);
+byte[] redeemScript = BitcoinyHTLC.buildScript(refunderPubKeyHash, lockTime, redeemerPubKeyHash, decodedHashOfSecret);
Bitcoiny bitcoiny = (Bitcoiny) blockchain.getInstance();
@@ -168,8 +173,431 @@ public class CrossChainHtlcResource {
}
}
-// TODO: refund
@GET
@Path("/redeem/LITECOIN/{ataddress}/{tradePrivateKey}/{secret}/{receivingAddress}")
@Operation(
summary = "Redeems HTLC associated with supplied AT, using private key, secret, and receiving address",
description = "Secret and private key should be 32 bytes (base58 encoded). Receiving address must be a valid LTC P2PKH address.<br>" +
"The secret can be found in Alice's trade bot data or in the message to Bob's AT.<br>" +
"The trade private key and receiving address can be found in Bob's trade bot data.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN})
public boolean redeemHtlc(@PathParam("ataddress") String atAddress,
@PathParam("tradePrivateKey") String tradePrivateKey,
@PathParam("secret") String secret,
@PathParam("receivingAddress") String receivingAddress) {
Security.checkApiCallAllowed(request);
-// TODO: redeem
// base58 decode the trade private key
byte[] decodedTradePrivateKey = null;
if (tradePrivateKey != null)
decodedTradePrivateKey = Base58.decode(tradePrivateKey);
-}
// base58 decode the secret
byte[] decodedSecret = null;
if (secret != null)
decodedSecret = Base58.decode(secret);
// Convert supplied Litecoin receiving address into public key hash (we only support P2PKH at this time)
Address litecoinReceivingAddress;
try {
litecoinReceivingAddress = Address.fromString(Litecoin.getInstance().getNetworkParameters(), receivingAddress);
} catch (AddressFormatException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
if (litecoinReceivingAddress.getOutputScriptType() != Script.ScriptType.P2PKH)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
byte[] litecoinReceivingAccountInfo = litecoinReceivingAddress.getHash();
return this.doRedeemHtlc(atAddress, decodedTradePrivateKey, decodedSecret, litecoinReceivingAccountInfo);
}
@GET
@Path("/redeem/LITECOIN/{ataddress}")
@Operation(
summary = "Redeems HTLC associated with supplied AT",
description = "To be used by a QORT seller (Bob) who needs to redeem LTC proceeds that are stuck in a P2SH.<br>" +
"This requires Bob's trade bot data to be present in the database for this AT.<br>" +
"It will fail if the buyer has yet to redeem the QORT held in the AT.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN})
public boolean redeemHtlc(@PathParam("ataddress") String atAddress) {
Security.checkApiCallAllowed(request);
try (final Repository repository = RepositoryManager.getRepository()) {
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
ACCT acct = SupportedBlockchain.getAcctByCodeHash(atData.getCodeHash());
if (acct == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atData);
if (crossChainTradeData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Attempt to find secret from the buyer's message to AT
byte[] decodedSecret = LitecoinACCTv1.findSecretA(repository, crossChainTradeData);
if (decodedSecret == null) {
LOGGER.info(() -> String.format("Unable to find secret-A from redeem message to AT %s", atAddress));
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
List<TradeBotData> allTradeBotData = repository.getCrossChainRepository().getAllTradeBotData();
TradeBotData tradeBotData = allTradeBotData.stream().filter(tradeBotDataItem -> tradeBotDataItem.getAtAddress().equals(atAddress)).findFirst().orElse(null);
// Search for the tradePrivateKey in the tradebot data
byte[] decodedPrivateKey = null;
if (tradeBotData != null)
decodedPrivateKey = tradeBotData.getTradePrivateKey();
// Search for the litecoin receiving address in the tradebot data
byte[] litecoinReceivingAccountInfo = null;
if (tradeBotData != null)
// Use receiving address PKH from tradebot data
litecoinReceivingAccountInfo = tradeBotData.getReceivingAccountInfo();
return this.doRedeemHtlc(atAddress, decodedPrivateKey, decodedSecret, litecoinReceivingAccountInfo);
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
@GET
@Path("/redeemAll/LITECOIN")
@Operation(
summary = "Redeems HTLC for all applicable ATs in tradebot data",
description = "To be used by a QORT seller (Bob) who needs to redeem LTC proceeds that are stuck in P2SH transactions.<br>" +
"This requires Bob's trade bot data to be present in the database for any ATs that need redeeming.<br>" +
"Returns true if at least one trade is redeemed. More detail is available in the log.txt.* file.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN})
public boolean redeemAllHtlc() {
Security.checkApiCallAllowed(request);
boolean success = false;
try (final Repository repository = RepositoryManager.getRepository()) {
List<TradeBotData> allTradeBotData = repository.getCrossChainRepository().getAllTradeBotData();
for (TradeBotData tradeBotData : allTradeBotData) {
String atAddress = tradeBotData.getAtAddress();
if (atAddress == null) {
LOGGER.info("Missing AT address in tradebot data", atAddress);
continue;
}
String tradeState = tradeBotData.getState();
if (tradeState == null) {
LOGGER.info("Missing trade state for AT {}", atAddress);
continue;
}
if (tradeState.startsWith("ALICE")) {
LOGGER.info("AT {} isn't redeemable because it is a buy order", atAddress);
continue;
}
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null) {
LOGGER.info("Couldn't find AT with address {}", atAddress);
continue;
}
ACCT acct = SupportedBlockchain.getAcctByCodeHash(atData.getCodeHash());
if (acct == null) {
continue;
}
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atData);
if (crossChainTradeData == null) {
LOGGER.info("Couldn't find crosschain trade data for AT {}", atAddress);
continue;
}
// Attempt to find secret from the buyer's message to AT
byte[] decodedSecret = LitecoinACCTv1.findSecretA(repository, crossChainTradeData);
if (decodedSecret == null) {
LOGGER.info("Unable to find secret-A from redeem message to AT {}", atAddress);
continue;
}
// Search for the tradePrivateKey in the tradebot data
byte[] decodedPrivateKey = tradeBotData.getTradePrivateKey();
// Search for the litecoin receiving address PKH in the tradebot data
byte[] litecoinReceivingAccountInfo = tradeBotData.getReceivingAccountInfo();
try {
LOGGER.info("Attempting to redeem P2SH balance associated with AT {}...", atAddress);
boolean redeemed = this.doRedeemHtlc(atAddress, decodedPrivateKey, decodedSecret, litecoinReceivingAccountInfo);
if (redeemed) {
LOGGER.info("Redeemed P2SH balance associated with AT {}", atAddress);
success = true;
}
else {
LOGGER.info("Couldn't redeem P2SH balance associated with AT {}. Already redeemed?", atAddress);
}
} catch (ApiException e) {
LOGGER.info("Couldn't redeem P2SH balance associated with AT {}. Missing data?", atAddress);
}
}
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
return success;
}
private boolean doRedeemHtlc(String atAddress, byte[] decodedTradePrivateKey, byte[] decodedSecret, byte[] litecoinReceivingAccountInfo) {
try (final Repository repository = RepositoryManager.getRepository()) {
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
ACCT acct = SupportedBlockchain.getAcctByCodeHash(atData.getCodeHash());
if (acct == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atData);
if (crossChainTradeData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Validate trade private key
if (decodedTradePrivateKey == null || decodedTradePrivateKey.length != 32)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Validate secret
if (decodedSecret == null || decodedSecret.length != 32)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Validate receiving address
if (litecoinReceivingAccountInfo == null || litecoinReceivingAccountInfo.length != 20)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Make sure the receiving address isn't a QORT address, given that we can share the same field for both QORT and LTC
if (Crypto.isValidAddress(litecoinReceivingAccountInfo))
if (Base58.encode(litecoinReceivingAccountInfo).startsWith("Q"))
// This is likely a QORT address, not an LTC address
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Use secret-A to redeem P2SH-A
Litecoin litecoin = Litecoin.getInstance();
int lockTime = crossChainTradeData.lockTimeA;
byte[] redeemScriptA = BitcoinyHTLC.buildScript(crossChainTradeData.partnerForeignPKH, lockTime, crossChainTradeData.creatorForeignPKH, crossChainTradeData.hashOfSecretA);
String p2shAddressA = litecoin.deriveP2shAddress(redeemScriptA);
LOGGER.info(String.format("Redeeming P2SH address: %s", p2shAddressA));
// Fee for redeem/refund is subtracted from P2SH-A balance.
long feeTimestamp = calcFeeTimestamp(lockTime, crossChainTradeData.tradeTimeout);
long p2shFee = Litecoin.getInstance().getP2shFee(feeTimestamp);
long minimumAmountA = crossChainTradeData.expectedForeignAmount + p2shFee;
BitcoinyHTLC.Status htlcStatusA = BitcoinyHTLC.determineHtlcStatus(litecoin.getBlockchainProvider(), p2shAddressA, minimumAmountA);
switch (htlcStatusA) {
case UNFUNDED:
case FUNDING_IN_PROGRESS:
// P2SH-A suddenly not funded? Our best bet at this point is to hope for AT auto-refund
return false;
case REDEEM_IN_PROGRESS:
case REDEEMED:
// Double-check that we have redeemed P2SH-A...
return false;
case REFUND_IN_PROGRESS:
case REFUNDED:
// Wait for AT to auto-refund
return false;
case FUNDED: {
Coin redeemAmount = Coin.valueOf(crossChainTradeData.expectedForeignAmount);
ECKey redeemKey = ECKey.fromPrivate(decodedTradePrivateKey);
List<TransactionOutput> fundingOutputs = litecoin.getUnspentOutputs(p2shAddressA);
Transaction p2shRedeemTransaction = BitcoinyHTLC.buildRedeemTransaction(litecoin.getNetworkParameters(), redeemAmount, redeemKey,
fundingOutputs, redeemScriptA, decodedSecret, litecoinReceivingAccountInfo);
litecoin.broadcastTransaction(p2shRedeemTransaction);
return true; // TODO: validate?
}
}
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
} catch (ForeignBlockchainException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_BALANCE_ISSUE, e);
}
return false;
}
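Editor's note: a worked example (made-up litoshi values) of the minimum-balance check used above; the redeem/refund fee comes out of the P2SH balance, so the P2SH must hold at least the expected amount plus that fee before it is treated as fully funded.
public class HtlcMinimumAmountExample {
    public static void main(String[] args) {
        long expectedForeignAmount = 5_000_000L; // hypothetical trade amount, in litoshis
        long p2shFee = 10_000L;                  // hypothetical value returned by getP2shFee(feeTimestamp)
        long minimumAmountA = expectedForeignAmount + p2shFee;
        System.out.println(minimumAmountA);      // 5010000 - the threshold passed to determineHtlcStatus()
    }
}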
@GET
@Path("/refund/LITECOIN/{ataddress}")
@Operation(
summary = "Refunds HTLC associated with supplied AT",
description = "To be used by a QORT buyer (Alice) who needs to refund their LTC that is stuck in a P2SH.<br>" +
"This requires Alice's trade bot data to be present in the database for this AT.<br>" +
"It will fail if it's already redeemed by the seller, or if the lockTime (60 minutes) hasn't passed yet.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN})
public boolean refundHtlc(@PathParam("ataddress") String atAddress) {
Security.checkApiCallAllowed(request);
try (final Repository repository = RepositoryManager.getRepository()) {
List<TradeBotData> allTradeBotData = repository.getCrossChainRepository().getAllTradeBotData();
TradeBotData tradeBotData = allTradeBotData.stream().filter(tradeBotDataItem -> tradeBotDataItem.getAtAddress().equals(atAddress)).findFirst().orElse(null);
if (tradeBotData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
if (tradeBotData.getForeignKey() == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Determine LTC receive address for refund
Litecoin litecoin = Litecoin.getInstance();
String receiveAddress = litecoin.getUnusedReceiveAddress(tradeBotData.getForeignKey());
return this.doRefundHtlc(atAddress, receiveAddress);
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
} catch (ForeignBlockchainException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_BALANCE_ISSUE, e);
}
}
@GET
@Path("/refund/LITECOIN/{ataddress}/{receivingAddress}")
@Operation(
summary = "Refunds HTLC associated with supplied AT, to the specified LTC receiving address",
description = "To be used by a QORT buyer (Alice) who needs to refund their LTC that is stuck in a P2SH.<br>" +
"This requires Alice's trade bot data to be present in the database for this AT.<br>" +
"It will fail if it's already redeemed by the seller, or if the lockTime (60 minutes) hasn't passed yet.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN})
public boolean refundHtlc(@PathParam("ataddress") String atAddress,
@PathParam("receivingAddress") String receivingAddress) {
Security.checkApiCallAllowed(request);
return this.doRefundHtlc(atAddress, receivingAddress);
}
private boolean doRefundHtlc(String atAddress, String receiveAddress) {
try (final Repository repository = RepositoryManager.getRepository()) {
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
ACCT acct = SupportedBlockchain.getAcctByCodeHash(atData.getCodeHash());
if (acct == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atData);
if (crossChainTradeData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
List<TradeBotData> allTradeBotData = repository.getCrossChainRepository().getAllTradeBotData();
TradeBotData tradeBotData = allTradeBotData.stream().filter(tradeBotDataItem -> tradeBotDataItem.getAtAddress().equals(atAddress)).findFirst().orElse(null);
if (tradeBotData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
int lockTime = tradeBotData.getLockTimeA();
// We can't refund P2SH-A until lockTime-A has passed
if (NTP.getTime() <= lockTime * 1000L)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_TOO_SOON);
Litecoin litecoin = Litecoin.getInstance();
// We can't refund P2SH-A until median block time has passed lockTime-A (see BIP113)
int medianBlockTime = litecoin.getMedianBlockTime();
if (medianBlockTime <= lockTime)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_TOO_SOON);
byte[] redeemScriptA = BitcoinyHTLC.buildScript(tradeBotData.getTradeForeignPublicKeyHash(), lockTime, crossChainTradeData.creatorForeignPKH, tradeBotData.getHashOfSecret());
String p2shAddressA = litecoin.deriveP2shAddress(redeemScriptA);
LOGGER.info(String.format("Refunding P2SH address: %s", p2shAddressA));
// Fee for redeem/refund is subtracted from P2SH-A balance.
long feeTimestamp = calcFeeTimestamp(lockTime, crossChainTradeData.tradeTimeout);
long p2shFee = Litecoin.getInstance().getP2shFee(feeTimestamp);
long minimumAmountA = crossChainTradeData.expectedForeignAmount + p2shFee;
BitcoinyHTLC.Status htlcStatusA = BitcoinyHTLC.determineHtlcStatus(litecoin.getBlockchainProvider(), p2shAddressA, minimumAmountA);
switch (htlcStatusA) {
case UNFUNDED:
case FUNDING_IN_PROGRESS:
// Still waiting for P2SH-A to be funded...
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_TOO_SOON);
case REDEEM_IN_PROGRESS:
case REDEEMED:
case REFUND_IN_PROGRESS:
case REFUNDED:
// Too late!
return false;
case FUNDED: {
Coin refundAmount = Coin.valueOf(crossChainTradeData.expectedForeignAmount);
ECKey refundKey = ECKey.fromPrivate(tradeBotData.getTradePrivateKey());
List<TransactionOutput> fundingOutputs = litecoin.getUnspentOutputs(p2shAddressA);
// Validate the destination LTC address
Address receiving = Address.fromString(litecoin.getNetworkParameters(), receiveAddress);
if (receiving.getOutputScriptType() != Script.ScriptType.P2PKH)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
Transaction p2shRefundTransaction = BitcoinyHTLC.buildRefundTransaction(litecoin.getNetworkParameters(), refundAmount, refundKey,
fundingOutputs, redeemScriptA, lockTime, receiving.getHash());
litecoin.broadcastTransaction(p2shRefundTransaction);
return true; // TODO: validate?
}
}
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
} catch (ForeignBlockchainException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_BALANCE_ISSUE, e);
}
return false;
}
private long calcFeeTimestamp(int lockTimeA, int tradeTimeout) {
return (lockTimeA - tradeTimeout * 60) * 1000L;
}
}
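Editor's note: a worked example of calcFeeTimestamp() above, with placeholder inputs. lockTimeA is seconds since epoch, tradeTimeout is minutes, and the result is a millisecond timestamp one tradeTimeout before lockTime-A.
public class FeeTimestampExample {
    public static void main(String[] args) {
        int lockTimeA = 1_622_505_600; // hypothetical lockTime-A, in seconds since epoch
        int tradeTimeout = 60;         // hypothetical trade timeout, in minutes
        long feeTimestamp = (lockTimeA - tradeTimeout * 60) * 1000L;
        System.out.println(feeTimestamp); // 1622502000000 - one hour before lockTime-A, in milliseconds
    }
}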

View File

@@ -0,0 +1,145 @@
package org.qortal.api.resource;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.parameters.RequestBody;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.api.ApiError;
import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.CrossChainSecretRequest;
import org.qortal.crosschain.AcctMode;
import org.qortal.crosschain.LitecoinACCTv1;
import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATData;
import org.qortal.data.crosschain.CrossChainTradeData;
import org.qortal.group.Group;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.transaction.MessageTransaction;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.transform.TransformationException;
import org.qortal.transform.Transformer;
import org.qortal.transform.transaction.MessageTransactionTransformer;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import java.util.Arrays;
import java.util.Random;
@Path("/crosschain/LitecoinACCTv1")
@Tag(name = "Cross-Chain (LitecoinACCTv1)")
public class CrossChainLitecoinACCTv1Resource {
@Context
HttpServletRequest request;
@POST
@Path("/redeemmessage")
@Operation(
summary = "Signs and broadcasts a 'redeem' MESSAGE transaction that sends secrets to AT, releasing funds to partner",
description = "Specify address of cross-chain AT that needs to be messaged, Alice's trade private key, the 32-byte secret,<br>"
+ "and an address for receiving QORT from AT. All of these can be found in Alice's trade bot data.<br>"
+ "AT needs to be in 'trade' mode. Messages sent to an AT in any other mode will be ignored, but still cost fees to send!<br>"
+ "You need to use the private key that the AT considers the trade 'partner' otherwise the MESSAGE transaction will be invalid.",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.APPLICATION_JSON,
schema = @Schema(
implementation = CrossChainSecretRequest.class
)
)
),
responses = {
@ApiResponse(
content = @Content(
schema = @Schema(
type = "string"
)
)
)
}
)
@ApiErrors({ApiError.INVALID_PUBLIC_KEY, ApiError.INVALID_ADDRESS, ApiError.INVALID_DATA, ApiError.INVALID_CRITERIA, ApiError.REPOSITORY_ISSUE})
public boolean buildRedeemMessage(CrossChainSecretRequest secretRequest) {
Security.checkApiCallAllowed(request);
byte[] partnerPrivateKey = secretRequest.partnerPrivateKey;
if (partnerPrivateKey == null || partnerPrivateKey.length != Transformer.PRIVATE_KEY_LENGTH)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PRIVATE_KEY);
if (secretRequest.atAddress == null || !Crypto.isValidAtAddress(secretRequest.atAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
if (secretRequest.secret == null || secretRequest.secret.length != LitecoinACCTv1.SECRET_LENGTH)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA);
if (secretRequest.receivingAddress == null || !Crypto.isValidAddress(secretRequest.receivingAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
try (final Repository repository = RepositoryManager.getRepository()) {
ATData atData = fetchAtDataWithChecking(repository, secretRequest.atAddress);
CrossChainTradeData crossChainTradeData = LitecoinACCTv1.getInstance().populateTradeData(repository, atData);
if (crossChainTradeData.mode != AcctMode.TRADING)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
byte[] partnerPublicKey = new PrivateKeyAccount(null, partnerPrivateKey).getPublicKey();
String partnerAddress = Crypto.toAddress(partnerPublicKey);
// MESSAGE must come from address that AT considers trade partner
if (!crossChainTradeData.qortalPartnerAddress.equals(partnerAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
// Good to make MESSAGE
byte[] messageData = LitecoinACCTv1.buildRedeemMessage(secretRequest.secret, secretRequest.receivingAddress);
PrivateKeyAccount sender = new PrivateKeyAccount(repository, partnerPrivateKey);
MessageTransaction messageTransaction = MessageTransaction.build(repository, sender, Group.NO_GROUP, secretRequest.atAddress, messageData, false, false);
messageTransaction.computeNonce();
messageTransaction.sign(sender);
// reset repository state to prevent deadlock
repository.discardChanges();
ValidationResult result = messageTransaction.importAsUnconfirmed();
if (result != ValidationResult.OK)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.TRANSACTION_INVALID);
return true;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
private ATData fetchAtDataWithChecking(Repository repository, String atAddress) throws DataException {
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
// Must be correct AT - check functionality using code hash
if (!Arrays.equals(atData.getCodeHash(), LitecoinACCTv1.CODE_BYTES_HASH))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// No point sending message to AT that's finished
if (atData.getIsFinished())
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
return atData;
}
}

View File

@@ -22,9 +22,9 @@ import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.crosschain.LitecoinSendRequest;
-import org.qortal.crosschain.BitcoinyTransaction;
import org.qortal.crosschain.ForeignBlockchainException;
import org.qortal.crosschain.Litecoin;
+import org.qortal.crosschain.SimpleTransaction;
@Path("/crosschain/ltc")
@Tag(name = "Cross-Chain (Litecoin)")
@@ -89,12 +89,12 @@ public class CrossChainLitecoinResource {
),
responses = {
@ApiResponse(
-content = @Content(array = @ArraySchema( schema = @Schema( implementation = BitcoinyTransaction.class ) ) )
+content = @Content(array = @ArraySchema( schema = @Schema( implementation = SimpleTransaction.class ) ) )
)
}
)
@ApiErrors({ApiError.INVALID_PRIVATE_KEY, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE})
-public List<BitcoinyTransaction> getLitecoinWalletTransactions(String key58) {
+public List<SimpleTransaction> getLitecoinWalletTransactions(String key58) {
Security.checkApiCallAllowed(request);
Litecoin litecoin = Litecoin.getInstance();

View File

@@ -255,13 +255,19 @@ public class CrossChainResource {
description = "foreign blockchain",
example = "LITECOIN",
schema = @Schema(implementation = SupportedBlockchain.class)
) @PathParam("blockchain") SupportedBlockchain foreignBlockchain) {
) @PathParam("blockchain") SupportedBlockchain foreignBlockchain,
@Parameter(
description = "Maximum number of trades to include in price calculation",
example = "10",
schema = @Schema(type = "integer", defaultValue = "10")
) @QueryParam("maxtrades") Integer maxtrades) {
// foreignBlockchain is required
if (foreignBlockchain == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// We want both a minimum of 5 trades and enough trades to span at least 4 hours
int minimumCount = 5;
int maximumCount = maxtrades != null ? maxtrades : 10;
long minimumPeriod = 4 * 60 * 60 * 1000L; // ms
Boolean isFinished = Boolean.TRUE;
@@ -276,7 +282,7 @@ public class CrossChainResource {
ACCT acct = acctInfo.getValue().get();
List<ATStateData> atStates = repository.getATRepository().getMatchingFinalATStatesQuorum(codeHash,
-isFinished, acct.getModeByteOffset(), (long) AcctMode.REDEEMED.value, minimumCount, minimumPeriod);
+isFinished, acct.getModeByteOffset(), (long) AcctMode.REDEEMED.value, minimumCount, maximumCount, minimumPeriod);
for (ATStateData atState : atStates) {
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atState);

View File

@@ -321,7 +321,7 @@ public class PeersResource {
boolean force = true;
List<BlockSummaryData> peerBlockSummaries = new ArrayList<>();
-SynchronizationResult findCommonBlockResult = Synchronizer.getInstance().fetchSummariesFromCommonBlock(repository, targetPeer, ourInitialHeight, force, peerBlockSummaries);
+SynchronizationResult findCommonBlockResult = Synchronizer.getInstance().fetchSummariesFromCommonBlock(repository, targetPeer, ourInitialHeight, force, peerBlockSummaries, true);
if (findCommonBlockResult != SynchronizationResult.OK)
return null;

View File

@@ -176,19 +176,26 @@ public class Block {
*
* @return account-level share "bin" from blockchain config, or null if founder / none found
*/
-public AccountLevelShareBin getShareBin() {
+public AccountLevelShareBin getShareBin(int blockHeight) {
if (this.isMinterFounder)
return null;
final int accountLevel = this.mintingAccountData.getLevel();
if (accountLevel <= 0)
-return null;
+return null; // level 0 isn't included in any share bins
-final AccountLevelShareBin[] shareBinsByLevel = BlockChain.getInstance().getShareBinsByAccountLevel();
+final BlockChain blockChain = BlockChain.getInstance();
+final AccountLevelShareBin[] shareBinsByLevel = blockChain.getShareBinsByAccountLevel();
if (accountLevel > shareBinsByLevel.length)
return null;
-return shareBinsByLevel[accountLevel];
+if (blockHeight < blockChain.getShareBinFixHeight())
+// Off-by-one bug still in effect
+return shareBinsByLevel[accountLevel];
+// level 1 stored at index 0, level 2 stored at index 1, etc.
+return shareBinsByLevel[accountLevel-1];
}
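Editor's note: to illustrate the shareBinFix index change above, a sketch using a plain String array in place of the real AccountLevelShareBin[]; the bin labels and level layout are placeholders.
public class ShareBinIndexExample {
    public static void main(String[] args) {
        // One entry per account level; levels sharing a bin reference the same value
        String[] shareBinsByLevel = {
            "levels 1-2 bin", "levels 1-2 bin",
            "levels 3-4 bin", "levels 3-4 bin",
            "levels 5-6 bin", "levels 5-6 bin"
        };
        int accountLevel = 2;
        System.out.println(shareBinsByLevel[accountLevel]);     // pre-fix indexing: "levels 3-4 bin" (off by one)
        System.out.println(shareBinsByLevel[accountLevel - 1]); // post-fix indexing: "levels 1-2 bin"
    }
}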
public long distribute(long accountAmount, Map<String, Long> balanceChanges) {
@@ -225,7 +232,7 @@ public class Block {
// Other useful constants
-private static final BigInteger MAX_DISTANCE;
+public static final BigInteger MAX_DISTANCE;
static {
byte[] maxValue = new byte[Transformer.PUBLIC_KEY_LENGTH];
Arrays.fill(maxValue, (byte) 0xFF);
@@ -357,7 +364,7 @@ public class Block {
System.arraycopy(onlineAccountData.getSignature(), 0, onlineAccountsSignatures, i * Transformer.SIGNATURE_LENGTH, Transformer.SIGNATURE_LENGTH);
}
-byte[] minterSignature = minter.sign(BlockTransformer.getBytesForMinterSignature(parentBlockData.getMinterSignature(),
+byte[] minterSignature = minter.sign(BlockTransformer.getBytesForMinterSignature(parentBlockData,
minter.getPublicKey(), encodedOnlineAccounts));
// Qortal: minter is always a reward-share, so find actual minter and get their effective minting level
@@ -424,7 +431,7 @@ public class Block {
int version = this.blockData.getVersion();
byte[] reference = this.blockData.getReference();
-byte[] minterSignature = minter.sign(BlockTransformer.getBytesForMinterSignature(parentBlockData.getMinterSignature(),
+byte[] minterSignature = minter.sign(BlockTransformer.getBytesForMinterSignature(parentBlockData,
minter.getPublicKey(), this.blockData.getEncodedOnlineAccounts()));
// Qortal: minter is always a reward-share, so find actual minter and get their effective minting level
@@ -738,11 +745,7 @@ public class Block {
if (!(this.minter instanceof PrivateKeyAccount))
throw new IllegalStateException("Block's minter is not a PrivateKeyAccount - can't sign!");
-try {
-this.blockData.setMinterSignature(((PrivateKeyAccount) this.minter).sign(BlockTransformer.getBytesForMinterSignature(this.blockData)));
-} catch (TransformationException e) {
-throw new RuntimeException("Unable to calculate block's minter signature", e);
-}
+this.blockData.setMinterSignature(((PrivateKeyAccount) this.minter).sign(BlockTransformer.getBytesForMinterSignature(this.blockData)));
}
/**
@@ -793,7 +796,9 @@ public class Block {
NumberFormat formatter = new DecimalFormat("0.###E0");
boolean isLogging = LOGGER.getLevel().isLessSpecificThan(Level.TRACE);
int blockCount = 0;
for (BlockSummaryData blockSummaryData : blockSummaries) {
blockCount++;
StringBuilder stringBuilder = isLogging ? new StringBuilder(512) : null;
if (isLogging)
@@ -822,11 +827,11 @@ public class Block {
parentHeight = blockSummaryData.getHeight();
parentBlockSignature = blockSummaryData.getSignature();
-/* Potential future consensus change: only comparing the same number of blocks.
-if (parentHeight >= maxHeight)
+// After this timestamp, we only compare the same number of blocks
+if (NTP.getTime() >= BlockChain.getInstance().getCalcChainWeightTimestamp() && parentHeight >= maxHeight)
break;
-*/
}
LOGGER.trace(String.format("Chain weight calculation was based on %d blocks", blockCount));
return cumulativeWeight;
}
@@ -1332,6 +1337,9 @@ public class Block {
// Give Controller our cached, valid online accounts data (if any) to help reduce CPU load for next block
Controller.getInstance().pushLatestBlocksOnlineAccounts(this.cachedValidOnlineAccounts);
// Log some debugging info relating to the block weight calculation
this.logDebugInfo();
}
protected void increaseAccountLevels() throws DataException {
@@ -1513,6 +1521,9 @@ public class Block {
public void orphan() throws DataException {
LOGGER.trace(() -> String.format("Orphaning block %d", this.blockData.getHeight()));
// Log some debugging info relating to the block weight calculation
this.logDebugInfo();
// Return AT fees and delete AT states from repository
orphanAtFeesAndStates();
@@ -1787,7 +1798,7 @@ public class Block {
// Find all accounts in share bin. getShareBin() returns null for minter accounts that are also founders, so they are effectively filtered out.
AccountLevelShareBin accountLevelShareBin = accountLevelShareBins.get(binIndex);
// Object reference compare is OK as all references are read-only from blockchain config.
-List<ExpandedAccount> binnedAccounts = expandedAccounts.stream().filter(accountInfo -> accountInfo.getShareBin() == accountLevelShareBin).collect(Collectors.toList());
+List<ExpandedAccount> binnedAccounts = expandedAccounts.stream().filter(accountInfo -> accountInfo.getShareBin(this.blockData.getHeight()) == accountLevelShareBin).collect(Collectors.toList());
// No online accounts in this bin? Skip to next one
if (binnedAccounts.isEmpty())
@@ -1985,4 +1996,38 @@ public class Block {
this.repository.getAccountRepository().tidy();
}
private void logDebugInfo() {
try {
// Avoid calculations if possible. We have to check against INFO here, since Level.isMoreSpecificThan() confusingly uses <= rather than just <
if (LOGGER.getLevel().isMoreSpecificThan(Level.INFO))
return;
if (this.repository == null || this.getMinter() == null || this.getBlockData() == null)
return;
int minterLevel = Account.getRewardShareEffectiveMintingLevel(this.repository, this.getMinter().getPublicKey());
LOGGER.debug(String.format("======= BLOCK %d (%.8s) =======", this.getBlockData().getHeight(), Base58.encode(this.getSignature())));
LOGGER.debug(String.format("Timestamp: %d", this.getBlockData().getTimestamp()));
LOGGER.debug(String.format("Minter level: %d", minterLevel));
LOGGER.debug(String.format("Online accounts: %d", this.getBlockData().getOnlineAccountsCount()));
LOGGER.debug(String.format("AT count: %d", this.getBlockData().getATCount()));
BlockSummaryData blockSummaryData = new BlockSummaryData(this.getBlockData());
if (this.getParent() == null || this.getParent().getSignature() == null || blockSummaryData == null || minterLevel == 0)
return;
blockSummaryData.setMinterLevel(minterLevel);
BigInteger blockWeight = calcBlockWeight(this.getParent().getHeight(), this.getParent().getSignature(), blockSummaryData);
BigInteger keyDistance = calcKeyDistance(this.getParent().getHeight(), this.getParent().getSignature(), blockSummaryData.getMinterPublicKey(), blockSummaryData.getMinterLevel());
NumberFormat formatter = new DecimalFormat("0.###E0");
LOGGER.debug(String.format("Key distance: %s", formatter.format(keyDistance)));
LOGGER.debug(String.format("Weight: %s", formatter.format(blockWeight)));
} catch (DataException e) {
LOGGER.info(() -> String.format("Unable to log block debugging info: %s", e.getMessage()));
}
}
}

View File

@@ -70,7 +70,10 @@ public class BlockChain {
private GenesisBlock.GenesisInfo genesisInfo;
public enum FeatureTrigger {
-atFindNextTransactionFix;
+atFindNextTransactionFix,
+newBlockSigHeight,
+shareBinFix,
+calcChainWeightTimestamp;
}
/** Map of which blockchain features are enabled when (height/timestamp) */
@@ -376,6 +379,18 @@ public class BlockChain {
return this.featureTriggers.get(FeatureTrigger.atFindNextTransactionFix.name()).intValue();
}
public int getNewBlockSigHeight() {
return this.featureTriggers.get(FeatureTrigger.newBlockSigHeight.name()).intValue();
}
public int getShareBinFixHeight() {
return this.featureTriggers.get(FeatureTrigger.shareBinFix.name()).intValue();
}
public long getCalcChainWeightTimestamp() {
return this.featureTriggers.get(FeatureTrigger.calcChainWeightTimestamp.name()).longValue();
}
// More complex getters for aspects that change by height or timestamp
public long getRewardAtHeight(int ourHeight) {

View File

@@ -135,16 +135,19 @@ public class BlockMinter extends Thread {
// Disregard peers that have "misbehaved" recently
peers.removeIf(Controller.hasMisbehaved);
-// Disregard peers that don't have a recent block
-peers.removeIf(Controller.hasNoRecentBlock);
+// Disregard peers that don't have a recent block, but only if we're not in recovery mode.
+// In that mode, we want to allow minting on top of older blocks, to recover stalled networks.
+if (Controller.getInstance().getRecoveryMode() == false)
+peers.removeIf(Controller.hasNoRecentBlock);
// Don't mint if we don't have enough up-to-date peers as where would the transactions/consensus come from?
if (peers.size() < Settings.getInstance().getMinBlockchainPeers())
continue;
-// If our latest block isn't recent then we need to synchronize instead of minting.
+// If our latest block isn't recent then we need to synchronize instead of minting, unless we're in recovery mode.
if (!peers.isEmpty() && lastBlockData.getTimestamp() < minLatestBlockTimestamp)
-continue;
+if (Controller.getInstance().getRecoveryMode() == false)
+continue;
// There are enough peers with a recent block and our latest block is recent
// so go ahead and mint a block if possible.
@@ -165,6 +168,14 @@ public class BlockMinter extends Thread {
// Do we need to build any potential new blocks?
List<PrivateKeyAccount> newBlocksMintingAccounts = mintingAccountsData.stream().map(accountData -> new PrivateKeyAccount(repository, accountData.getPrivateKey())).collect(Collectors.toList());
// We might need to sit the next block out, if one of our minting accounts signed the previous one
final byte[] previousBlockMinter = previousBlockData.getMinterPublicKey();
final boolean mintedLastBlock = mintingAccountsData.stream().anyMatch(mintingAccount -> Arrays.equals(mintingAccount.getPublicKey(), previousBlockMinter));
if (mintedLastBlock) {
LOGGER.trace(String.format("One of our keys signed the last block, so we won't sign the next one"));
continue;
}
for (PrivateKeyAccount mintingAccount : newBlocksMintingAccounts) {
// First block does the AT heavy-lifting
if (newBlocks.isEmpty()) {
@@ -282,15 +293,17 @@ public class BlockMinter extends Thread {
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(newBlock.getBlockData().getMinterPublicKey());
if (rewardShareData != null) {
LOGGER.info(String.format("Minted block %d, sig %.8s by %s on behalf of %s",
LOGGER.info(String.format("Minted block %d, sig %.8s, parent sig: %.8s by %s on behalf of %s",
newBlock.getBlockData().getHeight(),
Base58.encode(newBlock.getBlockData().getSignature()),
Base58.encode(newBlock.getParent().getSignature()),
rewardShareData.getMinter(),
rewardShareData.getRecipient()));
} else {
LOGGER.info(String.format("Minted block %d, sig %.8s by %s",
LOGGER.info(String.format("Minted block %d, sig %.8s, parent sig: %.8s by %s",
newBlock.getBlockData().getHeight(),
Base58.encode(newBlock.getBlockData().getSignature()),
Base58.encode(newBlock.getParent().getSignature()),
newBlock.getMinter().getAddress()));
}

View File

@@ -67,8 +67,8 @@ import org.qortal.gui.SysTray;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.network.message.ArbitraryDataMessage;
-import org.qortal.network.message.BlockMessage;
import org.qortal.network.message.BlockSummariesMessage;
+import org.qortal.network.message.CachedBlockMessage;
import org.qortal.network.message.GetArbitraryDataMessage;
import org.qortal.network.message.GetBlockMessage;
import org.qortal.network.message.GetBlockSummariesMessage;
@@ -121,6 +121,7 @@ public class Controller extends Thread {
private static final long NTP_PRE_SYNC_CHECK_PERIOD = 5 * 1000L; // ms
private static final long NTP_POST_SYNC_CHECK_PERIOD = 5 * 60 * 1000L; // ms
private static final long DELETE_EXPIRED_INTERVAL = 5 * 60 * 1000L; // ms
private static final long RECOVERY_MODE_TIMEOUT = 10 * 60 * 1000L; // ms
// To do with online accounts list
private static final long ONLINE_ACCOUNTS_TASKS_INTERVAL = 10 * 1000L; // ms
@@ -143,16 +144,15 @@ public class Controller extends Thread {
private ExecutorService callbackExecutor = Executors.newFixedThreadPool(3);
private volatile boolean notifyGroupMembershipChange = false;
-private static final int BLOCK_CACHE_SIZE = 10; // To cover typical Synchronizer request + a few spare
/** Latest blocks on our chain. Note: tail/last is the latest block. */
private final Deque<BlockData> latestBlocks = new LinkedList<>();
/** Cache of BlockMessages, indexed by block signature */
@SuppressWarnings("serial")
-private final LinkedHashMap<ByteArray, BlockMessage> blockMessageCache = new LinkedHashMap<>() {
+private final LinkedHashMap<ByteArray, CachedBlockMessage> blockMessageCache = new LinkedHashMap<>() {
@Override
-protected boolean removeEldestEntry(Map.Entry<ByteArray, BlockMessage> eldest) {
-return this.size() > BLOCK_CACHE_SIZE;
+protected boolean removeEldestEntry(Map.Entry<ByteArray, CachedBlockMessage> eldest) {
+return this.size() > Settings.getInstance().getBlockCacheSize();
}
};
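Editor's note: the cache above relies on LinkedHashMap's removeEldestEntry hook; here is a generic, Qortal-independent illustration of that bounded-cache pattern.
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCacheExample {
    public static void main(String[] args) {
        final int cacheSize = 10; // stand-in for Settings.getInstance().getBlockCacheSize()
        Map<String, String> cache = new LinkedHashMap<>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return this.size() > cacheSize; // evict the oldest entry once over the limit
            }
        };
        for (int i = 0; i < 15; i++)
            cache.put("sig" + i, "block" + i);
        System.out.println(cache.size());              // 10
        System.out.println(cache.containsKey("sig0")); // false - oldest entries were evicted
    }
}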
@@ -176,6 +176,11 @@ public class Controller extends Thread {
/** Latest block signatures from other peers that we know are on inferior chains. */
List<ByteArray> inferiorChainSignatures = new ArrayList<>();
/** Recovery mode, which is used to bring back a stalled network */
private boolean recoveryMode = false;
private boolean peersAvailable = true; // peersAvailable must default to true
private long timePeersLastAvailable = 0;
/**
* Map of recent requests for ARBITRARY transaction data payloads.
* <p>
@@ -319,11 +324,12 @@ public class Controller extends Thread {
// Set initial chain height/tip
try (final Repository repository = RepositoryManager.getRepository()) {
BlockData blockData = repository.getBlockRepository().getLastBlock();
int blockCacheSize = Settings.getInstance().getBlockCacheSize();
synchronized (this.latestBlocks) {
this.latestBlocks.clear();
-for (int i = 0; i < BLOCK_CACHE_SIZE && blockData != null; ++i) {
+for (int i = 0; i < blockCacheSize && blockData != null; ++i) {
this.latestBlocks.addFirst(blockData);
blockData = repository.getBlockRepository().fromHeight(blockData.getHeight() - 1);
}
@@ -358,6 +364,10 @@ public class Controller extends Thread {
}
}
public boolean getRecoveryMode() {
return this.recoveryMode;
}
// Entry point
public static void main(String[] args) {
@@ -613,6 +623,11 @@ public class Controller extends Thread {
return peerChainTipData == null || peerChainTipData.getLastBlockSignature() == null || inferiorChainTips.contains(new ByteArray(peerChainTipData.getLastBlockSignature()));
};
public static final Predicate<Peer> hasOldVersion = peer -> {
final String minPeerVersion = Settings.getInstance().getMinPeerVersion();
return peer.isAtLeastVersion(minPeerVersion) == false;
};
private void potentiallySynchronize() throws InterruptedException {
// Already synchronizing via another thread?
if (this.isSynchronizing)
@@ -629,6 +644,17 @@ public class Controller extends Thread {
// Disregard peers that don't have a recent block
peers.removeIf(hasNoRecentBlock);
// Disregard peers that are on an old version
peers.removeIf(hasOldVersion);
checkRecoveryModeForPeers(peers);
if (recoveryMode) {
peers = Network.getInstance().getHandshakedPeers();
peers.removeIf(hasOnlyGenesisBlock);
peers.removeIf(hasMisbehaved);
peers.removeIf(hasOldVersion);
}
// Check we have enough peers to potentially synchronize
if (peers.size() < Settings.getInstance().getMinBlockchainPeers())
return;
@@ -639,9 +665,31 @@ public class Controller extends Thread {
// Disregard peers that are on the same block as last sync attempt and we didn't like their chain
peers.removeIf(hasInferiorChainTip);
final int peersBeforeComparison = peers.size();
// Request recent block summaries from the remaining peers, and locate our common block with each
Synchronizer.getInstance().findCommonBlocksWithPeers(peers);
// Compare the peers against each other, and against our chain, which will return an updated list excluding those without common blocks
peers = Synchronizer.getInstance().comparePeers(peers);
// We may have added more inferior chain tips when comparing peers, so remove any peers that are currently on those chains
peers.removeIf(hasInferiorChainTip);
final int peersRemoved = peersBeforeComparison - peers.size();
if (peersRemoved > 0 && peers.size() > 0)
LOGGER.info(String.format("Ignoring %d peers on inferior chains. Peers remaining: %d", peersRemoved, peers.size()));
if (peers.isEmpty())
return;
if (peers.size() > 1) {
StringBuilder finalPeersString = new StringBuilder();
for (Peer peer : peers)
finalPeersString = finalPeersString.length() > 0 ? finalPeersString.append(", ").append(peer) : finalPeersString.append(peer);
LOGGER.info(String.format("Choosing random peer from: [%s]", finalPeersString.toString()));
}
// Pick random peer to sync with
int index = new SecureRandom().nextInt(peers.size());
Peer peer = peers.get(index);
@@ -744,6 +792,46 @@ public class Controller extends Thread {
}
}
private boolean checkRecoveryModeForPeers(List<Peer> qualifiedPeers) {
List<Peer> handshakedPeers = Network.getInstance().getHandshakedPeers();
if (handshakedPeers.size() > 0) {
// There is at least one handshaked peer
if (qualifiedPeers.isEmpty()) {
// There are no 'qualified' peers - i.e. peers that have a recent block we can sync to
boolean werePeersAvailable = peersAvailable;
peersAvailable = false;
// If peers only just became unavailable, update our record of the time they were last available
if (werePeersAvailable)
timePeersLastAvailable = NTP.getTime();
// If enough time has passed, enter recovery mode, which lifts some restrictions on who we can sync with and when we can mint
if (NTP.getTime() - timePeersLastAvailable > RECOVERY_MODE_TIMEOUT) {
if (recoveryMode == false) {
LOGGER.info(String.format("Peers have been unavailable for %d minutes. Entering recovery mode...", RECOVERY_MODE_TIMEOUT/60/1000));
recoveryMode = true;
}
}
} else {
// We now have at least one peer with a recent block, so we can exit recovery mode and sync normally
peersAvailable = true;
if (recoveryMode) {
LOGGER.info("Peers have become available again. Exiting recovery mode...");
recoveryMode = false;
}
}
}
return recoveryMode;
}
public void addInferiorChainSignature(byte[] inferiorSignature) {
// Update our list of inferior chain tips
ByteArray inferiorChainSignature = new ByteArray(inferiorSignature);
if (!inferiorChainSignatures.contains(inferiorChainSignature))
inferiorChainSignatures.add(inferiorChainSignature);
}
public static class StatusChangeEvent implements Event {
public StatusChangeEvent() {
}
@@ -775,7 +863,7 @@ public class Controller extends Thread {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_DISABLED");
}
String tooltip = String.format("%s - %d %s - %s %d", actionText, numberOfPeers, connectionsText, heightText, height);
String tooltip = String.format("%s - %d %s - %s %d", actionText, numberOfPeers, connectionsText, heightText, height) + "\n" + String.format("Build version: %s", this.buildVersion);
SysTray.getInstance().setToolTipText(tooltip);
this.callbackExecutor.execute(() -> {
@@ -933,6 +1021,7 @@ public class Controller extends Thread {
public void onNewBlock(BlockData latestBlockData) {
// Protective copy
BlockData blockDataCopy = new BlockData(latestBlockData);
int blockCacheSize = Settings.getInstance().getBlockCacheSize();
synchronized (this.latestBlocks) {
BlockData cachedChainTip = this.latestBlocks.peekLast();
@@ -942,7 +1031,7 @@ public class Controller extends Thread {
this.latestBlocks.addLast(latestBlockData);
// Trim if necessary
if (this.latestBlocks.size() >= BLOCK_CACHE_SIZE)
if (this.latestBlocks.size() >= blockCacheSize)
this.latestBlocks.pollFirst();
} else {
if (cachedChainTip != null)
@@ -1150,14 +1239,15 @@ public class Controller extends Thread {
ByteArray signatureAsByteArray = new ByteArray(signature);
BlockMessage cachedBlockMessage = this.blockMessageCache.get(signatureAsByteArray);
CachedBlockMessage cachedBlockMessage = this.blockMessageCache.get(signatureAsByteArray);
int blockCacheSize = Settings.getInstance().getBlockCacheSize();
// Check cached latest block message
if (cachedBlockMessage != null) {
this.stats.getBlockMessageStats.cacheHits.incrementAndGet();
// We need to duplicate it to prevent multiple threads setting ID on the same message
BlockMessage clonedBlockMessage = cachedBlockMessage.cloneWithNewId(message.getId());
CachedBlockMessage clonedBlockMessage = cachedBlockMessage.cloneWithNewId(message.getId());
if (!peer.sendMessage(clonedBlockMessage))
peer.disconnect("failed to send block");
@@ -1185,15 +1275,18 @@ public class Controller extends Thread {
Block block = new Block(repository, blockData);
BlockMessage blockMessage = new BlockMessage(block);
CachedBlockMessage blockMessage = new CachedBlockMessage(block);
blockMessage.setId(message.getId());
// This call also causes the other needed data to be pulled in from repository
if (!peer.sendMessage(blockMessage))
if (!peer.sendMessage(blockMessage)) {
peer.disconnect("failed to send block");
// Don't fall-through to caching because failure to send might be from failure to build message
return;
}
// If request is for a recent block, cache it
if (getChainHeight() - blockData.getHeight() <= BLOCK_CACHE_SIZE) {
if (getChainHeight() - blockData.getHeight() <= blockCacheSize) {
this.stats.getBlockMessageStats.cacheFills.incrementAndGet();
this.blockMessageCache.put(new ByteArray(blockData.getSignature()), blockMessage);
@@ -1207,6 +1300,18 @@ public class Controller extends Thread {
TransactionMessage transactionMessage = (TransactionMessage) message;
TransactionData transactionData = transactionMessage.getTransactionData();
/*
* If we can't obtain blockchain lock immediately,
* e.g. Synchronizer is active, or another transaction is taking a while to validate,
* then we're using up a network thread for ages and clogging things up
* so bail out early
*/
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
if (!blockchainLock.tryLock()) {
LOGGER.trace(() -> String.format("Too busy to import %s transaction %s from peer %s", transactionData.getType().name(), Base58.encode(transactionData.getSignature()), peer));
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
Transaction transaction = Transaction.fromData(repository, transactionData);
@@ -1236,6 +1341,8 @@ public class Controller extends Thread {
LOGGER.debug(() -> String.format("Imported %s transaction %s from peer %s", transactionData.getType().name(), Base58.encode(transactionData.getSignature()), peer));
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while processing transaction %s from peer %s", Base58.encode(transactionData.getSignature()), peer), e);
} finally {
blockchainLock.unlock();
}
}


@@ -8,6 +8,7 @@ import java.util.Arrays;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;
import java.util.stream.Collectors;
import java.util.Iterator;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -15,8 +16,10 @@ import org.qortal.account.Account;
import org.qortal.account.PublicKeyAccount;
import org.qortal.block.Block;
import org.qortal.block.Block.ValidationResult;
import org.qortal.block.BlockChain;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.block.CommonBlockData;
import org.qortal.data.network.PeerChainTipData;
import org.qortal.data.transaction.RewardShareTransactionData;
import org.qortal.data.transaction.TransactionData;
@@ -32,17 +35,29 @@ import org.qortal.network.message.Message.MessageType;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.transaction.Transaction;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
public class Synchronizer {
private static final Logger LOGGER = LogManager.getLogger(Synchronizer.class);
/** Max number of new blocks we aim to add to chain tip in each sync round */
private static final int SYNC_BATCH_SIZE = 200; // XXX move to Settings?
/** Initial jump back of block height when searching for common block with peer */
private static final int INITIAL_BLOCK_STEP = 8;
private static final int MAXIMUM_BLOCK_STEP = 500;
/** Maximum jump back of block height when searching for common block with peer */
private static final int MAXIMUM_BLOCK_STEP = 128;
/** Maximum difference in block height between tip and peer's common block before peer is considered TOO DIVERGENT */
private static final int MAXIMUM_COMMON_DELTA = 240; // XXX move to Settings?
private static final int SYNC_BATCH_SIZE = 200;
/** Maximum number of block signatures we ask from peer in one go */
private static final int MAXIMUM_REQUEST_SIZE = 200; // XXX move to Settings?
private static Synchronizer instance;
@@ -62,6 +77,406 @@ public class Synchronizer {
return instance;
}
/**
* Iterate through a list of supplied peers, and attempt to find our common block with each.
* If a common block is found, its summary will be retained in the peer's commonBlockSummary property, for processing later.
* <p>
* Will return <tt>SynchronizationResult.OK</tt> on success.
* <p>
* @param peers
* @return SynchronizationResult.OK if the process completed successfully, or a different SynchronizationResult if something went wrong.
* @throws InterruptedException
*/
public SynchronizationResult findCommonBlocksWithPeers(List<Peer> peers) throws InterruptedException {
try (final Repository repository = RepositoryManager.getRepository()) {
try {
if (peers.size() == 0)
return SynchronizationResult.NOTHING_TO_DO;
// If our latest block is very old, it's best that we don't try and determine the best peers to sync to.
// This is because it can involve very large chain comparisons, which is too intensive.
// In reality, most forking problems occur near the chain tips, so we will reserve this functionality for those situations.
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null)
return SynchronizationResult.REPOSITORY_ISSUE;
final BlockData ourLatestBlockData = repository.getBlockRepository().getLastBlock();
if (ourLatestBlockData.getTimestamp() < minLatestBlockTimestamp) {
LOGGER.debug(String.format("Our latest block is very old, so we won't collect common block info from peers"));
return SynchronizationResult.NOTHING_TO_DO;
}
LOGGER.debug(String.format("Searching for common blocks with %d peers...", peers.size()));
final long startTime = System.currentTimeMillis();
int commonBlocksFound = 0;
boolean wereNewRequestsMade = false;
for (Peer peer : peers) {
// Are we shutting down?
if (Controller.isStopping())
return SynchronizationResult.SHUTTING_DOWN;
// Check if we can use the cached common block data, by comparing the peer's current chain tip against the peer's chain tip when we last found our common block
if (peer.canUseCachedCommonBlockData()) {
LOGGER.debug(String.format("Skipping peer %s because we already have the latest common block data in our cache. Cached common block sig is %.08s", peer, Base58.encode(peer.getCommonBlockData().getCommonBlockSummary().getSignature())));
commonBlocksFound++;
continue;
}
// Cached data is stale, so clear it and repopulate
peer.setCommonBlockData(null);
// Search for the common block
Synchronizer.getInstance().findCommonBlockWithPeer(peer, repository);
if (peer.getCommonBlockData() != null)
commonBlocksFound++;
// This round wasn't served entirely from the cache, so we may want to log the results
wereNewRequestsMade = true;
}
if (wereNewRequestsMade) {
final long totalTimeTaken = System.currentTimeMillis() - startTime;
LOGGER.info(String.format("Finished searching for common blocks with %d peer%s. Found: %d. Total time taken: %d ms", peers.size(), (peers.size() != 1 ? "s" : ""), commonBlocksFound, totalTimeTaken));
}
return SynchronizationResult.OK;
} finally {
repository.discardChanges(); // Free repository locks, if any, also in case anything went wrong
}
} catch (DataException e) {
LOGGER.error("Repository issue during synchronization with peer", e);
return SynchronizationResult.REPOSITORY_ISSUE;
}
}
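
The caching check above relies on Peer.canUseCachedCommonBlockData(), whose body is not shown in this diff. A minimal sketch of what the comment describes (comparing the cached chain tip against the peer's current chain tip); this is an illustration, not necessarily the project's actual implementation:

    // Illustrative sketch only, based on the comment above; assumes java.util.Arrays is imported.
    public boolean canUseCachedCommonBlockData() {
        PeerChainTipData currentTip = this.getChainTipData();
        CommonBlockData cached = this.getCommonBlockData();
        if (currentTip == null || cached == null || cached.getChainTipData() == null)
            return false;

        // The cache is usable only while the peer still reports the same chain tip
        // as when the common block was last found.
        byte[] cachedTipSignature = cached.getChainTipData().getLastBlockSignature();
        byte[] currentTipSignature = currentTip.getLastBlockSignature();
        return cachedTipSignature != null && currentTipSignature != null
                && Arrays.equals(cachedTipSignature, currentTipSignature);
    }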
/**
* Attempt to find our common block with the supplied peer.
* If a common block is found, its summary will be retained in the peer's commonBlockSummary property, for processing later.
* <p>
* Will return <tt>SynchronizationResult.OK</tt> on success.
* <p>
* @param peer
* @param repository
* @return SynchronizationResult.OK if the process completed successfully, or a different SynchronizationResult if something went wrong.
* @throws InterruptedException
*/
public SynchronizationResult findCommonBlockWithPeer(Peer peer, Repository repository) throws InterruptedException {
try {
final BlockData ourLatestBlockData = repository.getBlockRepository().getLastBlock();
final int ourInitialHeight = ourLatestBlockData.getHeight();
PeerChainTipData peerChainTipData = peer.getChainTipData();
int peerHeight = peerChainTipData.getLastHeight();
byte[] peersLastBlockSignature = peerChainTipData.getLastBlockSignature();
byte[] ourLastBlockSignature = ourLatestBlockData.getSignature();
LOGGER.debug(String.format("Fetching summaries from peer %s at height %d, sig %.8s, ts %d; our height %d, sig %.8s, ts %d", peer,
peerHeight, Base58.encode(peersLastBlockSignature), peer.getChainTipData().getLastBlockTimestamp(),
ourInitialHeight, Base58.encode(ourLastBlockSignature), ourLatestBlockData.getTimestamp()));
List<BlockSummaryData> peerBlockSummaries = new ArrayList<>();
SynchronizationResult findCommonBlockResult = fetchSummariesFromCommonBlock(repository, peer, ourInitialHeight, false, peerBlockSummaries, false);
if (findCommonBlockResult != SynchronizationResult.OK) {
// Logging performed by fetchSummariesFromCommonBlock() above
peer.setCommonBlockData(null);
return findCommonBlockResult;
}
// First summary is common block
final BlockData commonBlockData = repository.getBlockRepository().fromSignature(peerBlockSummaries.get(0).getSignature());
final BlockSummaryData commonBlockSummary = new BlockSummaryData(commonBlockData);
final int commonBlockHeight = commonBlockData.getHeight();
final byte[] commonBlockSig = commonBlockData.getSignature();
final String commonBlockSig58 = Base58.encode(commonBlockSig);
LOGGER.debug(String.format("Common block with peer %s is at height %d, sig %.8s, ts %d", peer,
commonBlockHeight, commonBlockSig58, commonBlockData.getTimestamp()));
peerBlockSummaries.remove(0);
// Store the common block summary against the peer, and the current chain tip (for caching)
peer.setCommonBlockData(new CommonBlockData(commonBlockSummary, peerChainTipData));
return SynchronizationResult.OK;
} catch (DataException e) {
LOGGER.error("Repository issue during synchronization with peer", e);
return SynchronizationResult.REPOSITORY_ISSUE;
}
}
/**
* Compare a list of peers to determine the best peer(s) to sync to next.
* <p>
* Will return a filtered list of peers on success, or an identical list of peers on failure.
* This allows us to fall back to legacy behaviour (random selection from the entire list of peers), if we are unable to make the comparison.
* <p>
* @param peers
* @return a list of peers, possibly filtered.
* @throws InterruptedException
*/
public List<Peer> comparePeers(List<Peer> peers) throws InterruptedException {
try (final Repository repository = RepositoryManager.getRepository()) {
try {
// If our latest block is very old, it's best that we don't try and determine the best peers to sync to.
// This is because it can involve very large chain comparisons, which is too intensive.
// In reality, most forking problems occur near the chain tips, so we will reserve this functionality for those situations.
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null)
return peers;
final BlockData ourLatestBlockData = repository.getBlockRepository().getLastBlock();
if (ourLatestBlockData.getTimestamp() < minLatestBlockTimestamp) {
LOGGER.debug(String.format("Our latest block is very old, so we won't filter the peers list"));
return peers;
}
// We will switch to a new chain weight consensus algorithm at a hard fork, so determine if this has happened yet
boolean usingSameLengthChainWeight = (NTP.getTime() >= BlockChain.getInstance().getCalcChainWeightTimestamp());
LOGGER.debug(String.format("Using %s chain weight consensus algorithm", (usingSameLengthChainWeight ? "same-length" : "variable-length")));
// Retrieve a list of unique common blocks from this list of peers
List<BlockSummaryData> commonBlocks = this.uniqueCommonBlocks(peers);
// Order common blocks by height, in ascending order
// This is essential for the logic below to make the correct decisions when discarding chains - do not remove
commonBlocks.sort((b1, b2) -> Integer.valueOf(b1.getHeight()).compareTo(Integer.valueOf(b2.getHeight())));
// Get our latest height
final int ourHeight = ourLatestBlockData.getHeight();
// Create a placeholder to track common blocks that we can discard due to being on inferior chains
int dropPeersAfterCommonBlockHeight = 0;
// Remove peers with no common block data
Iterator<Peer> iterator = peers.iterator();
while (iterator.hasNext()) {
Peer peer = iterator.next();
if (peer.getCommonBlockData() == null) {
LOGGER.debug(String.format("Removed peer %s because it has no common block data", peer));
iterator.remove();
}
}
// Loop through each group of common blocks
for (BlockSummaryData commonBlockSummary : commonBlocks) {
List<Peer> peersSharingCommonBlock = peers.stream().filter(peer -> peer.getCommonBlockData().getCommonBlockSummary().equals(commonBlockSummary)).collect(Collectors.toList());
// Check if we need to discard this group of peers
if (dropPeersAfterCommonBlockHeight > 0) {
if (commonBlockSummary.getHeight() > dropPeersAfterCommonBlockHeight) {
// We have already determined that the correct chain diverged from a lower height. We are safe to skip these peers.
for (Peer peer : peersSharingCommonBlock) {
LOGGER.debug(String.format("Peer %s has common block at height %d but the superior chain is at height %d. Removing it from this round.", peer, commonBlockSummary.getHeight(), dropPeersAfterCommonBlockHeight));
Controller.getInstance().addInferiorChainSignature(peer.getChainTipData().getLastBlockSignature());
}
continue;
}
}
// Calculate the length of the shortest peer chain sharing this common block, including our chain
final int ourAdditionalBlocksAfterCommonBlock = ourHeight - commonBlockSummary.getHeight();
int minChainLength = this.calculateMinChainLengthOfPeers(peersSharingCommonBlock, commonBlockSummary);
// Fetch block summaries from each peer
for (Peer peer : peersSharingCommonBlock) {
// If we're shutting down, just return the latest peer list
if (Controller.isStopping())
return peers;
// Count the number of blocks this peer has beyond our common block
final PeerChainTipData peerChainTipData = peer.getChainTipData();
final int peerHeight = peerChainTipData.getLastHeight();
final byte[] peerLastBlockSignature = peerChainTipData.getLastBlockSignature();
final int peerAdditionalBlocksAfterCommonBlock = peerHeight - commonBlockSummary.getHeight();
// Limit the number of blocks we are comparing. FUTURE: we could request more in batches, but there may not be a case when this is needed
int summariesRequired = Math.min(peerAdditionalBlocksAfterCommonBlock, MAXIMUM_REQUEST_SIZE);
// Check if we can use the cached common block summaries, by comparing the peer's current chain tip against the peer's chain tip when we last found our common block
boolean useCachedSummaries = false;
if (peer.canUseCachedCommonBlockData()) {
if (peer.getCommonBlockData().getBlockSummariesAfterCommonBlock() != null) {
if (peer.getCommonBlockData().getBlockSummariesAfterCommonBlock().size() == summariesRequired) {
LOGGER.trace(String.format("Using cached block summaries for peer %s", peer));
useCachedSummaries = true;
}
}
}
if (useCachedSummaries == false) {
if (summariesRequired > 0) {
LOGGER.trace(String.format("Requesting %d block summar%s from peer %s after common block %.8s. Peer height: %d", summariesRequired, (summariesRequired != 1 ? "ies" : "y"), peer, Base58.encode(commonBlockSummary.getSignature()), peerHeight));
// Forget any cached summaries
peer.getCommonBlockData().setBlockSummariesAfterCommonBlock(null);
// Request new block summaries
List<BlockSummaryData> blockSummaries = this.getBlockSummaries(peer, commonBlockSummary.getSignature(), summariesRequired);
if (blockSummaries != null) {
LOGGER.trace(String.format("Peer %s returned %d block summar%s", peer, blockSummaries.size(), (blockSummaries.size() != 1 ? "ies" : "y")));
if (blockSummaries.size() < summariesRequired)
// This could mean that the peer has re-orged. Exclude this peer until they return the summaries we expect.
LOGGER.debug(String.format("Peer %s returned %d block summar%s instead of expected %d - excluding them from this round", peer, blockSummaries.size(), (blockSummaries.size() != 1 ? "ies" : "y"), summariesRequired));
else if (blockSummaryWithSignature(peerLastBlockSignature, blockSummaries) == null)
// We don't have a block summary for the peer's reported chain tip, so should exclude it
LOGGER.debug(String.format("Peer %s didn't return a block summary with signature %.8s - excluding them from this round", peer, Base58.encode(peerLastBlockSignature)));
else
// All looks good, so store the retrieved block summaries in the peer's cache
peer.getCommonBlockData().setBlockSummariesAfterCommonBlock(blockSummaries);
}
} else {
// There are no block summaries after this common block
peer.getCommonBlockData().setBlockSummariesAfterCommonBlock(null);
}
}
// Reduce minChainLength if needed. If we don't have any blocks, this peer will be excluded from chain weight comparisons later in the process, so we shouldn't update minChainLength
List <BlockSummaryData> peerBlockSummaries = peer.getCommonBlockData().getBlockSummariesAfterCommonBlock();
if (peerBlockSummaries != null && peerBlockSummaries.size() > 0)
if (peerBlockSummaries.size() < minChainLength)
minChainLength = peerBlockSummaries.size();
}
// Fetch our corresponding block summaries. Limit to MAXIMUM_REQUEST_SIZE, in order to make the comparison fairer, as peers have been limited too
final int ourSummariesRequired = Math.min(ourAdditionalBlocksAfterCommonBlock, MAXIMUM_REQUEST_SIZE);
LOGGER.trace(String.format("About to fetch our block summaries from %d to %d. Our height: %d", commonBlockSummary.getHeight() + 1, commonBlockSummary.getHeight() + ourSummariesRequired, ourHeight));
List<BlockSummaryData> ourBlockSummaries = repository.getBlockRepository().getBlockSummaries(commonBlockSummary.getHeight() + 1, commonBlockSummary.getHeight() + ourSummariesRequired);
if (ourBlockSummaries.isEmpty()) {
LOGGER.debug(String.format("We don't have any block summaries so can't compare our chain against peers with this common block. We can still compare them against each other."));
}
else {
populateBlockSummariesMinterLevels(repository, ourBlockSummaries);
// Reduce minChainLength if we have fewer summaries
if (ourBlockSummaries.size() < minChainLength)
minChainLength = ourBlockSummaries.size();
}
// Create array to hold peers for comparison
List<Peer> superiorPeersForComparison = new ArrayList<>();
// Calculate max height for chain weight comparisons
int maxHeightForChainWeightComparisons = commonBlockSummary.getHeight() + minChainLength;
// Calculate our chain weight
BigInteger ourChainWeight = BigInteger.valueOf(0);
if (ourBlockSummaries.size() > 0)
ourChainWeight = Block.calcChainWeight(commonBlockSummary.getHeight(), commonBlockSummary.getSignature(), ourBlockSummaries, maxHeightForChainWeightComparisons);
NumberFormat formatter = new DecimalFormat("0.###E0");
NumberFormat accurateFormatter = new DecimalFormat("0.################E0");
LOGGER.debug(String.format("Our chain weight based on %d blocks is %s", (usingSameLengthChainWeight ? minChainLength : ourBlockSummaries.size()), formatter.format(ourChainWeight)));
LOGGER.debug(String.format("Listing peers with common block %.8s...", Base58.encode(commonBlockSummary.getSignature())));
for (Peer peer : peersSharingCommonBlock) {
final int peerHeight = peer.getChainTipData().getLastHeight();
final int peerAdditionalBlocksAfterCommonBlock = peerHeight - commonBlockSummary.getHeight();
final CommonBlockData peerCommonBlockData = peer.getCommonBlockData();
if (peerCommonBlockData == null || peerCommonBlockData.getBlockSummariesAfterCommonBlock() == null || peerCommonBlockData.getBlockSummariesAfterCommonBlock().isEmpty()) {
// No response - remove this peer for now
LOGGER.debug(String.format("Peer %s doesn't have any block summaries - removing it from this round", peer));
peers.remove(peer);
continue;
}
final List<BlockSummaryData> peerBlockSummariesAfterCommonBlock = peerCommonBlockData.getBlockSummariesAfterCommonBlock();
populateBlockSummariesMinterLevels(repository, peerBlockSummariesAfterCommonBlock);
// Calculate cumulative chain weight of this blockchain subset, from common block to highest mutual block held by all peers in this group.
LOGGER.debug(String.format("About to calculate chain weight based on %d blocks for peer %s with common block %.8s (peer has %d blocks after common block)", (usingSameLengthChainWeight ? minChainLength : peerBlockSummariesAfterCommonBlock.size()), peer, Base58.encode(commonBlockSummary.getSignature()), peerAdditionalBlocksAfterCommonBlock));
BigInteger peerChainWeight = Block.calcChainWeight(commonBlockSummary.getHeight(), commonBlockSummary.getSignature(), peerBlockSummariesAfterCommonBlock, maxHeightForChainWeightComparisons);
peer.getCommonBlockData().setChainWeight(peerChainWeight);
LOGGER.debug(String.format("Chain weight of peer %s based on %d blocks (%d - %d) is %s", peer, (usingSameLengthChainWeight ? minChainLength : peerBlockSummariesAfterCommonBlock.size()), peerBlockSummariesAfterCommonBlock.get(0).getHeight(), peerBlockSummariesAfterCommonBlock.get(peerBlockSummariesAfterCommonBlock.size()-1).getHeight(), formatter.format(peerChainWeight)));
// Compare against our chain - if our blockchain has greater weight then don't synchronize with peer (or any others in this group)
if (ourChainWeight.compareTo(peerChainWeight) > 0) {
// This peer is on an inferior chain - remove it
LOGGER.debug(String.format("Peer %s is on an inferior chain to us - removing it from this round", peer));
peers.remove(peer);
}
else {
// Our chain is inferior or equal
LOGGER.debug(String.format("Peer %s is on an equal or better chain to us. We will compare the other peers sharing this common block against each other, and drop all peers sharing higher common blocks.", peer));
dropPeersAfterCommonBlockHeight = commonBlockSummary.getHeight();
superiorPeersForComparison.add(peer);
}
}
// Now that we have selected the best peers, compare them against each other and remove any with lower weights
if (superiorPeersForComparison.size() > 0) {
BigInteger bestChainWeight = null;
for (Peer peer : superiorPeersForComparison) {
// Increase bestChainWeight if needed
if (bestChainWeight == null || peer.getCommonBlockData().getChainWeight().compareTo(bestChainWeight) >= 0)
bestChainWeight = peer.getCommonBlockData().getChainWeight();
}
for (Peer peer : superiorPeersForComparison) {
// Check if we should discard an inferior peer
if (peer.getCommonBlockData().getChainWeight().compareTo(bestChainWeight) < 0) {
BigInteger difference = bestChainWeight.subtract(peer.getCommonBlockData().getChainWeight());
LOGGER.debug(String.format("Peer %s has a lower chain weight (difference: %s) than other peer(s) in this group - removing it from this round.", peer, accurateFormatter.format(difference)));
peers.remove(peer);
}
}
// FUTURE: we may want to prefer peers with additional blocks, and compare the additional blocks against each other.
// This would fast track us to the best candidate for the latest block.
// Right now, peers with the exact same chain as us are treated equally to those with an additional block.
}
}
return peers;
} finally {
repository.discardChanges(); // Free repository locks, if any, also in case anything went wrong
}
} catch (DataException e) {
LOGGER.error("Repository issue during peer comparison", e);
return peers;
}
}
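
A small worked illustration of the BigInteger weight comparison carried out above, using made-up values (real weights come from Block.calcChainWeight()):

    import java.math.BigInteger;

    public class ChainWeightComparisonExample {
        public static void main(String[] args) {
            // Illustrative weights only.
            BigInteger peerA = new BigInteger("5000000000000000000000000000000");
            BigInteger peerB = new BigInteger("4900000000000000000000000000000");

            BigInteger bestChainWeight = peerA.max(peerB);            // peerA holds the best weight
            BigInteger difference = bestChainWeight.subtract(peerB);  // the gap that gets logged

            // peerB.compareTo(bestChainWeight) < 0, so peerB would be removed from this round.
            System.out.println("Difference: " + difference);
        }
    }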
private List<BlockSummaryData> uniqueCommonBlocks(List<Peer> peers) {
List<BlockSummaryData> commonBlocks = new ArrayList<>();
for (Peer peer : peers) {
if (peer.getCommonBlockData() != null && peer.getCommonBlockData().getCommonBlockSummary() != null) {
LOGGER.trace(String.format("Peer %s has common block %.8s", peer, Base58.encode(peer.getCommonBlockData().getCommonBlockSummary().getSignature())));
BlockSummaryData commonBlockSummary = peer.getCommonBlockData().getCommonBlockSummary();
if (!commonBlocks.contains(commonBlockSummary))
commonBlocks.add(commonBlockSummary);
}
else {
LOGGER.trace(String.format("Peer %s has no common block data. Skipping...", peer));
}
}
return commonBlocks;
}
private int calculateMinChainLengthOfPeers(List<Peer> peersSharingCommonBlock, BlockSummaryData commonBlockSummary) {
// Calculate the length of the shortest peer chain sharing this common block
int minChainLength = 0;
for (Peer peer : peersSharingCommonBlock) {
final int peerHeight = peer.getChainTipData().getLastHeight();
final int peerAdditionalBlocksAfterCommonBlock = peerHeight - commonBlockSummary.getHeight();
if (peerAdditionalBlocksAfterCommonBlock < minChainLength || minChainLength == 0)
minChainLength = peerAdditionalBlocksAfterCommonBlock;
}
return minChainLength;
}
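
A worked example of the calculation above, with illustrative heights:

    import java.util.List;

    public class MinChainLengthExample {
        public static void main(String[] args) {
            // Common block at height 100; three peers reporting chain tips at 105, 110 and 103.
            int commonBlockHeight = 100;
            List<Integer> peerHeights = List.of(105, 110, 103);

            int minChainLength = 0;
            for (int peerHeight : peerHeights) {
                int additionalBlocks = peerHeight - commonBlockHeight;
                if (additionalBlocks < minChainLength || minChainLength == 0)
                    minChainLength = additionalBlocks;
            }
            System.out.println(minChainLength); // prints 3 - the shortest chain beyond the common block
        }
    }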
private BlockSummaryData blockSummaryWithSignature(byte[] signature, List<BlockSummaryData> blockSummaries) {
if (blockSummaries != null)
return blockSummaries.stream().filter(blockSummary -> Arrays.equals(blockSummary.getSignature(), signature)).findAny().orElse(null);
return null;
}
/**
* Attempt to synchronize blockchain with peer.
* <p>
@@ -96,10 +511,13 @@ public class Synchronizer {
ourInitialHeight, Base58.encode(ourLastBlockSignature), ourLatestBlockData.getTimestamp()));
List<BlockSummaryData> peerBlockSummaries = new ArrayList<>();
SynchronizationResult findCommonBlockResult = fetchSummariesFromCommonBlock(repository, peer, ourInitialHeight, force, peerBlockSummaries);
if (findCommonBlockResult != SynchronizationResult.OK)
SynchronizationResult findCommonBlockResult = fetchSummariesFromCommonBlock(repository, peer, ourInitialHeight, force, peerBlockSummaries, true);
if (findCommonBlockResult != SynchronizationResult.OK) {
// Logging performed by fetchSummariesFromCommonBlock() above
// Clear our common block cache for this peer
peer.setCommonBlockData(null);
return findCommonBlockResult;
}
// First summary is common block
final BlockData commonBlockData = repository.getBlockRepository().fromSignature(peerBlockSummaries.get(0).getSignature());
@@ -175,7 +593,7 @@ public class Synchronizer {
* @throws DataException
* @throws InterruptedException
*/
public SynchronizationResult fetchSummariesFromCommonBlock(Repository repository, Peer peer, int ourHeight, boolean force, List<BlockSummaryData> blockSummariesFromCommon) throws DataException, InterruptedException {
public SynchronizationResult fetchSummariesFromCommonBlock(Repository repository, Peer peer, int ourHeight, boolean force, List<BlockSummaryData> blockSummariesFromCommon, boolean infoLogWhenNotFound) throws DataException, InterruptedException {
// Start by asking for a few recent block hashes as this will cover a majority of reorgs
// Failing that, back off exponentially
int step = INITIAL_BLOCK_STEP;
@@ -204,8 +622,12 @@ public class Synchronizer {
blockSummariesBatch = this.getBlockSummaries(peer, testSignature, step);
if (blockSummariesBatch == null) {
if (infoLogWhenNotFound)
LOGGER.info(String.format("Error while trying to find common block with peer %s", peer));
else
LOGGER.debug(String.format("Error while trying to find common block with peer %s", peer));
// No response - give up this time
LOGGER.info(String.format("Error while trying to find common block with peer %s", peer));
return SynchronizationResult.NO_REPLY;
}
@@ -244,9 +666,13 @@ public class Synchronizer {
// Currently we work forward from common block until we hit a block we don't have
// TODO: rewrite as modified binary search!
int i;
for (i = 1; i < blockSummariesFromCommon.size(); ++i)
for (i = 1; i < blockSummariesFromCommon.size(); ++i) {
if (Controller.isStopping())
return SynchronizationResult.SHUTTING_DOWN;
if (!repository.getBlockRepository().exists(blockSummariesFromCommon.get(i).getSignature()))
break;
}
// Note: index i - 1 isn't cleared: List.subList is fromIndex inclusive to toIndex exclusive
blockSummariesFromCommon.subList(0, i - 1).clear();
@@ -295,6 +721,9 @@ public class Synchronizer {
// Check peer sent valid heights
for (int i = 0; i < moreBlockSummaries.size(); ++i) {
if (Controller.isStopping())
return SynchronizationResult.SHUTTING_DOWN;
++lastSummaryHeight;
BlockSummaryData blockSummary = moreBlockSummaries.get(i);
@@ -316,7 +745,7 @@ public class Synchronizer {
populateBlockSummariesMinterLevels(repository, ourBlockSummaries);
populateBlockSummariesMinterLevels(repository, peerBlockSummaries);
final int mutualHeight = commonBlockHeight - 1 + Math.min(ourBlockSummaries.size(), peerBlockSummaries.size());
final int mutualHeight = commonBlockHeight + Math.min(ourBlockSummaries.size(), peerBlockSummaries.size());
// Calculate cumulative chain weights of both blockchain subsets, from common block to highest mutual block.
BigInteger ourChainWeight = Block.calcChainWeight(commonBlockHeight, commonBlockSig, ourBlockSummaries, mutualHeight);
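
A worked example of the mutualHeight change above, using illustrative numbers:

    public class MutualHeightExample {
        public static void main(String[] args) {
            // Illustrative values: common block at height 100, 5 summaries above it on each side (blocks 101..105).
            int commonBlockHeight = 100;
            int ourSummaryCount = 5;
            int peerSummaryCount = 5;

            int oldMutualHeight = commonBlockHeight - 1 + Math.min(ourSummaryCount, peerSummaryCount);
            int newMutualHeight = commonBlockHeight + Math.min(ourSummaryCount, peerSummaryCount);

            System.out.println(oldMutualHeight); // 104 - stopped one block short of the highest mutual block
            System.out.println(newMutualHeight); // 105 - the actual highest block held by both chains
        }
    }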
@@ -341,52 +770,142 @@ public class Synchronizer {
final byte[] commonBlockSig = commonBlockData.getSignature();
String commonBlockSig58 = Base58.encode(commonBlockSig);
byte[] latestPeerSignature = commonBlockSig;
int height = commonBlockHeight;
LOGGER.debug(() -> String.format("Fetching peer %s chain from height %d, sig %.8s", peer, commonBlockHeight, commonBlockSig58));
int ourHeight = ourInitialHeight;
final int maxRetries = Settings.getInstance().getMaxRetries();
// Overall plan: fetch peer's blocks first, then orphan, then apply
// Convert any leftover (post-common) block summaries into signatures to request from peer
List<byte[]> peerBlockSignatures = peerBlockSummaries.stream().map(BlockSummaryData::getSignature).collect(Collectors.toList());
// Fetch remaining block signatures, if needed
int numberSignaturesRequired = peerBlockSignatures.size() - (peerHeight - commonBlockHeight);
if (numberSignaturesRequired > 0) {
byte[] latestPeerSignature = peerBlockSignatures.isEmpty() ? commonBlockSig : peerBlockSignatures.get(peerBlockSignatures.size() - 1);
LOGGER.trace(String.format("Requesting %d signature%s after height %d, sig %.8s",
numberSignaturesRequired, (numberSignaturesRequired != 1 ? "s": ""), ourHeight, Base58.encode(latestPeerSignature)));
List<byte[]> moreBlockSignatures = this.getBlockSignatures(peer, latestPeerSignature, numberSignaturesRequired);
if (moreBlockSignatures == null || moreBlockSignatures.isEmpty()) {
LOGGER.info(String.format("Peer %s failed to respond with more block signatures after height %d, sig %.8s", peer,
ourHeight, Base58.encode(latestPeerSignature)));
return SynchronizationResult.NO_REPLY;
}
LOGGER.trace(String.format("Received %s signature%s", peerBlockSignatures.size(), (peerBlockSignatures.size() != 1 ? "s" : "")));
peerBlockSignatures.addAll(moreBlockSignatures);
}
// Fetch blocks using signatures
LOGGER.debug(String.format("Fetching new blocks from peer %s", peer));
// Keep a list of blocks received so far
List<Block> peerBlocks = new ArrayList<>();
for (byte[] blockSignature : peerBlockSignatures) {
Block newBlock = this.fetchBlock(repository, peer, blockSignature);
// Calculate the total number of additional blocks this peer has beyond the common block
int additionalPeerBlocksAfterCommonBlock = peerHeight - commonBlockHeight;
// Subtract the number of signatures that we already have, as we don't need to request them again
int numberSignaturesRequired = additionalPeerBlocksAfterCommonBlock - peerBlockSignatures.size();
int retryCount = 0;
while (height < peerHeight) {
if (Controller.isStopping())
return SynchronizationResult.SHUTTING_DOWN;
// Ensure we don't request more than MAXIMUM_REQUEST_SIZE
int numberRequested = Math.min(numberSignaturesRequired, MAXIMUM_REQUEST_SIZE);
// Do we need more signatures?
if (peerBlockSignatures.isEmpty() && numberRequested > 0) {
LOGGER.trace(String.format("Requesting %d signature%s after height %d, sig %.8s",
numberRequested, (numberRequested != 1 ? "s" : ""), height, Base58.encode(latestPeerSignature)));
peerBlockSignatures = this.getBlockSignatures(peer, latestPeerSignature, numberRequested);
if (peerBlockSignatures == null || peerBlockSignatures.isEmpty()) {
LOGGER.info(String.format("Peer %s failed to respond with more block signatures after height %d, sig %.8s", peer,
height, Base58.encode(latestPeerSignature)));
// Clear our cache of common block summaries for this peer, as they are likely to be invalid
CommonBlockData cachedCommonBlockData = peer.getCommonBlockData();
if (cachedCommonBlockData != null)
cachedCommonBlockData.setBlockSummariesAfterCommonBlock(null);
// If we have already received blocks from this peer that are newer than what we currently have, go ahead and apply them
if (peerBlocks.size() > 0) {
final BlockData ourLatestBlockData = repository.getBlockRepository().getLastBlock();
final Block peerLatestBlock = peerBlocks.get(peerBlocks.size() - 1);
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (ourLatestBlockData != null && peerLatestBlock != null && minLatestBlockTimestamp != null) {
// If our latest block is very old....
if (ourLatestBlockData.getTimestamp() < minLatestBlockTimestamp) {
// ... and we have received a block that is more recent than our latest block ...
if (peerLatestBlock.getBlockData().getTimestamp() > ourLatestBlockData.getTimestamp()) {
// ... then apply the blocks, as it takes us a step forward.
// This is particularly useful when starting up a node that was on a small fork when it was last shut down.
// In these cases, we now allow the node to sync forward, and get onto the main chain again.
// Without this, we would require that the node syncs ENTIRELY with this peer,
// and any problems downloading a block would cause all progress to be lost.
LOGGER.debug(String.format("Newly received blocks are %d ms newer than our latest block - so we will apply them", peerLatestBlock.getBlockData().getTimestamp() - ourLatestBlockData.getTimestamp()));
break;
}
}
}
}
// Otherwise, give up and move on to the next peer, to avoid putting our chain into an outdated or incomplete state
return SynchronizationResult.NO_REPLY;
}
numberSignaturesRequired = peerHeight - height - peerBlockSignatures.size();
LOGGER.trace(String.format("Received %s signature%s", peerBlockSignatures.size(), (peerBlockSignatures.size() != 1 ? "s" : "")));
}
if (peerBlockSignatures.isEmpty()) {
LOGGER.trace(String.format("No more signatures or blocks to request from peer %s", peer));
break;
}
byte[] nextPeerSignature = peerBlockSignatures.get(0);
int nextHeight = height + 1;
LOGGER.trace(String.format("Fetching block %d, sig %.8s from %s", nextHeight, Base58.encode(nextPeerSignature), peer));
Block newBlock = this.fetchBlock(repository, peer, nextPeerSignature);
if (newBlock == null) {
LOGGER.info(String.format("Peer %s failed to respond with block for height %d, sig %.8s", peer,
ourHeight, Base58.encode(blockSignature)));
return SynchronizationResult.NO_REPLY;
nextHeight, Base58.encode(nextPeerSignature)));
if (retryCount >= maxRetries) {
// If we have already received blocks from this peer that are newer than what we currently have, go ahead and apply them
if (peerBlocks.size() > 0) {
final BlockData ourLatestBlockData = repository.getBlockRepository().getLastBlock();
final Block peerLatestBlock = peerBlocks.get(peerBlocks.size() - 1);
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (ourLatestBlockData != null && peerLatestBlock != null && minLatestBlockTimestamp != null) {
// If our latest block is very old....
if (ourLatestBlockData.getTimestamp() < minLatestBlockTimestamp) {
// ... and we have received a block that is more recent than our latest block ...
if (peerLatestBlock.getBlockData().getTimestamp() > ourLatestBlockData.getTimestamp()) {
// ... then apply the blocks, as it takes us a step forward.
// This is particularly useful when starting up a node that was on a small fork when it was last shut down.
// In these cases, we now allow the node to sync forward, and get onto the main chain again.
// Without this, we would require that the node syncs ENTIRELY with this peer,
// and any problems downloading a block would cause all progress to be lost.
LOGGER.debug(String.format("Newly received blocks are %d ms newer than our latest block - so we will apply them", peerLatestBlock.getBlockData().getTimestamp() - ourLatestBlockData.getTimestamp()));
break;
}
}
}
}
// Otherwise, give up and move on to the next peer, to avoid putting our chain into an outdated or incomplete state
return SynchronizationResult.NO_REPLY;
} else {
// Re-fetch signatures, in case the peer is now on a different fork
peerBlockSignatures.clear();
numberSignaturesRequired = peerHeight - height;
// Retry until retryCount reaches maxRetries
retryCount++;
int triesRemaining = maxRetries - retryCount;
LOGGER.info(String.format("Re-issuing request to peer %s (%d attempt%s remaining)", peer, triesRemaining, (triesRemaining != 1 ? "s" : "")));
continue;
}
}
// Reset retryCount because the last request succeeded
retryCount = 0;
LOGGER.trace(String.format("Fetched block %d, sig %.8s from %s", nextHeight, Base58.encode(latestPeerSignature), peer));
if (!newBlock.isSignatureValid()) {
LOGGER.info(String.format("Peer %s sent block with invalid signature for height %d, sig %.8s", peer,
ourHeight, Base58.encode(blockSignature)));
nextHeight, Base58.encode(latestPeerSignature)));
return SynchronizationResult.INVALID_DATA;
}
@@ -395,12 +914,18 @@ public class Synchronizer {
transaction.setInitialApprovalStatus();
peerBlocks.add(newBlock);
// Now that we've received this block, we can increase our height and move on to the next one
latestPeerSignature = nextPeerSignature;
peerBlockSignatures.remove(0);
++height;
}
// Unwind to common block (unless common block is our latest block)
LOGGER.debug(String.format("Orphaning blocks back to common block height %d, sig %.8s", commonBlockHeight, commonBlockSig58));
int ourHeight = ourInitialHeight;
LOGGER.debug(String.format("Orphaning blocks back to common block height %d, sig %.8s. Our height: %d", commonBlockHeight, commonBlockSig58, ourHeight));
BlockData orphanBlockData = repository.getBlockRepository().fromHeight(ourHeight);
BlockData orphanBlockData = repository.getBlockRepository().fromHeight(ourInitialHeight);
while (ourHeight > commonBlockHeight) {
if (Controller.isStopping())
return SynchronizationResult.SHUTTING_DOWN;
@@ -422,10 +947,13 @@ public class Synchronizer {
LOGGER.debug(String.format("Orphaned blocks back to height %d, sig %.8s - applying new blocks from peer %s", commonBlockHeight, commonBlockSig58, peer));
for (Block newBlock : peerBlocks) {
if (Controller.isStopping())
return SynchronizationResult.SHUTTING_DOWN;
ValidationResult blockResult = newBlock.isValid();
if (blockResult != ValidationResult.OK) {
LOGGER.info(String.format("Peer %s sent invalid block for height %d, sig %.8s: %s", peer,
ourHeight, Base58.encode(newBlock.getSignature()), blockResult.name()));
newBlock.getBlockData().getHeight(), Base58.encode(newBlock.getSignature()), blockResult.name()));
return SynchronizationResult.INVALID_DATA;
}
@@ -469,7 +997,8 @@ public class Synchronizer {
// Do we need more signatures?
if (peerBlockSignatures.isEmpty()) {
int numberRequested = maxBatchHeight - ourHeight;
int numberRequested = Math.min(maxBatchHeight - ourHeight, MAXIMUM_REQUEST_SIZE);
LOGGER.trace(String.format("Requesting %d signature%s after height %d, sig %.8s",
numberRequested, (numberRequested != 1 ? "s": ""), ourHeight, Base58.encode(latestPeerSignature)));
@@ -573,6 +1102,9 @@ public class Synchronizer {
final int firstBlockHeight = blockSummaries.get(0).getHeight();
for (int i = 0; i < blockSummaries.size(); ++i) {
if (Controller.isStopping())
return;
BlockSummaryData blockSummary = blockSummaries.get(i);
// Qortal: minter is always a reward-share, so find actual minter and get their effective minting level


@@ -23,7 +23,7 @@ public interface AcctTradeBot {
public ResponseResult startResponse(Repository repository, ATData atData, ACCT acct,
CrossChainTradeData crossChainTradeData, String foreignKey, String receivingAddress) throws DataException;
public boolean canDelete(Repository repository, TradeBotData tradeBotData);
public boolean canDelete(Repository repository, TradeBotData tradeBotData) throws DataException;
public void progress(Repository repository, TradeBotData tradeBotData) throws DataException, ForeignBlockchainException;


@@ -345,11 +345,15 @@ public class BitcoinACCTv1TradeBot implements AcctTradeBot {
}
@Override
public boolean canDelete(Repository repository, TradeBotData tradeBotData) {
public boolean canDelete(Repository repository, TradeBotData tradeBotData) throws DataException {
State tradeBotState = State.valueOf(tradeBotData.getStateValue());
if (tradeBotState == null)
return true;
// If the AT doesn't exist then we might as well let the user tidy up
if (!repository.getATRepository().exists(tradeBotData.getAtAddress()))
return true;
switch (tradeBotState) {
case BOB_WAITING_FOR_AT_CONFIRM:
case ALICE_DONE:
@@ -378,7 +382,16 @@ public class BitcoinACCTv1TradeBot implements AcctTradeBot {
// Attempt to fetch AT data
atData = repository.getATRepository().fromATAddress(tradeBotData.getAtAddress());
if (atData == null) {
LOGGER.warn(() -> String.format("Unable to fetch trade AT %s from repository", tradeBotData.getAtAddress()));
LOGGER.debug(() -> String.format("Unable to fetch trade AT %s from repository", tradeBotData.getAtAddress()));
// If it has been over 24 hours since we last updated this trade-bot entry then assume AT is never coming back
// and so wipe the trade-bot entry
if (tradeBotData.getTimestamp() + MAX_AT_CONFIRMATION_PERIOD < NTP.getTime()) {
LOGGER.info(() -> String.format("AT %s has been gone for too long - deleting trade-bot entry", tradeBotData.getAtAddress()));
repository.getCrossChainRepository().delete(tradeBotData.getTradePrivateKey());
repository.saveChanges();
}
return;
}


@@ -211,6 +211,9 @@ public class LitecoinACCTv1TradeBot implements AcctTradeBot {
TradeBot.updateTradeBotState(repository, tradeBotData, () -> String.format("Built AT %s. Waiting for deployment", atAddress));
// Attempt to backup the trade bot data
TradeBot.backupTradeBotData(repository);
// Return to user for signing and broadcast as we don't have their Qortal private key
try {
return DeployAtTransactionTransformer.toBytes(deployAtTransactionData);
@@ -283,6 +286,9 @@ public class LitecoinACCTv1TradeBot implements AcctTradeBot {
tradeForeignPublicKey, tradeForeignPublicKeyHash,
crossChainTradeData.expectedForeignAmount, xprv58, null, lockTimeA, receivingPublicKeyHash);
// Attempt to backup the trade bot data
TradeBot.backupTradeBotData(repository);
// Check we have enough funds via xprv58 to fund P2SH to cover expectedForeignAmount
long p2shFee;
try {
@@ -343,11 +349,15 @@ public class LitecoinACCTv1TradeBot implements AcctTradeBot {
}
@Override
public boolean canDelete(Repository repository, TradeBotData tradeBotData) {
public boolean canDelete(Repository repository, TradeBotData tradeBotData) throws DataException {
State tradeBotState = State.valueOf(tradeBotData.getStateValue());
if (tradeBotState == null)
return true;
// If the AT doesn't exist then we might as well let the user tidy up
if (!repository.getATRepository().exists(tradeBotData.getAtAddress()))
return true;
switch (tradeBotState) {
case BOB_WAITING_FOR_AT_CONFIRM:
case ALICE_DONE:
@@ -376,7 +386,16 @@ public class LitecoinACCTv1TradeBot implements AcctTradeBot {
// Attempt to fetch AT data
atData = repository.getATRepository().fromATAddress(tradeBotData.getAtAddress());
if (atData == null) {
LOGGER.warn(() -> String.format("Unable to fetch trade AT %s from repository", tradeBotData.getAtAddress()));
LOGGER.debug(() -> String.format("Unable to fetch trade AT %s from repository", tradeBotData.getAtAddress()));
// If it has been over 24 hours since we last updated this trade-bot entry then assume AT is never coming back
// and so wipe the trade-bot entry
if (tradeBotData.getTimestamp() + MAX_AT_CONFIRMATION_PERIOD < NTP.getTime()) {
LOGGER.info(() -> String.format("AT %s has been gone for too long - deleting trade-bot entry", tradeBotData.getAtAddress()));
repository.getCrossChainRepository().delete(tradeBotData.getTradePrivateKey());
repository.saveChanges();
}
return;
}


@@ -7,6 +7,7 @@ import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.locks.ReentrantLock;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -267,6 +268,16 @@ public class TradeBot implements Listener {
return secret;
}
/*package*/ static void backupTradeBotData(Repository repository) {
// Attempt to backup the trade bot data. This is an optional step and doesn't impact trading, so don't throw an exception on failure
try {
LOGGER.info("About to backup trade bot data...");
repository.exportNodeLocalData();
} catch (DataException e) {
LOGGER.info(String.format("Repository issue when exporting trade bot data: %s", e.getMessage()));
}
}
/** Updates trade-bot entry to new state, with current timestamp, logs message and notifies state-change listeners. */
/*package*/ static void updateTradeBotState(Repository repository, TradeBotData tradeBotData,
String newState, int newStateValue, Supplier<String> logMessageSupplier) throws DataException {


@@ -42,35 +42,32 @@ public class Bitcoin extends Bitcoiny {
public Collection<ElectrumX.Server> getServers() {
return Arrays.asList(
// Servers chosen on NO BASIS WHATSOEVER from various sources!
new Server("enode.duckdns.org", Server.ConnectionType.SSL, 50002),
new Server("electrumx.ml", Server.ConnectionType.SSL, 50002),
new Server("electrum.bitkoins.nl", Server.ConnectionType.SSL, 50512),
new Server("btc.electroncash.dk", Server.ConnectionType.SSL, 60002),
new Server("electrumx.electricnewyear.net", Server.ConnectionType.SSL, 50002),
new Server("dxm.no-ip.biz", Server.ConnectionType.TCP, 50001),
new Server("kirsche.emzy.de", Server.ConnectionType.TCP, 50001),
new Server("2AZZARITA.hopto.org", Server.ConnectionType.TCP, 50001),
new Server("xtrum.com", Server.ConnectionType.TCP, 50001),
new Server("electrum.srvmin.network", Server.ConnectionType.TCP, 50001),
new Server("electrumx.alexridevski.net", Server.ConnectionType.TCP, 50001),
new Server("bitcoin.lukechilds.co", Server.ConnectionType.TCP, 50001),
new Server("electrum.poiuty.com", Server.ConnectionType.TCP, 50001),
new Server("horsey.cryptocowboys.net", Server.ConnectionType.TCP, 50001),
new Server("128.0.190.26", Server.ConnectionType.SSL, 50002),
new Server("hodlers.beer", Server.ConnectionType.SSL, 50002),
new Server("electrumx.erbium.eu", Server.ConnectionType.TCP, 50001),
new Server("electrumx.erbium.eu", Server.ConnectionType.SSL, 50002),
new Server("btc.lastingcoin.net", Server.ConnectionType.SSL, 50002),
new Server("electrum.bitaroo.net", Server.ConnectionType.SSL, 50002),
new Server("bitcoin.grey.pw", Server.ConnectionType.SSL, 50002),
new Server("2electrumx.hopto.me", Server.ConnectionType.SSL, 56022),
new Server("185.64.116.15", Server.ConnectionType.SSL, 50002),
new Server("kirsche.emzy.de", Server.ConnectionType.SSL, 50002),
new Server("alviss.coinjoined.com", Server.ConnectionType.SSL, 50002),
new Server("electrum.emzy.de", Server.ConnectionType.SSL, 50002),
new Server("electrum.emzy.de", Server.ConnectionType.TCP, 50001),
new Server("electrum-server.ninja", Server.ConnectionType.TCP, 50081),
new Server("bitcoin.electrumx.multicoin.co", Server.ConnectionType.TCP, 50001),
new Server("esx.geekhosters.com", Server.ConnectionType.TCP, 50001),
new Server("bitcoin.grey.pw", Server.ConnectionType.TCP, 50003),
new Server("exs.ignorelist.com", Server.ConnectionType.TCP, 50001),
new Server("electrum.coinext.com.br", Server.ConnectionType.TCP, 50001),
new Server("bitcoin.aranguren.org", Server.ConnectionType.TCP, 50001),
new Server("skbxmit.coinjoined.com", Server.ConnectionType.TCP, 50001),
new Server("alviss.coinjoined.com", Server.ConnectionType.TCP, 50001),
new Server("electrum2.privateservers.network", Server.ConnectionType.TCP, 50001),
new Server("electrumx.schulzemic.net", Server.ConnectionType.TCP, 50001),
new Server("bitcoins.sk", Server.ConnectionType.TCP, 56001),
new Server("node.mendonca.xyz", Server.ConnectionType.TCP, 50001),
new Server("bitcoin.aranguren.org", Server.ConnectionType.TCP, 50001));
new Server("vmd71287.contaboserver.net", Server.ConnectionType.SSL, 50002),
new Server("btc.litepay.ch", Server.ConnectionType.SSL, 50002),
new Server("electrum.stippy.com", Server.ConnectionType.SSL, 50002),
new Server("xtrum.com", Server.ConnectionType.SSL, 50002),
new Server("electrum.acinq.co", Server.ConnectionType.SSL, 50002),
new Server("electrum2.taborsky.cz", Server.ConnectionType.SSL, 50002),
new Server("vmd63185.contaboserver.net", Server.ConnectionType.SSL, 50002),
new Server("electrum2.privateservers.network", Server.ConnectionType.SSL, 50002),
new Server("electrumx.alexridevski.net", Server.ConnectionType.SSL, 50002),
new Server("192.166.219.200", Server.ConnectionType.SSL, 50002),
new Server("2ex.digitaleveryware.com", Server.ConnectionType.SSL, 50002),
new Server("dxm.no-ip.biz", Server.ConnectionType.SSL, 50002),
new Server("caleb.vegas", Server.ConnectionType.SSL, 50002));
}
@Override
@@ -96,10 +93,8 @@ public class Bitcoin extends Bitcoiny {
@Override
public Collection<ElectrumX.Server> getServers() {
return Arrays.asList(
new Server("electrum.blockstream.info", Server.ConnectionType.TCP, 60001),
new Server("electrum.blockstream.info", Server.ConnectionType.SSL, 60002),
new Server("tn.not.fyi", Server.ConnectionType.SSL, 55002),
new Server("electrumx-test.1209k.com", Server.ConnectionType.SSL, 50002),
new Server("testnet.qtornado.com", Server.ConnectionType.TCP, 51001),
new Server("testnet.qtornado.com", Server.ConnectionType.SSL, 51002),
new Server("testnet.aranguren.org", Server.ConnectionType.TCP, 51001),
new Server("testnet.aranguren.org", Server.ConnectionType.SSL, 51002),


@@ -91,7 +91,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
return this.params;
}
// Interface obligations
// Interface obligations
@Override
public boolean isValidAddress(String address) {
@@ -171,7 +171,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
/**
* Returns fixed P2SH spending fee, in sats per 1000bytes, optionally for historic timestamp.
*
*
* @param timestamp optional milliseconds since epoch, or null for 'now'
* @return sats per 1000bytes
* @throws ForeignBlockchainException if something went wrong
@@ -271,7 +271,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
/**
* Returns bitcoinj transaction sending <tt>amount</tt> to <tt>recipient</tt>.
*
*
* @param xprv58 BIP32 private key
* @param recipient P2PKH address
* @param amount unscaled amount
@@ -303,7 +303,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
/**
* Returns bitcoinj transaction sending <tt>amount</tt> to <tt>recipient</tt> using default fees.
*
*
* @param xprv58 BIP32 private key
* @param recipient P2PKH address
* @param amount unscaled amount
@@ -332,7 +332,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
return balance.value;
}
public List<BitcoinyTransaction> getWalletTransactions(String key58) throws ForeignBlockchainException {
public List<SimpleTransaction> getWalletTransactions(String key58) throws ForeignBlockchainException {
Context.propagate(bitcoinjContext);
Wallet wallet = walletFromDeterministicKey58(key58);
@@ -344,6 +344,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
List<DeterministicKey> keys = new ArrayList<>(keyChain.getLeafKeys());
Set<BitcoinyTransaction> walletTransactions = new HashSet<>();
Set<String> keySet = new HashSet<>();
int ki = 0;
do {
@@ -354,6 +355,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
// Check for transactions
Address address = Address.fromKey(this.params, dKey, ScriptType.P2PKH);
keySet.add(address.toString());
byte[] script = ScriptBuilder.createOutputScript(address).getProgram();
// Ask for transaction history - if it's empty then key has never been used
@@ -377,9 +379,41 @@ public abstract class Bitcoiny implements ForeignBlockchain {
// Process new keys
} while (true);
Comparator<BitcoinyTransaction> newestTimestampFirstComparator = Comparator.comparingInt((BitcoinyTransaction txn) -> txn.timestamp).reversed();
Comparator<SimpleTransaction> newestTimestampFirstComparator = Comparator.comparingInt(SimpleTransaction::getTimestamp).reversed();
return walletTransactions.stream().sorted(newestTimestampFirstComparator).collect(Collectors.toList());
return walletTransactions.stream().map(t -> convertToSimpleTransaction(t, keySet)).sorted(newestTimestampFirstComparator).collect(Collectors.toList());
}
protected SimpleTransaction convertToSimpleTransaction(BitcoinyTransaction t, Set<String> keySet) {
long amount = 0;
long total = 0L;
for (BitcoinyTransaction.Input input : t.inputs) {
try {
BitcoinyTransaction t2 = getTransaction(input.outputTxHash);
List<String> senders = t2.outputs.get(input.outputVout).addresses;
for (String sender : senders) {
if (keySet.contains(sender)) {
total += t2.outputs.get(input.outputVout).value;
}
}
} catch (ForeignBlockchainException e) {
LOGGER.trace("Failed to retrieve transaction information {}", input.outputTxHash);
}
}
if (t.outputs != null && !t.outputs.isEmpty()) {
for (BitcoinyTransaction.Output output : t.outputs) {
for (String address : output.addresses) {
if (keySet.contains(address)) {
if (total > 0L) {
amount -= (total - output.value);
} else {
amount += output.value;
}
}
}
}
}
return new SimpleTransaction(t.txHash, t.timestamp, amount);
}
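
A worked example of the amount arithmetic above, with illustrative values rather than a real transaction:

    public class SimpleTransactionAmountExample {
        public static void main(String[] args) {
            // Outgoing case: our wallet spent a single 100,000 sat output (total = 100,000); the
            // transaction pays 60,000 sats to an external address plus 39,000 sats change back to
            // one of our own addresses, leaving 1,000 sats as the miner fee.
            long total = 100_000L;
            long changeToOurAddress = 39_000L;
            long outgoingAmount = 0L;
            outgoingAmount -= (total - changeToOurAddress);
            System.out.println(outgoingAmount); // -61000: 60,000 sent plus the 1,000 fee

            // Incoming case: no inputs belong to our keys (total stays 0) and a single
            // 25,000 sat output pays an address in keySet, so the amount is simply added.
            long incomingAmount = 0L;
            incomingAmount += 25_000L;
            System.out.println(incomingAmount); // 25000
        }
    }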
/**
@@ -421,7 +455,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
* If there are no unspent outputs then either:
* a) all the outputs have been spent
* b) address has never been used
*
*
* For case (a) we want to remember not to check this address (key) again.
*/
@@ -501,7 +535,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
* If there are no unspent outputs then either:
* a) all the outputs have been spent
* b) address has never been used
*
*
* For case (a) we want to remember not to check this address (key) again.
*/


@@ -10,7 +10,6 @@ import java.util.Map;
import java.util.function.Function;
import org.bitcoinj.core.Address;
import org.bitcoinj.core.Base58;
import org.bitcoinj.core.Coin;
import org.bitcoinj.core.ECKey;
import org.bitcoinj.core.LegacyAddress;
@@ -25,6 +24,7 @@ import org.bitcoinj.script.ScriptBuilder;
import org.bitcoinj.script.ScriptChunk;
import org.bitcoinj.script.ScriptOpCodes;
import org.qortal.crypto.Crypto;
import org.qortal.utils.Base58;
import org.qortal.utils.BitTwiddling;
import com.google.common.hash.HashCode;


@@ -0,0 +1,32 @@
package org.qortal.crosschain;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
@XmlAccessorType(XmlAccessType.FIELD)
public class SimpleTransaction {
private String txHash;
private Integer timestamp;
private long totalAmount;
public SimpleTransaction() {
}
public SimpleTransaction(String txHash, Integer timestamp, long totalAmount) {
this.txHash = txHash;
this.timestamp = timestamp;
this.totalAmount = totalAmount;
}
public String getTxHash() {
return txHash;
}
public Integer getTimestamp() {
return timestamp;
}
public long getTotalAmount() {
return totalAmount;
}
}


@@ -2,6 +2,7 @@ package org.qortal.data.block;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.util.Arrays;
@XmlAccessorType(XmlAccessType.FIELD)
public class BlockSummaryData {
@@ -84,4 +85,21 @@ public class BlockSummaryData {
this.minterLevel = minterLevel;
}
@Override
public boolean equals(Object o) {
if (this == o)
return true;
if (o == null || getClass() != o.getClass())
return false;
BlockSummaryData otherBlockSummary = (BlockSummaryData) o;
if (this.getSignature() == null || otherBlockSummary.getSignature() == null)
return false;
// Treat two block summaries as equal if they have matching signatures
return Arrays.equals(this.getSignature(), otherBlockSummary.getSignature());
}
}
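Since equals() is now overridden, any use of BlockSummaryData in hash-based collections would also want a matching hashCode(); the hunk above does not add one. A minimal sketch, assuming the same signature-based identity:
@Override
public int hashCode() {
    // Keep consistent with equals(): identity is the block signature
    return Arrays.hashCode(this.getSignature());
}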


@@ -0,0 +1,56 @@
package org.qortal.data.block;
import org.qortal.data.network.PeerChainTipData;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.math.BigInteger;
import java.util.List;
@XmlAccessorType(XmlAccessType.FIELD)
public class CommonBlockData {
// Properties
private BlockSummaryData commonBlockSummary = null;
private List<BlockSummaryData> blockSummariesAfterCommonBlock = null;
private BigInteger chainWeight = null;
private PeerChainTipData chainTipData = null;
// Constructors
protected CommonBlockData() {
}
public CommonBlockData(BlockSummaryData commonBlockSummary, PeerChainTipData chainTipData) {
this.commonBlockSummary = commonBlockSummary;
this.chainTipData = chainTipData;
}
// Getters / setters
public BlockSummaryData getCommonBlockSummary() {
return this.commonBlockSummary;
}
public List<BlockSummaryData> getBlockSummariesAfterCommonBlock() {
return this.blockSummariesAfterCommonBlock;
}
public void setBlockSummariesAfterCommonBlock(List<BlockSummaryData> blockSummariesAfterCommonBlock) {
this.blockSummariesAfterCommonBlock = blockSummariesAfterCommonBlock;
}
public BigInteger getChainWeight() {
return this.chainWeight;
}
public void setChainWeight(BigInteger chainWeight) {
this.chainWeight = chainWeight;
}
public PeerChainTipData getChainTipData() {
return this.chainTipData;
}
}


@@ -6,6 +6,9 @@ import javax.xml.bind.annotation.XmlTransient;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
import io.swagger.v3.oas.annotations.media.Schema;
import org.json.JSONObject;
import org.qortal.utils.Base58;
// All properties to be converted to JSON via JAXB
@XmlAccessorType(XmlAccessType.FIELD)
@@ -205,6 +208,58 @@ public class TradeBotData {
return this.receivingAccountInfo;
}
public JSONObject toJson() {
JSONObject jsonObject = new JSONObject();
jsonObject.put("tradePrivateKey", Base58.encode(this.getTradePrivateKey()));
jsonObject.put("acctName", this.getAcctName());
jsonObject.put("tradeState", this.getState());
jsonObject.put("tradeStateValue", this.getStateValue());
jsonObject.put("creatorAddress", this.getCreatorAddress());
jsonObject.put("atAddress", this.getAtAddress());
jsonObject.put("timestamp", this.getTimestamp());
jsonObject.put("qortAmount", this.getQortAmount());
if (this.getTradeNativePublicKey() != null) jsonObject.put("tradeNativePublicKey", Base58.encode(this.getTradeNativePublicKey()));
if (this.getTradeNativePublicKeyHash() != null) jsonObject.put("tradeNativePublicKeyHash", Base58.encode(this.getTradeNativePublicKeyHash()));
jsonObject.put("tradeNativeAddress", this.getTradeNativeAddress());
if (this.getSecret() != null) jsonObject.put("secret", Base58.encode(this.getSecret()));
if (this.getHashOfSecret() != null) jsonObject.put("hashOfSecret", Base58.encode(this.getHashOfSecret()));
jsonObject.put("foreignBlockchain", this.getForeignBlockchain());
if (this.getTradeForeignPublicKey() != null) jsonObject.put("tradeForeignPublicKey", Base58.encode(this.getTradeForeignPublicKey()));
if (this.getTradeForeignPublicKeyHash() != null) jsonObject.put("tradeForeignPublicKeyHash", Base58.encode(this.getTradeForeignPublicKeyHash()));
jsonObject.put("foreignKey", this.getForeignKey());
jsonObject.put("foreignAmount", this.getForeignAmount());
if (this.getLastTransactionSignature() != null) jsonObject.put("lastTransactionSignature", Base58.encode(this.getLastTransactionSignature()));
jsonObject.put("lockTimeA", this.getLockTimeA());
if (this.getReceivingAccountInfo() != null) jsonObject.put("receivingAccountInfo", Base58.encode(this.getReceivingAccountInfo()));
return jsonObject;
}
public static TradeBotData fromJson(JSONObject json) {
return new TradeBotData(
json.isNull("tradePrivateKey") ? null : Base58.decode(json.getString("tradePrivateKey")),
json.isNull("acctName") ? null : json.getString("acctName"),
json.isNull("tradeState") ? null : json.getString("tradeState"),
json.isNull("tradeStateValue") ? null : json.getInt("tradeStateValue"),
json.isNull("creatorAddress") ? null : json.getString("creatorAddress"),
json.isNull("atAddress") ? null : json.getString("atAddress"),
json.isNull("timestamp") ? null : json.getLong("timestamp"),
json.isNull("qortAmount") ? null : json.getLong("qortAmount"),
json.isNull("tradeNativePublicKey") ? null : Base58.decode(json.getString("tradeNativePublicKey")),
json.isNull("tradeNativePublicKeyHash") ? null : Base58.decode(json.getString("tradeNativePublicKeyHash")),
json.isNull("tradeNativeAddress") ? null : json.getString("tradeNativeAddress"),
json.isNull("secret") ? null : Base58.decode(json.getString("secret")),
json.isNull("hashOfSecret") ? null : Base58.decode(json.getString("hashOfSecret")),
json.isNull("foreignBlockchain") ? null : json.getString("foreignBlockchain"),
json.isNull("tradeForeignPublicKey") ? null : Base58.decode(json.getString("tradeForeignPublicKey")),
json.isNull("tradeForeignPublicKeyHash") ? null : Base58.decode(json.getString("tradeForeignPublicKeyHash")),
json.isNull("foreignAmount") ? null : json.getLong("foreignAmount"),
json.isNull("foreignKey") ? null : json.getString("foreignKey"),
json.isNull("lastTransactionSignature") ? null : Base58.decode(json.getString("lastTransactionSignature")),
json.isNull("lockTimeA") ? null : json.getInt("lockTimeA"),
json.isNull("receivingAccountInfo") ? null : Base58.decode(json.getString("receivingAccountInfo"))
);
}
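A hypothetical round-trip of the two methods above (tradeBotData stands for any existing instance); the isNull() guards mean the optional byte[] fields survive the trip as Base58 strings or stay null:
JSONObject json = tradeBotData.toJson();
TradeBotData restored = TradeBotData.fromJson(json);
// e.g. the AT address should come back unchanged
assert restored.getAtAddress().equals(tradeBotData.getAtAddress());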
// Mostly for debugging
public String toString() {
return String.format("%s: %s (%d)", this.atAddress, this.tradeState, this.tradeStateValue);


@@ -4,7 +4,6 @@ import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -51,7 +50,7 @@ public enum Handshake {
String versionString = helloMessage.getVersionString();
Matcher matcher = VERSION_PATTERN.matcher(versionString);
Matcher matcher = peer.VERSION_PATTERN.matcher(versionString);
if (!matcher.lookingAt()) {
LOGGER.debug(() -> String.format("Peer %s sent invalid HELLO version string '%s'", peer, versionString));
return null;
@@ -72,6 +71,15 @@ public enum Handshake {
peer.setPeersConnectionTimestamp(peersConnectionTimestamp);
peer.setPeersVersion(versionString, version);
if (Settings.getInstance().getAllowConnectionsWithOlderPeerVersions() == false) {
// Ensure the peer is running at least the minimum version allowed for connections
final String minPeerVersion = Settings.getInstance().getMinPeerVersion();
if (peer.isAtLeastVersion(minPeerVersion) == false) {
LOGGER.debug(String.format("Ignoring peer %s because it is on an old version (%s)", peer, versionString));
return null;
}
}
return CHALLENGE;
}
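The new check relies on Peer.isAtLeastVersion(), which is not part of this diff. Judging from the packed-version constant shown further down (PEER_VERSION_131 = 0x0100030001L, i.e. 1.3.1 with 16 bits per component), a comparison along the following lines is presumably what it does; this is an assumption, not the committed implementation:
// Hypothetical sketch: pack "major.minor.patch" into a long (16 bits per component) and compare
private static long packVersion(String version) {
    String[] parts = version.split("\\.");
    return (Long.parseLong(parts[0]) << 32) | (Long.parseLong(parts[1]) << 16) | Long.parseLong(parts[2]);
}
// packVersion("1.3.1") == 0x0100030001L, packVersion("1.5.0") == 0x0100050000L,
// so a peer would be "at least" a given version when its packed value is >= the packed minimum.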
@@ -244,8 +252,6 @@ public enum Handshake {
/** Maximum allowed difference between peer's reported timestamp and when they connected, in milliseconds. */
private static final long MAX_TIMESTAMP_DELTA = 30 * 1000L; // ms
private static final Pattern VERSION_PATTERN = Pattern.compile(Controller.VERSION_PREFIX + "(\\d{1,3})\\.(\\d{1,5})\\.(\\d{1,5})");
private static final long PEER_VERSION_131 = 0x0100030001L;
private static final int POW_BUFFER_SIZE_PRE_131 = 8 * 1024 * 1024; // bytes

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,70 @@
package org.qortal.network.message;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import org.qortal.block.Block;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformer;
import com.google.common.primitives.Ints;
// This is an OUTGOING-only Message which more readily lends itself to being cached
public class CachedBlockMessage extends Message {
private Block block = null;
private byte[] cachedBytes = null;
public CachedBlockMessage(Block block) {
super(MessageType.BLOCK);
this.block = block;
}
private CachedBlockMessage(byte[] cachedBytes) {
super(MessageType.BLOCK);
this.block = null;
this.cachedBytes = cachedBytes;
}
public static Message fromByteBuffer(int id, ByteBuffer byteBuffer) throws UnsupportedEncodingException {
throw new UnsupportedOperationException("CachedBlockMessage is for outgoing messages only");
}
@Override
protected byte[] toData() {
// Already serialized?
if (this.cachedBytes != null)
return cachedBytes;
if (this.block == null)
return null;
try {
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bytes.write(Ints.toByteArray(this.block.getBlockData().getHeight()));
bytes.write(BlockTransformer.toBytes(this.block));
this.cachedBytes = bytes.toByteArray();
// We no longer need source Block
// and Block contains repository handle which is highly likely to be invalid after this call
this.block = null;
return this.cachedBytes;
} catch (TransformationException | IOException e) {
return null;
}
}
public CachedBlockMessage cloneWithNewId(int newId) {
CachedBlockMessage clone = new CachedBlockMessage(this.cachedBytes);
clone.setId(newId);
return clone;
}
}
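One caveat on the class above: cloneWithNewId() copies only cachedBytes, so a clone made before toData() has ever run would carry neither a Block nor any bytes. A hypothetical fan-out sketch (block, firstPeer and otherPeers are placeholders, the Peer method names are assumptions, and sending is presumed to trigger serialization):
CachedBlockMessage cachedMessage = new CachedBlockMessage(block);
firstPeer.sendMessage(cachedMessage);          // presumably serializes, populating cachedBytes
for (Peer peer : otherPeers) {
    Message outgoing = cachedMessage.cloneWithNewId(peer.getNextMessageId());  // hypothetical id source
    peer.sendMessage(outgoing);                // every peer shares the same cached bytes
}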


@@ -98,7 +98,7 @@ public interface ATRepository {
*/
public List<ATStateData> getMatchingFinalATStatesQuorum(byte[] codeHash, Boolean isFinished,
Integer dataByteOffset, Long expectedValue,
int minimumCount, long minimumPeriod) throws DataException;
int minimumCount, int maximumCount, long minimumPeriod) throws DataException;
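A hypothetical call against the widened signature above, with illustrative argument values (repository, codeHash, dataByteOffset and expectedValue are placeholders, and the accessor/semantics are assumed):
List<ATStateData> states = repository.getATRepository().getMatchingFinalATStatesQuorum(
        codeHash, Boolean.TRUE,        // isFinished (assumed: only finished ATs)
        dataByteOffset, expectedValue,
        10,                            // minimumCount
        100,                           // maximumCount -- the new LIMIT applied in the HSQLDB implementation below
        24 * 60 * 60 * 1000L);         // minimumPeriod in milliseconds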
/**
* Returns all ATStateData for a given block height.


@@ -1,5 +1,6 @@
package org.qortal.repository;
import java.util.EnumSet;
import java.util.List;
import java.util.Map;
@@ -251,6 +252,14 @@ public interface TransactionRepository {
*/
public List<TransactionData> getUnconfirmedTransactions(TransactionType txType, byte[] creatorPublicKey) throws DataException;
/**
* Returns list of unconfirmed transactions excluding specified type(s).
*
* @return list of transactions, or empty if none.
* @throws DataException
*/
public List<TransactionData> getUnconfirmedTransactions(EnumSet<TransactionType> excludedTxTypes) throws DataException;
/**
* Remove transaction from unconfirmed transactions pile.
*


@@ -454,7 +454,7 @@ public class HSQLDBATRepository implements ATRepository {
@Override
public List<ATStateData> getMatchingFinalATStatesQuorum(byte[] codeHash, Boolean isFinished,
Integer dataByteOffset, Long expectedValue,
int minimumCount, long minimumPeriod) throws DataException {
int minimumCount, int maximumCount, long minimumPeriod) throws DataException {
// We need most recent entry first so we can use its timestamp to slice further results
List<ATStateData> mostRecentStates = this.getMatchingFinalATStates(codeHash, isFinished,
dataByteOffset, expectedValue, null,
@@ -510,7 +510,8 @@ public class HSQLDBATRepository implements ATRepository {
bindParams.add(minimumHeight);
bindParams.add(minimumCount);
sql.append("ORDER BY FinalATStates.height DESC");
sql.append("ORDER BY FinalATStates.height DESC LIMIT ?");
bindParams.add(maximumCount);
List<ATStateData> atStates = new ArrayList<>();
@@ -541,9 +542,9 @@ public class HSQLDBATRepository implements ATRepository {
public List<ATStateData> getBlockATStatesAtHeight(int height) throws DataException {
String sql = "SELECT AT_address, state_hash, fees, is_initial "
+ "FROM ATs "
+ "LEFT OUTER JOIN ATStates "
+ "ON ATStates.AT_address = ATs.AT_address AND height = ? "
+ "WHERE ATStates.AT_address IS NOT NULL "
+ "JOIN ATStates "
+ "ON ATStates.AT_address = ATs.AT_address "
+ "WHERE height = ? "
+ "ORDER BY created_when ASC";
List<ATStateData> atStates = new ArrayList<>();


@@ -2,6 +2,7 @@ package org.qortal.repository.hsqldb;
import java.awt.TrayIcon.MessageType;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.math.BigDecimal;
import java.nio.file.Files;
@@ -15,23 +16,19 @@ import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Savepoint;
import java.sql.Statement;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Comparator;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;
import org.json.JSONArray;
import org.json.JSONObject;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.crypto.Crypto;
import org.qortal.data.crosschain.TradeBotData;
import org.qortal.globalization.Translator;
import org.qortal.gui.SysTray;
import org.qortal.repository.ATRepository;
@@ -52,6 +49,7 @@ import org.qortal.repository.TransactionRepository;
import org.qortal.repository.VotingRepository;
import org.qortal.repository.hsqldb.transaction.HSQLDBTransactionRepository;
import org.qortal.settings.Settings;
import org.qortal.utils.Base58;
public class HSQLDBRepository implements Repository {
@@ -460,28 +458,68 @@ public class HSQLDBRepository implements Repository {
@Override
public void exportNodeLocalData() throws DataException {
try (Statement stmt = this.connection.createStatement()) {
stmt.execute("PERFORM EXPORT SCRIPT FOR TABLE MintingAccounts DATA TO 'MintingAccounts.script'");
stmt.execute("PERFORM EXPORT SCRIPT FOR TABLE TradeBotStates DATA TO 'TradeBotStates.script'");
LOGGER.info("Exported sensitive/node-local data: minting keys and trade bot states");
} catch (SQLException e) {
throw new DataException("Unable to export sensitive/node-local data from repository");
// Create the qortal-backup folder if it doesn't exist
Path backupPath = Paths.get("qortal-backup");
try {
Files.createDirectories(backupPath);
} catch (IOException e) {
LOGGER.info("Unable to create backup folder");
throw new DataException("Unable to create backup folder");
}
try {
// Load trade bot data
List<TradeBotData> allTradeBotData = this.getCrossChainRepository().getAllTradeBotData();
JSONArray allTradeBotDataJson = new JSONArray();
for (TradeBotData tradeBotData : allTradeBotData) {
JSONObject tradeBotDataJson = tradeBotData.toJson();
allTradeBotDataJson.put(tradeBotDataJson);
}
// We need to combine existing TradeBotStates data before overwriting
String fileName = "qortal-backup/TradeBotStates.json";
File tradeBotStatesBackupFile = new File(fileName);
if (tradeBotStatesBackupFile.exists()) {
String jsonString = new String(Files.readAllBytes(Paths.get(fileName)));
JSONArray allExistingTradeBotData = new JSONArray(jsonString);
Iterator<Object> iterator = allExistingTradeBotData.iterator();
while(iterator.hasNext()) {
JSONObject existingTradeBotData = (JSONObject)iterator.next();
String existingTradePrivateKey = (String) existingTradeBotData.get("tradePrivateKey");
// Check if we already have an entry for this trade
boolean found = allTradeBotData.stream().anyMatch(tradeBotData -> Base58.encode(tradeBotData.getTradePrivateKey()).equals(existingTradePrivateKey));
if (found == false)
// We need to add this to our list
allTradeBotDataJson.put(existingTradeBotData);
}
}
FileWriter writer = new FileWriter(fileName);
writer.write(allTradeBotDataJson.toString());
writer.close();
LOGGER.info("Exported sensitive/node-local data: trade bot states");
} catch (DataException | IOException e) {
throw new DataException("Unable to export trade bot states from repository");
}
}
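A small robustness note on the export above: the FileWriter is closed manually, so an exception thrown by write() would leak the handle. A minimal sketch of the same write using try-with-resources, reusing fileName and allTradeBotDataJson from the method body:
try (FileWriter writer = new FileWriter(fileName)) {
    writer.write(allTradeBotDataJson.toString());
}   // closed automatically, even if write() throws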
@Override
public void importDataFromFile(String filename) throws DataException {
try (Statement stmt = this.connection.createStatement()) {
LOGGER.info(() -> String.format("Importing data into repository from %s", filename));
String escapedFilename = stmt.enquoteLiteral(filename);
stmt.execute("PERFORM IMPORT SCRIPT DATA FROM " + escapedFilename + " STOP ON ERROR");
LOGGER.info(() -> String.format("Imported data into repository from %s", filename));
} catch (SQLException e) {
LOGGER.info(() -> String.format("Failed to import data into repository from %s: %s", filename, e.getMessage()));
throw new DataException("Unable to export sensitive/node-local data from repository: " + e.getMessage());
LOGGER.info(() -> String.format("Importing data into repository from %s", filename));
try {
String jsonString = new String(Files.readAllBytes(Paths.get(filename)));
JSONArray tradeBotDataToImport = new JSONArray(jsonString);
Iterator<Object> iterator = tradeBotDataToImport.iterator();
while(iterator.hasNext()) {
JSONObject tradeBotDataJson = (JSONObject)iterator.next();
TradeBotData tradeBotData = TradeBotData.fromJson(tradeBotDataJson);
this.getCrossChainRepository().save(tradeBotData);
}
} catch (IOException e) {
throw new DataException("Unable to import sensitive/node-local trade bot states to repository: " + e.getMessage());
}
LOGGER.info(() -> String.format("Imported trade bot states into repository from %s", filename));
}
@Override
@@ -681,7 +719,7 @@ public class HSQLDBRepository implements Repository {
/**
* Execute PreparedStatement and return changed row count.
*
* @param preparedStatement
* @param sql
* @param objects
* @return number of changed rows
* @throws SQLException
@@ -693,8 +731,8 @@ public class HSQLDBRepository implements Repository {
/**
* Execute batched PreparedStatement
*
* @param preparedStatement
* @param objects
* @param sql
* @param batchedObjects
* @return number of changed rows
* @throws SQLException
*/
@@ -818,7 +856,7 @@ public class HSQLDBRepository implements Repository {
*
* @param tableName
* @param whereClause
* @param objects
* @param batchedObjects
* @throws SQLException
*/
public int deleteBatch(String tableName, String whereClause, List<Object[]> batchedObjects) throws SQLException {
@@ -931,6 +969,8 @@ public class HSQLDBRepository implements Repository {
/** Logs other HSQLDB sessions then returns passed exception */
public SQLException examineException(SQLException e) {
// TODO: could log at DEBUG for deadlocks by checking RepositoryManager.isDeadlockRelated(e)?
LOGGER.error(() -> String.format("[Session %d] HSQLDB error: %s", this.sessionId, e.getMessage()), e);
logStatements();


@@ -14,11 +14,11 @@ import org.hsqldb.jdbc.HSQLDBPool;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryFactory;
import org.qortal.settings.Settings;
public class HSQLDBRepositoryFactory implements RepositoryFactory {
private static final Logger LOGGER = LogManager.getLogger(HSQLDBRepositoryFactory.class);
private static final int POOL_SIZE = 100;
/** Log getConnection() calls that take longer than this. (ms) */
private static final long SLOW_CONNECTION_THRESHOLD = 1000L;
@@ -57,7 +57,7 @@ public class HSQLDBRepositoryFactory implements RepositoryFactory {
HSQLDBRepository.attemptRecovery(connectionUrl);
}
this.connectionPool = new HSQLDBPool(POOL_SIZE);
this.connectionPool = new HSQLDBPool(Settings.getInstance().getRepositoryConnectionPoolSize());
this.connectionPool.setUrl(this.connectionUrl);
Properties properties = new Properties();


@@ -9,6 +9,7 @@ import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.List;
import java.util.Map;
@@ -1181,6 +1182,51 @@ public class HSQLDBTransactionRepository implements TransactionRepository {
}
}
@Override
public List<TransactionData> getUnconfirmedTransactions(EnumSet<TransactionType> excludedTxTypes) throws DataException {
StringBuilder sql = new StringBuilder(1024);
sql.append("SELECT signature FROM UnconfirmedTransactions ");
sql.append("JOIN Transactions USING (signature) ");
sql.append("WHERE type NOT IN (");
boolean firstTxType = true;
for (TransactionType txType : excludedTxTypes) {
if (firstTxType)
firstTxType = false;
else
sql.append(", ");
sql.append(txType.value);
}
sql.append(") ");
sql.append("ORDER BY created_when, signature");
List<TransactionData> transactions = new ArrayList<>();
// Find transactions with no corresponding row in BlockTransactions
try (ResultSet resultSet = this.repository.checkedExecute(sql.toString())) {
if (resultSet == null)
return transactions;
do {
byte[] signature = resultSet.getBytes(1);
TransactionData transactionData = this.fromSignature(signature);
if (transactionData == null)
// Something inconsistent with the repository
throw new DataException(String.format("Unable to fetch unconfirmed transaction %s from repository?", Base58.encode(signature)));
transactions.add(transactionData);
} while (resultSet.next());
return transactions;
} catch (SQLException | DataException e) {
throw new DataException("Unable to fetch unconfirmed transactions from repository", e);
}
}
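A caller-side sketch of the new overload, matching the way Transaction.getUnconfirmedTransactions() uses it later in this diff (repository is a placeholder). The builder above produces SQL of the form SELECT signature FROM UnconfirmedTransactions JOIN Transactions USING (signature) WHERE type NOT IN (...) ORDER BY created_when, signature:
EnumSet<TransactionType> excludedTxTypes = EnumSet.of(TransactionType.CHAT, TransactionType.PRESENCE);
List<TransactionData> unconfirmed = repository.getTransactionRepository().getUnconfirmedTransactions(excludedTxTypes);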
@Override
public void confirmTransaction(byte[] signature) throws DataException {
try {


@@ -52,7 +52,7 @@ public class Settings {
// UI servers
private int uiPort = 12388;
private String[] uiLocalServers = new String[] {
"localhost", "127.0.0.1", "172.24.1.1", "qor.tal"
"localhost", "127.0.0.1"
};
private String[] uiRemoteServers = new String[] {
"node1.qortal.org", "node2.qortal.org", "node3.qortal.org", "node4.qortal.org", "node5.qortal.org",
@@ -89,6 +89,8 @@ public class Settings {
private long repositoryCheckpointInterval = 60 * 60 * 1000L; // 1 hour (ms) default
/** Whether to show a notification when we perform repository 'checkpoint'. */
private boolean showCheckpointNotification = false;
/* How many blocks to cache locally. Defaulted to 10, which covers a typical Synchronizer request + a few spare */
private int blockCacheSize = 10;
/** How long to keep old, full, AT state data (ms). */
private long atStatesMaxLifetime = 2 * 7 * 24 * 60 * 60 * 1000L; // milliseconds
@@ -120,6 +122,15 @@ public class Settings {
private int maxNetworkThreadPoolSize = 20;
/** Maximum number of threads for network proof-of-work compute, used during handshaking. */
private int networkPoWComputePoolSize = 2;
/** Maximum number of retry attempts if a peer fails to respond with the requested data */
private int maxRetries = 2;
/** Minimum peer version number required in order to sync with them */
private String minPeerVersion = "1.5.0";
/** Whether to allow connections with peers below minPeerVersion
* If true, we won't sync with them but they can still sync with us, and will show in the peers list
* If false, sync will be blocked both ways, and they will not appear in the peers list */
private boolean allowConnectionsWithOlderPeerVersions = true;
// Which blockchains this node is running
private String blockchainConfig = null; // use default from resources
@@ -134,6 +145,8 @@ public class Settings {
private Long slowQueryThreshold = null;
/** Repository storage path. */
private String repositoryPath = "db";
/** Repository connection pool size. Needs to be a bit bigger than maxNetworkThreadPoolSize */
private int repositoryConnectionPoolSize = 100;
// Auto-update sources
private String[] autoUpdateRepos = new String[] {
@@ -361,6 +374,10 @@ public class Settings {
return this.maxTransactionTimestampFuture;
}
public int getBlockCacheSize() {
return this.blockCacheSize;
}
public boolean isTestNet() {
return this.isTestNet;
}
@@ -400,6 +417,12 @@ public class Settings {
return this.networkPoWComputePoolSize;
}
public int getMaxRetries() { return this.maxRetries; }
public String getMinPeerVersion() { return this.minPeerVersion; }
public boolean getAllowConnectionsWithOlderPeerVersions() { return this.allowConnectionsWithOlderPeerVersions; }
public String getBlockchainConfig() {
return this.blockchainConfig;
}
@@ -424,6 +447,10 @@ public class Settings {
return this.repositoryPath;
}
public int getRepositoryConnectionPoolSize() {
return this.repositoryConnectionPoolSize;
}
public boolean isAutoUpdateEnabled() {
return this.autoUpdateEnabled;
}
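The new fields above map onto settings.json entries. A hypothetical fragment, assuming the usual camelCase field-to-key mapping; every value shown is simply the default from the code above:
{
  "blockCacheSize": 10,
  "maxRetries": 2,
  "minPeerVersion": "1.5.0",
  "allowConnectionsWithOlderPeerVersions": true,
  "repositoryConnectionPoolSize": 100
}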


@@ -4,6 +4,7 @@ import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.EnumSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
@@ -605,7 +606,8 @@ public abstract class Transaction {
public static List<TransactionData> getUnconfirmedTransactions(Repository repository) throws DataException {
BlockData latestBlockData = repository.getBlockRepository().getLastBlock();
List<TransactionData> unconfirmedTransactions = repository.getTransactionRepository().getUnconfirmedTransactions();
EnumSet<TransactionType> excludedTxTypes = EnumSet.of(TransactionType.CHAT, TransactionType.PRESENCE);
List<TransactionData> unconfirmedTransactions = repository.getTransactionRepository().getUnconfirmedTransactions(excludedTxTypes);
unconfirmedTransactions.sort(getDataComparator());


@@ -326,24 +326,36 @@ public class BlockTransformer extends Transformer {
}
}
public static byte[] getMinterSignatureFromReference(byte[] blockReference) {
return Arrays.copyOf(blockReference, MINTER_SIGNATURE_LENGTH);
private static byte[] getReferenceBytesForMinterSignature(int blockHeight, byte[] reference) {
int newBlockSigTriggerHeight = BlockChain.getInstance().getNewBlockSigHeight();
return blockHeight >= newBlockSigTriggerHeight
// 'new' block sig uses all of previous block's signature
? reference
// 'old' block sig only uses first 64 bytes of previous block's signature
: Arrays.copyOf(reference, MINTER_SIGNATURE_LENGTH);
}
public static byte[] getBytesForMinterSignature(BlockData blockData) throws TransformationException {
byte[] minterSignature = getMinterSignatureFromReference(blockData.getReference());
public static byte[] getBytesForMinterSignature(BlockData blockData) {
byte[] referenceBytes = getReferenceBytesForMinterSignature(blockData.getHeight(), blockData.getReference());
return getBytesForMinterSignature(minterSignature, blockData.getMinterPublicKey(), blockData.getEncodedOnlineAccounts());
return getBytesForMinterSignature(referenceBytes, blockData.getMinterPublicKey(), blockData.getEncodedOnlineAccounts());
}
public static byte[] getBytesForMinterSignature(byte[] minterSignature, byte[] minterPublicKey, byte[] encodedOnlineAccounts) {
byte[] bytes = new byte[MINTER_SIGNATURE_LENGTH + MINTER_PUBLIC_KEY_LENGTH + encodedOnlineAccounts.length];
public static byte[] getBytesForMinterSignature(BlockData parentBlockData, byte[] minterPublicKey, byte[] encodedOnlineAccounts) {
byte[] referenceBytes = getReferenceBytesForMinterSignature(parentBlockData.getHeight() + 1, parentBlockData.getSignature());
System.arraycopy(minterSignature, 0, bytes, 0, MINTER_SIGNATURE_LENGTH);
return getBytesForMinterSignature(referenceBytes, minterPublicKey, encodedOnlineAccounts);
}
System.arraycopy(minterPublicKey, 0, bytes, MINTER_SIGNATURE_LENGTH, MINTER_PUBLIC_KEY_LENGTH);
private static byte[] getBytesForMinterSignature(byte[] referenceBytes, byte[] minterPublicKey, byte[] encodedOnlineAccounts) {
byte[] bytes = new byte[referenceBytes.length + MINTER_PUBLIC_KEY_LENGTH + encodedOnlineAccounts.length];
System.arraycopy(encodedOnlineAccounts, 0, bytes, MINTER_SIGNATURE_LENGTH + MINTER_PUBLIC_KEY_LENGTH, encodedOnlineAccounts.length);
System.arraycopy(referenceBytes, 0, bytes, 0, referenceBytes.length);
System.arraycopy(minterPublicKey, 0, bytes, referenceBytes.length, MINTER_PUBLIC_KEY_LENGTH);
System.arraycopy(encodedOnlineAccounts, 0, bytes, referenceBytes.length + MINTER_PUBLIC_KEY_LENGTH, encodedOnlineAccounts.length);
return bytes;
}

Binary file not shown.


@@ -48,7 +48,10 @@
"minutesPerBlock": 1
},
"featureTriggers": {
"atFindNextTransactionFix": 275000
"atFindNextTransactionFix": 275000,
"newBlockSigHeight": 320000,
"shareBinFix": 399000,
"calcChainWeightTimestamp": 1620579600000
},
"genesisInfo": {
"version": 4,


@@ -0,0 +1,72 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# Italian translation by Pabs 2021
# La modifica della lingua dell'UI è fatta nel file Settings.json
#
# "localeLang": "it",
# Si prega ricordare la virgola alla fine, se questo comando non è sull'ultima riga
ADDRESS_UNKNOWN = indirizzo account sconosciuto
BLOCKCHAIN_NEEDS_SYNC = blockchain deve prima sincronizzarsi
# Blocks
BLOCK_UNKNOWN = blocco sconosciuto
BTC_BALANCE_ISSUE = saldo Bitcoin insufficiente
BTC_NETWORK_ISSUE = Bitcoin/ElectrumX problema di rete
BTC_TOO_SOON = troppo presto per trasmettere transazione Bitcoin (tempo di blocco / tempo di blocco mediano)
CANNOT_MINT = l'account non può coniare
GROUP_UNKNOWN = gruppo sconosciuto
INVALID_ADDRESS = indirizzo non valido
# Assets
INVALID_ASSET_ID = identificazione risorsa non valida
INVALID_CRITERIA = criteri di ricerca non validi
INVALID_DATA = dati non validi
INVALID_HEIGHT = altezza blocco non valida
INVALID_NETWORK_ADDRESS = indirizzo di rete non valido
INVALID_ORDER_ID = identificazione di ordine di risorsa non valida
INVALID_PRIVATE_KEY = chiave privata non valida
INVALID_PUBLIC_KEY = chiave pubblica non valida
INVALID_REFERENCE = riferimento non valido
# Validation
INVALID_SIGNATURE = firma non valida
JSON = Impossibile analizzare il messaggio JSON
NAME_UNKNOWN = nome sconosciuto
NON_PRODUCTION = questa chiamata API non è consentita per i sistemi di produzione
NO_TIME_SYNC = nessuna sincronizzazione dell'orologio ancora
ORDER_UNKNOWN = identificazione di ordine di risorsa sconosciuta
PUBLIC_KEY_NOT_FOUND = chiave pubblica non trovata
REPOSITORY_ISSUE = errore del repositorio
# This one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = transazione non valida: %s (%s)
TRANSACTION_UNKNOWN = transazione sconosciuta
TRANSFORMATION_ERROR = non è stato possibile trasformare JSON in transazione
UNAUTHORIZED = Chiamata API non autorizzata


@@ -0,0 +1,46 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# SysTray pop-up menu
# Italian translation by Pabs 2021
APPLYING_UPDATE_AND_RESTARTING = Applicando aggiornamento automatico e riavviando...
AUTO_UPDATE = Aggiornamento automatico
BLOCK_HEIGHT = altezza
CHECK_TIME_ACCURACY = Controlla la precisione dell'ora
CONNECTING = Collegando
CONNECTION = connessione
CONNECTIONS = connessioni
CREATING_BACKUP_OF_DB_FILES = Creazione di backup dei file di database...
DB_BACKUP = Backup del database
DB_CHECKPOINT = Punto di controllo del database
EXIT = Uscita
MINTING_DISABLED = NON coniando
MINTING_ENABLED = \u2714 Coniando
# Nagging about lack of NTP time sync
NTP_NAG_CAPTION = L'orologio del computer è impreciso!
NTP_NAG_TEXT_UNIX = Installare servizio NTP per ottenere un orologio preciso.
NTP_NAG_TEXT_WINDOWS = Seleziona "Sincronizza orologio" dal menu per correggere.
OPEN_UI = Apri UI
PERFORMING_DB_CHECKPOINT = Salvataggio delle modifiche al database non salvate...
SYNCHRONIZE_CLOCK = Sincronizza orologio
SYNCHRONIZING_BLOCKCHAIN = Sincronizzando
SYNCHRONIZING_CLOCK = Sincronizzando orologio


@@ -0,0 +1,185 @@
# Italian translation by Pabs 2021
ACCOUNT_ALREADY_EXISTS = l'account gia esiste
ACCOUNT_CANNOT_REWARD_SHARE = l'account non può fare la condivisione di ricompensa
ALREADY_GROUP_ADMIN = è già amministratore del gruppo
ALREADY_GROUP_MEMBER = è già membro del gruppo
ALREADY_VOTED_FOR_THAT_OPTION = già votato per questa opzione
ASSET_ALREADY_EXISTS = risorsa già esistente
ASSET_DOES_NOT_EXIST = risorsa non esistente
ASSET_DOES_NOT_MATCH_AT = l'asset non corrisponde all'asset di AT
ASSET_NOT_SPENDABLE = la risorsa non è spendibile
AT_ALREADY_EXISTS = AT gia esiste
AT_IS_FINISHED = AT ha finito
AT_UNKNOWN = AT sconosciuto
BANNED_FROM_GROUP = divietato dal gruppo
BAN_EXISTS = il divieto esiste già
BAN_UNKNOWN = divieto sconosciuto
BUYER_ALREADY_OWNER = l'acquirente è già proprietario
CHAT = Le transazioni CHAT non sono mai valide per l'inclusione nei blocchi
CLOCK_NOT_SYNCED = orologio non sincronizzato
DUPLICATE_OPTION = opzione duplicata
GROUP_ALREADY_EXISTS = gruppo già esistente
GROUP_APPROVAL_DECIDED = approvazione di gruppo già decisa
GROUP_APPROVAL_NOT_REQUIRED = approvazione di gruppo non richiesto
GROUP_DOES_NOT_EXIST = gruppo non esiste
GROUP_ID_MISMATCH = identificazione di gruppo non corrispondente
GROUP_OWNER_CANNOT_LEAVE = il proprietario del gruppo non può lasciare il gruppo
HAVE_EQUALS_WANT = la risorsa avere è uguale a la risorsa volere
INCORRECT_NONCE = PoW nonce sbagliato
INSUFFICIENT_FEE = tariffa insufficiente
INVALID_ADDRESS = indirizzo non valido
INVALID_AMOUNT = importo non valido
INVALID_ASSET_OWNER = proprietario della risorsa non valido
INVALID_AT_TRANSACTION = transazione AT non valida
INVALID_AT_TYPE_LENGTH = lunghezza di "tipo" AT non valida
INVALID_CREATION_BYTES = byte di creazione non validi
INVALID_DATA_LENGTH = lunghezza di dati non valida
INVALID_DESCRIPTION_LENGTH = lunghezza della descrizione non valida
INVALID_GROUP_APPROVAL_THRESHOLD = soglia di approvazione del gruppo non valida
INVALID_GROUP_BLOCK_DELAY = ritardo del blocco di approvazione del gruppo non valido
INVALID_GROUP_ID = identificazione di gruppo non valida
INVALID_GROUP_OWNER = proprietario di gruppo non valido
INVALID_LIFETIME = durata della vita non valida
INVALID_NAME_LENGTH = lunghezza del nome non valida
INVALID_NAME_OWNER = proprietario del nome non valido
INVALID_OPTIONS_COUNT = conteggio di opzioni non validi
INVALID_OPTION_LENGTH = lunghezza di opzioni non valida
INVALID_ORDER_CREATOR = creatore dell'ordine non valido
INVALID_PAYMENTS_COUNT = conteggio pagamenti non validi
INVALID_PUBLIC_KEY = chiave pubblica non valida
INVALID_QUANTITY = quantità non valida
INVALID_REFERENCE = riferimento non valido
INVALID_RETURN = ritorno non valido
INVALID_REWARD_SHARE_PERCENT = percentuale condivisione di ricompensa non valida
INVALID_SELLER = venditore non valido
INVALID_TAGS_LENGTH = lunghezza dei "tag" non valida
INVALID_TX_GROUP_ID = identificazione di gruppo di transazioni non valida
INVALID_VALUE_LENGTH = lunghezza "valore" non valida
INVITE_UNKNOWN = invito di gruppo sconosciuto
JOIN_REQUEST_EXISTS = la richiesta di iscrizione al gruppo già esiste
MAXIMUM_REWARD_SHARES = numero massimo di condivisione di ricompensa raggiunto per l'account
MISSING_CREATOR = creatore mancante
MULTIPLE_NAMES_FORBIDDEN = è vietata la registrazione di multipli nomi per account
NAME_ALREADY_FOR_SALE = nome già in vendita
NAME_ALREADY_REGISTERED = nome già registrato
NAME_DOES_NOT_EXIST = il nome non esiste
NAME_NOT_FOR_SALE = il nome non è in vendita
NAME_NOT_NORMALIZED = il nome non è in forma "normalizzata" Unicode
NEGATIVE_AMOUNT = importo non valido / negativo
NEGATIVE_FEE = tariffa non valida / negativa
NEGATIVE_PRICE = prezzo non valido / negativo
NOT_GROUP_ADMIN = l'account non è un amministratore di gruppo
NOT_GROUP_MEMBER = l'account non è un membro del gruppo
NOT_MINTING_ACCOUNT = l'account non può coniare
NOT_YET_RELEASED = funzione non ancora rilasciata
NO_BALANCE = equilibrio insufficiente
NO_BLOCKCHAIN_LOCK = nodo di blockchain attualmente occupato
NO_FLAG_PERMISSION = l'account non dispone di questa autorizzazione
OK = OK
ORDER_ALREADY_CLOSED = l'ordine di scambio di risorsa è già chiuso
ORDER_DOES_NOT_EXIST = l'ordine di scambio di risorsa non esiste
POLL_ALREADY_EXISTS = il sondaggio già esiste
POLL_DOES_NOT_EXIST = il sondaggio non esiste
POLL_OPTION_DOES_NOT_EXIST = le opzioni di sondaggio non esistono
PUBLIC_KEY_UNKNOWN = chiave pubblica sconosciuta
REWARD_SHARE_UNKNOWN = condivisione di ricompensa sconosciuta
SELF_SHARE_EXISTS = condivisione di sé (condivisione di ricompensa) già esiste
TIMESTAMP_TOO_NEW = timestamp troppo nuovo
TIMESTAMP_TOO_OLD = timestamp troppo vecchio
TOO_MANY_UNCONFIRMED = l'account ha troppe transazioni non confermate in sospeso
TRANSACTION_ALREADY_CONFIRMED = la transazione è già confermata
TRANSACTION_ALREADY_EXISTS = la transazione già esiste
TRANSACTION_UNKNOWN = transazione sconosciuta
TX_GROUP_ID_MISMATCH = identificazione di gruppo della transazione non corrisponde


@@ -7,6 +7,7 @@ import java.util.List;
import java.util.stream.Collectors;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.block.Block;
@@ -83,6 +84,7 @@ public class BlockTests extends Common {
}
@Test
@Ignore(value = "Doesn't work, to be fixed later")
public void testBlockSerialization() throws DataException, TransformationException {
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount signingAccount = Common.getTestAccount(repository, "alice");


@@ -3,12 +3,15 @@ package org.qortal.test;
import static org.junit.Assert.*;
import java.math.BigInteger;
import java.text.DecimalFormat;
import java.text.NumberFormat;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import org.qortal.account.Account;
import org.qortal.block.Block;
import org.qortal.block.BlockChain;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
@@ -17,12 +20,21 @@ import org.qortal.test.common.Common;
import org.qortal.test.common.TestAccount;
import org.qortal.transform.Transformer;
import org.qortal.transform.block.BlockTransformer;
import org.qortal.utils.NTP;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
public class ChainWeightTests extends Common {
private static final Random RANDOM = new Random();
private static final NumberFormat FORMATTER = new DecimalFormat("0.###E0");
@BeforeClass
public static void beforeClass() {
// We need this so that NTP.getTime() in Block.calcChainWeight() doesn't return null, causing NPE
NTP.setFixedOffset(0L);
}
@Before
public void beforeTest() throws DataException {
@@ -89,7 +101,97 @@ public class ChainWeightTests extends Common {
}
}
// Check that a longer chain beats a shorter chain
// Demonstrates that typical key distance ranges from roughly 1E75 to 1E77
@Test
public void testKeyDistances() {
byte[] parentMinterKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
byte[] testKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
for (int i = 0; i < 50; ++i) {
int parentHeight = RANDOM.nextInt(50000);
RANDOM.nextBytes(parentMinterKey);
RANDOM.nextBytes(testKey);
int minterLevel = RANDOM.nextInt(10) + 1;
BigInteger keyDistance = Block.calcKeyDistance(parentHeight, parentMinterKey, testKey, minterLevel);
System.out.println(String.format("Parent height: %d, minter level: %d, distance: %s",
parentHeight,
minterLevel,
FORMATTER.format(keyDistance)));
}
}
// If typical key distance ranges from 1E75 to 1E77
// then we want lots of online accounts to push a 1E75 distance
// towards 1E77 so that it competes with a 1E77 key that has hardly any online accounts
// 1E75 is approx. 2**249 so maybe that's a good value for Block.ACCOUNTS_COUNT_SHIFT
@Test
public void testMoreAccountsVersusKeyDistance() throws DataException {
BigInteger minimumBetterKeyDistance = BigInteger.TEN.pow(77);
BigInteger maximumWorseKeyDistance = BigInteger.TEN.pow(75);
try (final Repository repository = RepositoryManager.getRepository()) {
final byte[] parentMinterKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
TestAccount betterAccount = Common.getTestAccount(repository, "bob-reward-share");
byte[] betterKey = betterAccount.getPublicKey();
int betterMinterLevel = Account.getRewardShareEffectiveMintingLevel(repository, betterKey);
TestAccount worseAccount = Common.getTestAccount(repository, "dilbert-reward-share");
byte[] worseKey = worseAccount.getPublicKey();
int worseMinterLevel = Account.getRewardShareEffectiveMintingLevel(repository, worseKey);
// This is to check that the hard-coded keys ARE actually better/worse as expected, before moving on to testing more online accounts
BigInteger betterKeyDistance;
BigInteger worseKeyDistance;
int parentHeight = 0;
do {
++parentHeight;
betterKeyDistance = Block.calcKeyDistance(parentHeight, parentMinterKey, betterKey, betterMinterLevel);
worseKeyDistance = Block.calcKeyDistance(parentHeight, parentMinterKey, worseKey, worseMinterLevel);
} while (betterKeyDistance.compareTo(minimumBetterKeyDistance) < 0 || worseKeyDistance.compareTo(maximumWorseKeyDistance) > 0);
System.out.println(String.format("Parent height: %d, better key distance: %s, worse key distance: %s",
parentHeight,
FORMATTER.format(betterKeyDistance),
FORMATTER.format(worseKeyDistance)));
for (int accountsCountShift = 244; accountsCountShift <= 256; accountsCountShift += 2) {
for (int worseAccountsCount = 1; worseAccountsCount <= 101; worseAccountsCount += 25) {
for (int betterAccountsCount = 1; betterAccountsCount <= 1001; betterAccountsCount += 250) {
BlockSummaryData worseKeyBlockSummary = new BlockSummaryData(parentHeight + 1, null, worseKey, betterAccountsCount);
BlockSummaryData betterKeyBlockSummary = new BlockSummaryData(parentHeight + 1, null, betterKey, worseAccountsCount);
populateBlockSummaryMinterLevel(repository, worseKeyBlockSummary);
populateBlockSummaryMinterLevel(repository, betterKeyBlockSummary);
BigInteger worseKeyBlockWeight = calcBlockWeight(parentHeight, parentMinterKey, worseKeyBlockSummary, accountsCountShift);
BigInteger betterKeyBlockWeight = calcBlockWeight(parentHeight, parentMinterKey, betterKeyBlockSummary, accountsCountShift);
System.out.println(String.format("Shift: %d, worse key: %d accounts, %s diff; better key: %d accounts: %s diff; winner: %s",
accountsCountShift,
betterAccountsCount, // used with worseKey
FORMATTER.format(worseKeyBlockWeight),
worseAccountsCount, // used with betterKey
FORMATTER.format(betterKeyBlockWeight),
worseKeyBlockWeight.compareTo(betterKeyBlockWeight) > 0 ? "worse key/better accounts" : "better key/worse accounts"
));
}
}
System.out.println();
}
}
}
private static BigInteger calcBlockWeight(int parentHeight, byte[] parentBlockSignature, BlockSummaryData blockSummaryData, int accountsCountShift) {
BigInteger keyDistance = Block.calcKeyDistance(parentHeight, parentBlockSignature, blockSummaryData.getMinterPublicKey(), blockSummaryData.getMinterLevel());
return BigInteger.valueOf(blockSummaryData.getOnlineAccountsCount()).shiftLeft(accountsCountShift).add(keyDistance);
}
// Check that a longer chain has same weight as shorter/truncated chain
@Test
public void testLongerChain() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -97,18 +199,20 @@ public class ChainWeightTests extends Common {
BlockSummaryData commonBlockSummary = genBlockSummary(repository, commonBlockHeight);
byte[] commonBlockGeneratorKey = commonBlockSummary.getMinterPublicKey();
List<BlockSummaryData> shorterChain = genBlockSummaries(repository, 3, commonBlockSummary);
List<BlockSummaryData> longerChain = genBlockSummaries(repository, shorterChain.size() + 1, commonBlockSummary);
populateBlockSummariesMinterLevels(repository, shorterChain);
List<BlockSummaryData> longerChain = genBlockSummaries(repository, 6, commonBlockSummary);
populateBlockSummariesMinterLevels(repository, longerChain);
List<BlockSummaryData> shorterChain = longerChain.subList(0, longerChain.size() / 2);
final int mutualHeight = commonBlockHeight - 1 + Math.min(shorterChain.size(), longerChain.size());
BigInteger shorterChainWeight = Block.calcChainWeight(commonBlockHeight, commonBlockGeneratorKey, shorterChain, mutualHeight);
BigInteger longerChainWeight = Block.calcChainWeight(commonBlockHeight, commonBlockGeneratorKey, longerChain, mutualHeight);
assertEquals("longer chain should have greater weight", 1, longerChainWeight.compareTo(shorterChainWeight));
if (NTP.getTime() >= BlockChain.getInstance().getCalcChainWeightTimestamp())
assertEquals("longer chain should have same weight", 0, longerChainWeight.compareTo(shorterChainWeight));
else
assertEquals("longer chain should have greater weight", 1, longerChainWeight.compareTo(shorterChainWeight));
}
}


@@ -6,12 +6,12 @@ import org.qortal.block.BlockChain;
import org.qortal.crypto.BouncyCastle25519;
import org.qortal.crypto.Crypto;
import org.qortal.test.common.Common;
import org.qortal.utils.Base58;
import static org.junit.Assert.*;
import java.security.SecureRandom;
import org.bitcoinj.core.Base58;
import org.bouncycastle.crypto.agreement.X25519Agreement;
import org.bouncycastle.crypto.params.Ed25519PrivateKeyParameters;
import org.bouncycastle.crypto.params.Ed25519PublicKeyParameters;


@@ -2,10 +2,12 @@ package org.qortal.test;
import java.awt.TrayIcon.MessageType;
import org.junit.Ignore;
import org.junit.Test;
import org.qortal.gui.SplashFrame;
import org.qortal.gui.SysTray;
@Ignore
public class GuiTests {
@Test


@@ -1,5 +1,6 @@
package org.qortal.test;
import org.junit.Ignore;
import org.junit.Test;
import org.qortal.crypto.MemoryPoW;
@@ -7,6 +8,7 @@ import static org.junit.Assert.*;
import java.util.Random;
@Ignore
public class MemoryPoWTests {
private static final int workBufferLength = 8 * 1024 * 1024;


@@ -1,5 +1,6 @@
package org.qortal.test;
import org.junit.Ignore;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.data.transaction.TransactionData;
@@ -37,6 +38,7 @@ public class SerializationTests extends Common {
}
@Test
@Ignore(value = "Doesn't work, to be fixed later")
public void testTransactions() throws DataException, TransformationException {
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount signingAccount = Common.getTestAccount(repository, "alice");


@@ -2,6 +2,7 @@ package org.qortal.test;
import org.junit.After;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
import org.qortal.account.Account;
import org.qortal.account.PrivateKeyAccount;
@@ -30,6 +31,7 @@ import static org.junit.Assert.*;
import java.util.List;
import java.util.Random;
@Ignore(value = "Doesn't work, to be fixed later")
public class TransferPrivsTests extends Common {
private static List<Integer> cumulativeBlocksByLevel;


@@ -5,6 +5,7 @@ import static org.junit.Assert.*;
import java.util.Collections;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
import org.qortal.api.resource.AddressesResource;
import org.qortal.test.common.ApiCommon;
@@ -24,6 +25,7 @@ public class AddressesApiTests extends ApiCommon {
}
@Test
@Ignore(value = "Doesn't work, to be fixed later")
public void testGetOnlineAccounts() {
assertNotNull(this.addressesResource.getOnlineAccounts());
}


@@ -3,7 +3,6 @@ package org.qortal.test.apps;
import java.math.BigDecimal;
import java.security.Security;
import org.bitcoinj.core.Base58;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import org.qortal.block.BlockChain;
@@ -17,6 +16,7 @@ import org.qortal.repository.RepositoryManager;
import org.qortal.repository.hsqldb.HSQLDBRepositoryFactory;
import org.qortal.settings.Settings;
import org.qortal.transform.block.BlockTransformer;
import org.qortal.utils.Base58;
import org.roaringbitmap.IntIterator;
import io.druid.extendedset.intset.ConciseSet;


@@ -4,6 +4,7 @@ import static org.junit.Assert.*;
import org.junit.After;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
import org.qortal.crosschain.Bitcoin;
import org.qortal.crosschain.ForeignBlockchainException;
@@ -43,6 +44,7 @@ public class HtlcTests extends Common {
}
@Test
@Ignore(value = "Doesn't work, to be fixed later")
public void testHtlcSecretCaching() throws ForeignBlockchainException {
String p2shAddress = "2N8WCg52ULCtDSMjkgVTm5mtPdCsUptkHWE";
byte[] expectedSecret = "This string is exactly 32 bytes!".getBytes();


@@ -8,6 +8,7 @@ import org.bitcoinj.core.Transaction;
import org.bitcoinj.store.BlockStoreException;
import org.junit.After;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
import org.qortal.crosschain.ForeignBlockchainException;
import org.qortal.crosschain.Litecoin;
@@ -50,6 +51,7 @@ public class LitecoinTests extends Common {
}
@Test
@Ignore(value = "Doesn't work, to be fixed later")
public void testFindHtlcSecret() throws ForeignBlockchainException {
// This actually exists on TEST3 but can take a while to fetch
String p2shAddress = "2N8WCg52ULCtDSMjkgVTm5mtPdCsUptkHWE";


@@ -8,11 +8,7 @@ import java.util.stream.Collectors;
import org.bitcoinj.core.AddressFormatException;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import org.qortal.crosschain.Bitcoin;
import org.qortal.crosschain.Bitcoiny;
import org.qortal.crosschain.BitcoinyTransaction;
import org.qortal.crosschain.ForeignBlockchainException;
import org.qortal.crosschain.Litecoin;
import org.qortal.crosschain.*;
import org.qortal.settings.Settings;
public class GetWalletTransactions {
@@ -69,7 +65,7 @@ public class GetWalletTransactions {
System.out.println(String.format("Using %s", bitcoiny.getBlockchainProvider().getNetId()));
// Grab all outputs from transaction
List<BitcoinyTransaction> transactions = null;
List<SimpleTransaction> transactions = null;
try {
transactions = bitcoiny.getWalletTransactions(key58);
} catch (ForeignBlockchainException e) {
@@ -79,7 +75,7 @@ public class GetWalletTransactions {
System.out.println(String.format("Found %d transaction%s", transactions.size(), (transactions.size() != 1 ? "s" : "")));
for (BitcoinyTransaction transaction : transactions.stream().sorted(Comparator.comparingInt(t -> t.timestamp)).collect(Collectors.toList()))
for (SimpleTransaction transaction : transactions.stream().sorted(Comparator.comparingInt(SimpleTransaction::getTimestamp)).collect(Collectors.toList()))
System.out.println(String.format("%s", transaction));
}


@@ -7,7 +7,8 @@ import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.bitcoinj.core.Base58;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
@@ -25,9 +26,10 @@ import org.qortal.test.common.BlockUtils;
import org.qortal.test.common.Common;
import org.qortal.test.common.TestAccount;
import org.qortal.utils.Amounts;
import org.qortal.utils.Base58;
public class RewardTests extends Common {
private static final Logger LOGGER = LogManager.getLogger(RewardTests.class);
@Before
public void beforeTest() throws DataException {
Common.useDefaultSettings();
@@ -130,19 +132,19 @@ public class RewardTests extends Common {
/*
* Example:
*
*
* Block reward is 100 QORT, QORA-holders' share is 0.20 (20%) = 20 QORT
*
*
* We hold 100 QORA
* Someone else holds 28 QORA
* Total QORA held: 128 QORA
*
*
* Our portion of that is 100 QORA / 128 QORA * 20 QORT = 15.625 QORT
*
*
* QORA holders earn at most 1 QORT per 250 QORA held.
*
*
* So we can earn at most 100 QORA / 250 QORAperQORT = 0.4 QORT
*
*
* Thus our block earning should be capped to 0.4 QORT.
*/
@@ -289,7 +291,7 @@ public class RewardTests extends Common {
* Dilbert is only account 'online'.
* No founders online.
* Some legacy QORA holders.
*
*
* So Dilbert should receive 100% - legacy QORA holder's share.
*/
@@ -336,4 +338,462 @@ public class RewardTests extends Common {
}
}
}
/** Test rewards for level 1 and 2 accounts both pre and post the shareBinFix, including orphaning back through the feature trigger block */
@Test
public void testLevel1And2Rewards() throws DataException {
Common.useSettings("test-settings-v2-reward-levels.json");
try (final Repository repository = RepositoryManager.getRepository()) {
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
// Alice self share online
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
byte[] chloeRewardSharePrivateKey;
// Bob self-share NOT online
// Chloe self share online
try {
chloeRewardSharePrivateKey = AccountUtils.rewardShare(repository, "chloe", "chloe", 0);
} catch (IllegalArgumentException ex) {
LOGGER.error("FAILED {}", ex.getLocalizedMessage(), ex);
throw ex;
}
PrivateKeyAccount chloeRewardShareAccount = new PrivateKeyAccount(repository, chloeRewardSharePrivateKey);
mintingAndOnlineAccounts.add(chloeRewardShareAccount);
// Dilbert self share online
byte[] dilbertRewardSharePrivateKey = AccountUtils.rewardShare(repository, "dilbert", "dilbert", 0);
PrivateKeyAccount dilbertRewardShareAccount = new PrivateKeyAccount(repository, dilbertRewardSharePrivateKey);
mintingAndOnlineAccounts.add(dilbertRewardShareAccount);
// Mint a couple of blocks so that we are able to orphan them later
for (int i=0; i<2; i++)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure that the levels are as we expect
assertEquals(1, (int) Common.getTestAccount(repository, "alice").getLevel());
assertEquals(1, (int) Common.getTestAccount(repository, "bob").getLevel());
assertEquals(1, (int) Common.getTestAccount(repository, "chloe").getLevel());
assertEquals(2, (int) Common.getTestAccount(repository, "dilbert").getLevel());
// Ensure that only Alice is a founder
assertEquals(1, getFlags(repository, "alice"));
assertEquals(0, getFlags(repository, "bob"));
assertEquals(0, getFlags(repository, "chloe"));
assertEquals(0, getFlags(repository, "dilbert"));
// Now that everyone is at level 1 or 2, we can capture initial balances
Map<String, Map<Long, Long>> initialBalances = AccountUtils.getBalances(repository, Asset.QORT, Asset.LEGACY_QORA, Asset.QORT_FROM_QORA);
final long aliceInitialBalance = initialBalances.get("alice").get(Asset.QORT);
final long bobInitialBalance = initialBalances.get("bob").get(Asset.QORT);
final long chloeInitialBalance = initialBalances.get("chloe").get(Asset.QORT);
final long dilbertInitialBalance = initialBalances.get("dilbert").get(Asset.QORT);
// Mint a block
final long blockReward = BlockUtils.getNextBlockReward(repository);
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure we are at the correct height and block reward value
assertEquals(6, (int) repository.getBlockRepository().getLastBlock().getHeight());
assertEquals(10000000000L, blockReward);
/*
* Alice, Chloe, and Dilbert are 'online'. Bob is offline.
* Chloe is level 1, Dilbert is level 2.
* One founder online (Alice, who is also level 1).
* No legacy QORA holders.
*
* Chloe and Dilbert should receive equal shares of the 5% block reward for Level 1 and 2
* Alice should receive the remainder (95%)
*/
// We are after the shareBinFix feature trigger, so we expect level 1 and 2 to share the same reward (5%)
final int level1And2SharePercent = 5_00; // 5%
final long level1And2ShareAmount = (blockReward * level1And2SharePercent) / 100L / 100L;
final long expectedReward = level1And2ShareAmount / 2; // The reward is split between Chloe and Dilbert
final long expectedFounderReward = blockReward - level1And2ShareAmount; // Alice should receive the remainder
// Validate the balances to ensure that the correct post-shareBinFix distribution is being applied
assertEquals(500000000, level1And2ShareAmount);
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance+expectedFounderReward);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance); // Bob not online so his balance remains the same
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance+expectedReward);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance+expectedReward);
// Now orphan the latest block. This brings us to the threshold of the shareBinFix feature trigger.
BlockUtils.orphanBlocks(repository, 1);
assertEquals(5, (int) repository.getBlockRepository().getLastBlock().getHeight());
// Ensure the latest post-fix block rewards have been subtracted and they have returned to their initial values
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance); // Bob not online so his balance remains the same
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance);
// Orphan another block. This time the orphaned block is prior to the shareBinFix feature trigger.
BlockUtils.orphanBlocks(repository, 1);
assertEquals(4, (int) repository.getBlockRepository().getLastBlock().getHeight());
// Prior to the fix, the levels were incorrectly grouped
// Chloe should receive 100% of the level 1 reward, and Dilbert should receive 100% of the level 2+3 reward
final int level1SharePercent = 5_00; // 5%
final int level2And3SharePercent = 10_00; // 10%
final long level1ShareAmountBeforeFix = (blockReward * level1SharePercent) / 100L / 100L;
final long level2And3ShareAmountBeforeFix = (blockReward * level2And3SharePercent) / 100L / 100L;
final long expectedFounderRewardBeforeFix = blockReward - level1ShareAmountBeforeFix - level2And3ShareAmountBeforeFix; // Alice should receive the remainder
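// Worked example of the pre-fix amounts (a sketch, using the same 10000000000 block reward):
// level1ShareAmountBeforeFix     = 10000000000 * 500 / 100 / 100  = 500000000  (all to Chloe)
// level2And3ShareAmountBeforeFix = 10000000000 * 1000 / 100 / 100 = 1000000000 (all to Dilbert)
// expectedFounderRewardBeforeFix = 10000000000 - 500000000 - 1000000000 = 8500000000 (Alice)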
// Validate the share amounts and balances
assertEquals(500000000, level1ShareAmountBeforeFix);
assertEquals(1000000000, level2And3ShareAmountBeforeFix);
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance-expectedFounderRewardBeforeFix);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance); // Bob not online so his balance remains the same
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance-level1ShareAmountBeforeFix);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance-level2And3ShareAmountBeforeFix);
// Orphan the latest block one last time
BlockUtils.orphanBlocks(repository, 1);
assertEquals(3, (int) repository.getBlockRepository().getLastBlock().getHeight());
// Validate balances
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance-(expectedFounderRewardBeforeFix*2));
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance); // Bob not online so his balance remains the same
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance-(level1ShareAmountBeforeFix*2));
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance-(level2And3ShareAmountBeforeFix*2));
}
}
/** Test rewards for level 3 and 4 accounts */
@Test
public void testLevel3And4Rewards() throws DataException {
Common.useSettings("test-settings-v2-reward-levels.json");
try (final Repository repository = RepositoryManager.getRepository()) {
List<Integer> cumulativeBlocksByLevel = BlockChain.getInstance().getCumulativeBlocksByLevel();
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
// Alice self share online
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Bob self-share online
byte[] bobRewardSharePrivateKey = AccountUtils.rewardShare(repository, "bob", "bob", 0);
PrivateKeyAccount bobRewardShareAccount = new PrivateKeyAccount(repository, bobRewardSharePrivateKey);
mintingAndOnlineAccounts.add(bobRewardShareAccount);
// Chloe self share online
byte[] chloeRewardSharePrivateKey = AccountUtils.rewardShare(repository, "chloe", "chloe", 0);
PrivateKeyAccount chloeRewardShareAccount = new PrivateKeyAccount(repository, chloeRewardSharePrivateKey);
mintingAndOnlineAccounts.add(chloeRewardShareAccount);
// Dilbert self share online
byte[] dilbertRewardSharePrivateKey = AccountUtils.rewardShare(repository, "dilbert", "dilbert", 0);
PrivateKeyAccount dilbertRewardShareAccount = new PrivateKeyAccount(repository, dilbertRewardSharePrivateKey);
mintingAndOnlineAccounts.add(dilbertRewardShareAccount);
// Mint enough blocks to bump testAccount levels to 3 and 4
final int minterBlocksNeeded = cumulativeBlocksByLevel.get(4) - 20; // 20 blocks before level 4, so that the test accounts reach the correct levels
for (int bc = 0; bc < minterBlocksNeeded; ++bc)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure that the levels are as we expect
assertEquals(3, (int) Common.getTestAccount(repository, "alice").getLevel());
assertEquals(3, (int) Common.getTestAccount(repository, "bob").getLevel());
assertEquals(3, (int) Common.getTestAccount(repository, "chloe").getLevel());
assertEquals(4, (int) Common.getTestAccount(repository, "dilbert").getLevel());
// Now that everyone is at level 3 or 4, we can capture initial balances
Map<String, Map<Long, Long>> initialBalances = AccountUtils.getBalances(repository, Asset.QORT, Asset.LEGACY_QORA, Asset.QORT_FROM_QORA);
final long aliceInitialBalance = initialBalances.get("alice").get(Asset.QORT);
final long bobInitialBalance = initialBalances.get("bob").get(Asset.QORT);
final long chloeInitialBalance = initialBalances.get("chloe").get(Asset.QORT);
final long dilbertInitialBalance = initialBalances.get("dilbert").get(Asset.QORT);
// Mint a block
final long blockReward = BlockUtils.getNextBlockReward(repository);
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure we are using the correct block reward value
assertEquals(100000000L, blockReward);
/*
* Alice, Bob, Chloe, and Dilbert are 'online'.
* Bob and Chloe are level 3; Dilbert is level 4.
* One founder online (Alice, who is also level 3).
* No legacy QORA holders.
*
* Bob, Chloe, and Dilbert should receive equal shares of the 10% block reward for level 3 and 4
* Alice should receive the remainder (90%)
*/
// We are after the shareBinFix feature trigger, so we expect level 3 and 4 to share the same reward (10%)
final int level3And4SharePercent = 10_00; // 10%
final long level3And4ShareAmount = (blockReward * level3And4SharePercent) / 100L / 100L;
final long expectedReward = level3And4ShareAmount / 3; // The reward is split between Bob, Chloe, and Dilbert
final long expectedFounderReward = blockReward - level3And4ShareAmount; // Alice should receive the remainder
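// Worked example (a sketch based on this test's config):
// blockReward           = 100000000 (1 QORT, as asserted above)
// level3And4ShareAmount = 100000000 * 1000 / 100 / 100 = 10000000 (0.1 QORT)
// expectedReward        = 10000000 / 3 = 3333333 (integer division)
// expectedFounderReward = 100000000 - 10000000 = 90000000 (Alice)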
// Validate the balances to ensure that the correct post-shareBinFix distribution is being applied
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance+expectedFounderReward);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance+expectedReward);
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance+expectedReward);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance+expectedReward);
}
}
/** Test rewards for level 5 and 6 accounts */
@Test
public void testLevel5And6Rewards() throws DataException {
Common.useSettings("test-settings-v2-reward-levels.json");
try (final Repository repository = RepositoryManager.getRepository()) {
List<Integer> cumulativeBlocksByLevel = BlockChain.getInstance().getCumulativeBlocksByLevel();
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
// Alice self share online
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Bob self-share not initially online
// Chloe self share online
byte[] chloeRewardSharePrivateKey = AccountUtils.rewardShare(repository, "chloe", "chloe", 0);
PrivateKeyAccount chloeRewardShareAccount = new PrivateKeyAccount(repository, chloeRewardSharePrivateKey);
mintingAndOnlineAccounts.add(chloeRewardShareAccount);
// Dilbert self share online
byte[] dilbertRewardSharePrivateKey = AccountUtils.rewardShare(repository, "dilbert", "dilbert", 0);
PrivateKeyAccount dilbertRewardShareAccount = new PrivateKeyAccount(repository, dilbertRewardSharePrivateKey);
mintingAndOnlineAccounts.add(dilbertRewardShareAccount);
// Mint enough blocks to bump testAccount levels to 5 and 6
final int minterBlocksNeeded = cumulativeBlocksByLevel.get(6) - 20; // 20 blocks before level 6, so that the test accounts reach the correct levels
for (int bc = 0; bc < minterBlocksNeeded; ++bc)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Bob self-share now comes online
byte[] bobRewardSharePrivateKey = AccountUtils.rewardShare(repository, "bob", "bob", 0);
PrivateKeyAccount bobRewardShareAccount = new PrivateKeyAccount(repository, bobRewardSharePrivateKey);
mintingAndOnlineAccounts.add(bobRewardShareAccount);
// Ensure that the levels are as we expect
assertEquals(5, (int) Common.getTestAccount(repository, "alice").getLevel());
assertEquals(1, (int) Common.getTestAccount(repository, "bob").getLevel());
assertEquals(5, (int) Common.getTestAccount(repository, "chloe").getLevel());
assertEquals(6, (int) Common.getTestAccount(repository, "dilbert").getLevel());
// Now that everyone is at level 5 or 6 (except Bob who has only just started minting, so is at level 1), we can capture initial balances
Map<String, Map<Long, Long>> initialBalances = AccountUtils.getBalances(repository, Asset.QORT, Asset.LEGACY_QORA, Asset.QORT_FROM_QORA);
final long aliceInitialBalance = initialBalances.get("alice").get(Asset.QORT);
final long bobInitialBalance = initialBalances.get("bob").get(Asset.QORT);
final long chloeInitialBalance = initialBalances.get("chloe").get(Asset.QORT);
final long dilbertInitialBalance = initialBalances.get("dilbert").get(Asset.QORT);
// Mint a block
final long blockReward = BlockUtils.getNextBlockReward(repository);
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure we are using the correct block reward value
assertEquals(100000000L, blockReward);
/*
* Alice, Bob, Chloe, and Dilbert are 'online'.
* Bob is level 1; Chloe is level 5; Dilbert is level 6.
* One founder online (Alice, who is also level 5).
* No legacy QORA holders.
*
* Chloe and Dilbert should receive equal shares of the 15% block reward for level 5 and 6
* Bob should receive all of the level 1 and 2 reward (5%)
* Alice should receive the remainder (80%)
*/
// We are after the shareBinFix feature trigger, so we expect level 5 and 6 to share the same reward (15%)
final int level1And2SharePercent = 5_00; // 5%
final int level5And6SharePercent = 15_00; // 15%
final long level1And2ShareAmount = (blockReward * level1And2SharePercent) / 100L / 100L;
final long level5And6ShareAmount = (blockReward * level5And6SharePercent) / 100L / 100L;
final long expectedLevel1And2Reward = level1And2ShareAmount; // The reward is given entirely to Bob
final long expectedLevel5And6Reward = level5And6ShareAmount / 2; // The reward is split between Chloe and Dilbert
final long expectedFounderReward = blockReward - level1And2ShareAmount - level5And6ShareAmount; // Alice should receive the remainder
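// Worked example (a sketch based on this test's config):
// level1And2ShareAmount = 100000000 * 500 / 100 / 100  = 5000000  (all to Bob)
// level5And6ShareAmount = 100000000 * 1500 / 100 / 100 = 15000000 (7500000 each for Chloe and Dilbert)
// expectedFounderReward = 100000000 - 5000000 - 15000000 = 80000000 (Alice)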
// Validate the balances to ensure that the correct post-shareBinFix distribution is being applied
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance+expectedFounderReward);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance+expectedLevel1And2Reward);
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance+expectedLevel5And6Reward);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance+expectedLevel5And6Reward);
}
}
/** Test rewards for level 7 and 8 accounts */
@Test
public void testLevel7And8Rewards() throws DataException {
Common.useSettings("test-settings-v2-reward-levels.json");
try (final Repository repository = RepositoryManager.getRepository()) {
List<Integer> cumulativeBlocksByLevel = BlockChain.getInstance().getCumulativeBlocksByLevel();
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
// Alice self share online
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Bob self-share NOT online
// Chloe self share online
byte[] chloeRewardSharePrivateKey = AccountUtils.rewardShare(repository, "chloe", "chloe", 0);
PrivateKeyAccount chloeRewardShareAccount = new PrivateKeyAccount(repository, chloeRewardSharePrivateKey);
mintingAndOnlineAccounts.add(chloeRewardShareAccount);
// Dilbert self share online
byte[] dilbertRewardSharePrivateKey = AccountUtils.rewardShare(repository, "dilbert", "dilbert", 0);
PrivateKeyAccount dilbertRewardShareAccount = new PrivateKeyAccount(repository, dilbertRewardSharePrivateKey);
mintingAndOnlineAccounts.add(dilbertRewardShareAccount);
// Mint enough blocks to bump testAccount levels to 7 and 8
final int minterBlocksNeeded = cumulativeBlocksByLevel.get(8) - 20; // 20 blocks before level 8, so that the test accounts reach the correct levels
for (int bc = 0; bc < minterBlocksNeeded; ++bc)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure that the levels are as we expect
assertEquals(7, (int) Common.getTestAccount(repository, "alice").getLevel());
assertEquals(1, (int) Common.getTestAccount(repository, "bob").getLevel());
assertEquals(7, (int) Common.getTestAccount(repository, "chloe").getLevel());
assertEquals(8, (int) Common.getTestAccount(repository, "dilbert").getLevel());
// Now that everyone is at level 7 or 8 (except Bob who has only just started minting, so is at level 1), we can capture initial balances
Map<String, Map<Long, Long>> initialBalances = AccountUtils.getBalances(repository, Asset.QORT, Asset.LEGACY_QORA, Asset.QORT_FROM_QORA);
final long aliceInitialBalance = initialBalances.get("alice").get(Asset.QORT);
final long bobInitialBalance = initialBalances.get("bob").get(Asset.QORT);
final long chloeInitialBalance = initialBalances.get("chloe").get(Asset.QORT);
final long dilbertInitialBalance = initialBalances.get("dilbert").get(Asset.QORT);
// Mint a block
final long blockReward = BlockUtils.getNextBlockReward(repository);
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure we are using the correct block reward value
assertEquals(100000000L, blockReward);
/*
* Alice, Chloe, and Dilbert are 'online'.
* Chloe is level 7; Dilbert is level 8.
* One founder online (Alice, who is also level 7).
* No legacy QORA holders.
*
* Chloe and Dilbert should receive equal shares of the 20% block reward for level 7 and 8
* Alice should receive the remainder (80%)
*/
// We are after the shareBinFix feature trigger, so we expect level 7 and 8 to share the same reward (20%)
final int level7And8SharePercent = 20_00; // 20%
final long level7And8ShareAmount = (blockReward * level7And8SharePercent) / 100L / 100L;
final long expectedLevel7And8Reward = level7And8ShareAmount / 2; // The reward is split between Chloe and Dilbert
final long expectedFounderReward = blockReward - level7And8ShareAmount; // Alice should receive the remainder
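// Worked example (a sketch based on this test's config):
// level7And8ShareAmount    = 100000000 * 2000 / 100 / 100 = 20000000
// expectedLevel7And8Reward = 20000000 / 2 = 10000000 (each for Chloe and Dilbert)
// expectedFounderReward    = 100000000 - 20000000 = 80000000 (Alice)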
// Validate the balances to ensure that the correct post-shareBinFix distribution is being applied
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance+expectedFounderReward);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance); // Bob not online so his balance remains the same
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance+expectedLevel7And8Reward);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance+expectedLevel7And8Reward);
}
}
/** Test rewards for level 9 and 10 accounts */
@Test
public void testLevel9And10Rewards() throws DataException {
Common.useSettings("test-settings-v2-reward-levels.json");
try (final Repository repository = RepositoryManager.getRepository()) {
List<Integer> cumulativeBlocksByLevel = BlockChain.getInstance().getCumulativeBlocksByLevel();
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
// Alice self share online
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Bob self-share not initially online
// Chloe self share online
byte[] chloeRewardSharePrivateKey = AccountUtils.rewardShare(repository, "chloe", "chloe", 0);
PrivateKeyAccount chloeRewardShareAccount = new PrivateKeyAccount(repository, chloeRewardSharePrivateKey);
mintingAndOnlineAccounts.add(chloeRewardShareAccount);
// Dilbert self share online
byte[] dilbertRewardSharePrivateKey = AccountUtils.rewardShare(repository, "dilbert", "dilbert", 0);
PrivateKeyAccount dilbertRewardShareAccount = new PrivateKeyAccount(repository, dilbertRewardSharePrivateKey);
mintingAndOnlineAccounts.add(dilbertRewardShareAccount);
// Mint enough blocks to bump testAccount levels to 9 and 10
final int minterBlocksNeeded = cumulativeBlocksByLevel.get(10) - 20; // 20 blocks before level 10, so that the test accounts reach the correct levels
for (int bc = 0; bc < minterBlocksNeeded; ++bc)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Bob self-share now comes online
byte[] bobRewardSharePrivateKey = AccountUtils.rewardShare(repository, "bob", "bob", 0);
PrivateKeyAccount bobRewardShareAccount = new PrivateKeyAccount(repository, bobRewardSharePrivateKey);
mintingAndOnlineAccounts.add(bobRewardShareAccount);
// Ensure that the levels are as we expect
assertEquals(9, (int) Common.getTestAccount(repository, "alice").getLevel());
assertEquals(1, (int) Common.getTestAccount(repository, "bob").getLevel());
assertEquals(9, (int) Common.getTestAccount(repository, "chloe").getLevel());
assertEquals(10, (int) Common.getTestAccount(repository, "dilbert").getLevel());
// Now that everyone is at level 9 or 10 (except Bob who has only just started minting, so is at level 1), we can capture initial balances
Map<String, Map<Long, Long>> initialBalances = AccountUtils.getBalances(repository, Asset.QORT, Asset.LEGACY_QORA, Asset.QORT_FROM_QORA);
final long aliceInitialBalance = initialBalances.get("alice").get(Asset.QORT);
final long bobInitialBalance = initialBalances.get("bob").get(Asset.QORT);
final long chloeInitialBalance = initialBalances.get("chloe").get(Asset.QORT);
final long dilbertInitialBalance = initialBalances.get("dilbert").get(Asset.QORT);
// Mint a block
final long blockReward = BlockUtils.getNextBlockReward(repository);
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure we are using the correct block reward value
assertEquals(100000000L, blockReward);
/*
* Alice, Bob, Chloe, and Dilbert are 'online'.
* Bob is level 1; Chloe is level 9; Dilbert is level 10.
* One founder online (Alice, who is also level 9).
* No legacy QORA holders.
*
* Chloe and Dilbert should receive equal shares of the 25% block reward for level 9 and 10
* Bob should receive all of the level 1 and 2 reward (5%)
* Alice should receive the remainder (70%)
*/
// We are after the shareBinFix feature trigger, so we expect level 9 and 10 to share the same reward (25%)
final int level1And2SharePercent = 5_00; // 5%
final int level9And10SharePercent = 25_00; // 25%
final long level1And2ShareAmount = (blockReward * level1And2SharePercent) / 100L / 100L;
final long level9And10ShareAmount = (blockReward * level9And10SharePercent) / 100L / 100L;
final long expectedLevel1And2Reward = level1And2ShareAmount; // The reward is given entirely to Bob
final long expectedLevel9And10Reward = level9And10ShareAmount / 2; // The reward is split between Chloe and Dilbert
final long expectedFounderReward = blockReward - level1And2ShareAmount - level9And10ShareAmount; // Alice should receive the remainder
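// Worked example (a sketch based on this test's config):
// level1And2ShareAmount  = 100000000 * 500 / 100 / 100  = 5000000  (all to Bob)
// level9And10ShareAmount = 100000000 * 2500 / 100 / 100 = 25000000 (12500000 each for Chloe and Dilbert)
// expectedFounderReward  = 100000000 - 5000000 - 25000000 = 70000000 (Alice)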
// Validate the balances to ensure that the correct post-shareBinFix distribution is being applied
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance+expectedFounderReward);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance+expectedLevel1And2Reward);
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance+expectedLevel9And10Reward);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance+expectedLevel9And10Reward);
}
}
private int getFlags(Repository repository, String name) throws DataException {
TestAccount testAccount = Common.getTestAccount(repository, name);
return repository.getAccountRepository().getAccount(testAccount.getAddress()).getFlags();
}
}

View File

@@ -45,7 +45,10 @@
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"calcChainWeightTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -45,7 +45,10 @@
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"calcChainWeightTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -45,7 +45,10 @@
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"calcChainWeightTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -45,7 +45,10 @@
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"calcChainWeightTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -45,7 +45,10 @@
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"calcChainWeightTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -0,0 +1,75 @@
{
"isTestChain": true,
"blockTimestampMargin": 500,
"transactionExpiryPeriod": 86400000,
"maxBlockSize": 2097152,
"maxBytesPerUnitFee": 1024,
"unitFee": "0.1",
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
],
"ciyamAtSettings": {
"feePerStep": "0.0001",
"maxStepsPerRound": 500,
"stepsPerFunctionCall": 10,
"minutesPerBlock": 1
},
"featureTriggers": {
"messageHeight": 0,
"atHeight": 0,
"assetsTimestamp": 0,
"votingTimestamp": 0,
"arbitraryTimestamp": 0,
"powfixTimestamp": 0,
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 6,
"calcChainWeightTimestamp": 0
},
"genesisInfo": {
"version": 4,
"timestamp": 0,
"transactions": [
{ "type": "ISSUE_ASSET", "assetName": "QORT", "description": "QORT native coin", "data": "", "quantity": 0, "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "assetName": "Legacy-QORA", "description": "Representative legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "ISSUE_ASSET", "assetName": "QORT-from-QORA", "description": "QORT gained from holding legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "GENESIS", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "amount": "1000000000" },
{ "type": "GENESIS", "recipient": "QixPbJUwsaHsVEofJdozU9zgVqkK6aYhrK", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "QaUpHNhT3Ygx6avRiKobuLdusppR5biXjL", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "amount": "1000000" },
{ "type": "ACCOUNT_FLAGS", "target": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "andMask": -1, "orMask": 1, "xorMask": 0 },
{ "type": "REWARD_SHARE", "minterPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "rewardSharePublicKey": "7PpfnvLSG7y4HPh8hE7KoqAjLCkv7Ui6xw4mKAkbZtox", "sharePercent": 100 },
{ "type": "ACCOUNT_LEVEL", "target": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "level": 1 },
{ "type": "ACCOUNT_LEVEL", "target": "QixPbJUwsaHsVEofJdozU9zgVqkK6aYhrK", "level": 1 },
{ "type": "ACCOUNT_LEVEL", "target": "QaUpHNhT3Ygx6avRiKobuLdusppR5biXjL", "level": 1 },
{ "type": "ACCOUNT_LEVEL", "target": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "level": 2 }
]
}
}

View File

@@ -45,7 +45,10 @@
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"calcChainWeightTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -45,7 +45,10 @@
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"calcChainWeightTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -0,0 +1,7 @@
{
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-reward-levels.json",
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0
}

148
tools/block-timings.sh Executable file
View File

@@ -0,0 +1,148 @@
#!/usr/bin/env bash
start_height=$1
count=$2
target=$3
deviation=$4
power=$5
if [ -z "${start_height}" ]; then
echo
echo "Error: missing start height."
echo
echo "Usage:"
echo "block-timings.sh <startheight> [count] [target] [deviation] [power]"
echo
echo "startheight: a block height, preferably within the untrimmed range, to avoid data gaps"
echo "count: the number of blocks to request and analyse after the start height. Default: 100"
echo "target: the target block time in milliseconds. Originates from blockchain.json. Default: 60000"
echo "deviation: the allowed block time deviation in milliseconds. Originates from blockchain.json. Default: 30000"
echo "power: used when transforming key distance to a time offset. Originates from blockchain.json. Default: 0.2"
echo
exit
fi
count=${count:=100}
target=${target:=60000}
deviation=${deviation:=30000}
power=${power:=0.2}
finish_height=$((start_height + count - 1))
height=$start_height
echo "Settings:"
echo "Target time offset: ${target}"
echo "Deviation: ${deviation}"
echo "Power transform: ${power}"
echo
function calculate_time_offset {
local key_distance_ratio=$1
local transformed=$( echo "" | awk "END {print ${key_distance_ratio} ^ ${power}}")
local time_offset=$(echo "${deviation}*2*${transformed}" | bc)
time_offset=${time_offset%.*}
echo $time_offset
}
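# Worked example (a sketch using the default settings above):
# key_distance_ratio = 0.5, power = 0.2  ->  0.5 ^ 0.2 ~= 0.87055
# time_offset = 30000 * 2 * 0.87055 ~= 52233
# block time  = target - deviation + time_offset = 60000 - 30000 + 52233 ~= 82233 ms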
function fetch_and_process_blocks {
echo "Fetching blocks from height ${start_height} to ${finish_height}..."
echo
total_time_offset=0
errors=0
while [ "${height}" -le "${finish_height}" ]; do
block_minting_info=$(curl -s "http://localhost:12391/blocks/byheight/${height}/mintinginfo")
error=$(echo "${block_minting_info}" | jq -r .error)
if [ "${error}" != "null" ]; then
echo "Error fetching minting info for block ${height}"
echo
errors=$((errors+1))
height=$((height+1))
continue;
fi
# Parse minting info
minter_level=$(echo "${block_minting_info}" | jq -r .minterLevel)
online_accounts_count=$(echo "${block_minting_info}" | jq -r .onlineAccountsCount)
key_distance_ratio=$(echo "${block_minting_info}" | jq -r .keyDistanceRatio)
time_delta=$(echo "${block_minting_info}" | jq -r .timeDelta)
time_offset=$(calculate_time_offset "${key_distance_ratio}")
block_time=$((target-deviation+time_offset))
echo "=== BLOCK ${height} ==="
echo "Minter level: ${minter_level}"
echo "Online accounts: ${online_accounts_count}"
echo "Key distance ratio: ${key_distance_ratio}"
echo "Time offset: ${time_offset}"
echo "Block time (real): ${time_delta}"
echo "Block time (calculated): ${block_time}"
if [ "${time_delta}" -ne "${block_time}" ]; then
echo "WARNING: Block time mismatch. This is to be expected when using custom settings."
fi
echo
total_time_offset=$((total_time_offset+block_time))
height=$((height+1))
done
adjusted_count=$((count-errors))
if [ "${adjusted_count}" -eq 0 ]; then
echo "No blocks were retrieved."
echo
exit;
fi
mean_time_offset=$((total_time_offset/adjusted_count))
time_offset_diff=$((mean_time_offset-target))
echo "==================="
echo "===== SUMMARY ====="
echo "==================="
echo "Total blocks retrieved: ${adjusted_count}"
echo "Total blocks failed: ${errors}"
echo "Mean time offset: ${mean_time_offset}ms"
echo "Target time offset: ${target}ms"
echo "Difference from target: ${time_offset_diff}ms"
echo
}
function estimate_key_distance_ratio_for_level {
local level=$1
local example_key_distance="0.5"
echo "(${example_key_distance}/${level})"
}
function estimate_block_timestamps {
min_block_time=9999999
max_block_time=0
echo "===== BLOCK TIME ESTIMATES ====="
for level in {1..10}; do
example_key_distance_ratio=$(estimate_key_distance_ratio_for_level "${level}")
time_offset=$(calculate_time_offset "${example_key_distance_ratio}")
block_time=$((target-deviation+time_offset))
if [ "${block_time}" -gt "${max_block_time}" ]; then
max_block_time=${block_time}
fi
if [ "${block_time}" -lt "${min_block_time}" ]; then
min_block_time=${block_time}
fi
echo "Level: ${level}, time offset: ${time_offset}, block time: ${block_time}"
done
block_time_range=$((max_block_time-min_block_time))
echo "Range: ${block_time_range}"
echo
}
fetch_and_process_blocks
estimate_block_timestamps
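# Example invocation (assumes a local node with its API on the default port 12391, as used above;
# the start height is illustrative only):
#   ./block-timings.sh 50000 100
# analyses 100 blocks starting at height 50000 using the default target/deviation/power values.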

View File

@@ -57,9 +57,11 @@ $timestamp *= 1000; # Convert to milliseconds
# locate sha256 utility
my $SHA256 = `which sha256sum || which sha256`;
chomp $SHA256;
die("Can't find sha256sum or sha256\n") unless length($SHA256) > 0;
# SHA256 of actual update file
my $sha256 = `git show auto-update-${commit_hash}:${project}.update | ${SHA256}`;
my $sha256 = `git show auto-update-${commit_hash}:${project}.update | ${SHA256} | head -c 64`;
die("Can't calculate SHA256 of ${project}.update\n") unless $sha256 =~ m/(\S{64})/;
chomp $sha256;
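# Note: sha256sum typically prints "<digest>  <input-name>", so "head -c 64" keeps only the
# 64 hex digest characters expected by the m/(\S{64})/ check above.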