Compare commits


215 Commits

Author SHA1 Message Date
CalDescent
2479f2d65d Moved trimming and pruning classes into a single package (org.qortal.controller.repository) 2021-08-27 09:45:56 +01:00
CalDescent
9056cb7026 Increased atStatesPruneBatchSize from 10 to 25. 2021-08-27 09:45:56 +01:00
CalDescent
cd9d9b31ef Prune ATStatesData as well as the ATStates when switching to pruning mode. 2021-08-27 09:45:56 +01:00
CalDescent
ff841c28e3 Updated tests to use the renamed method. 2021-08-27 09:45:56 +01:00
CalDescent
ca1379d9f8 Unified the code to build the LatestATStates table, as it's now used by more than one class.
Note - the rebuildLatestAtStates() must never be used by two different classes at the same time, or AT states could be incorrectly deleted. It is okay at the moment as we don't run the AT states trimmer and pruner in the same app session. However we should probably synchronize this method so that we don't accidentally call it from two places in the future.
2021-08-27 09:45:56 +01:00
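A self-contained sketch (not the actual Qortal classes) of the guard suggested in the commit above: the trimmer and the pruner both funnel through a single synchronized method, so the LatestATStates rebuild can never run twice at once.

```java
public class LatestAtStatesRebuildSketch {

    // Single lock shared by every caller (trimmer, pruner, etc.)
    private static final Object REBUILD_LOCK = new Object();

    public static void rebuildLatestAtStates() {
        synchronized (REBUILD_LOCK) {
            // ...rebuild the LatestATStates table here...
            System.out.println("Rebuilding LatestATStates on " + Thread.currentThread().getName());
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread trimmer = new Thread(LatestAtStatesRebuildSketch::rebuildLatestAtStates, "trimmer");
        Thread pruner = new Thread(LatestAtStatesRebuildSketch::rebuildLatestAtStates, "pruner");
        trimmer.start();
        pruner.start();
        trimmer.join();
        pruner.join();
    }
}
```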
CalDescent
5127f94423 Added bulk pruning phase on node startup the first time that pruning mode is enabled.
When switching from a full node to a pruning node, we need to delete most of the database contents. If we do this entirely as a background process, it is very slow and can interfere with syncing. However, if we take the approach of transferring only the necessary rows to a new table and then deleting the original table, this makes the process much faster. It was taking several days to delete the AT states in the background, but only a couple of minutes to copy them to a new table.

The trade off is that we have to go through a form of "reshape" when starting the app for the first time after enabling pruning mode. But given that this is an opt-in mode, I don't think it will be a problem.

Once the pruning is complete, it automatically performs a CHECKPOINT DEFRAG in order to shrink the database file size down to a fraction of what it was before.

From this point, the original background process will run, but can be dialled right down so as not to interfere with syncing.
2021-08-27 09:45:56 +01:00
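A hedged sketch of the copy-then-drop approach described above, via plain JDBC against HSQLDB. The table and column names (ATStates, height), the JDBC URL and the cut-off height are illustrative assumptions, not the exact Qortal schema or settings; CHECKPOINT DEFRAG is the HSQLDB statement that shrinks the database file afterwards.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class BulkPruneSketch {
    public static void main(String[] args) throws Exception {
        try (Connection connection = DriverManager.getConnection("jdbc:hsqldb:file:db/blockchain", "SA", "");
             Statement stmt = connection.createStatement()) {

            int pruneHeight = 1_000_000; // example cut-off; the real value would come from settings

            // 1. Copy only the rows worth keeping into a fresh table - far faster
            //    than deleting millions of rows from the original in place.
            stmt.execute("CREATE TABLE ATStatesNew AS (SELECT * FROM ATStates WHERE height >= " + pruneHeight + ") WITH DATA");

            // 2. Swap the new table in for the old one.
            stmt.execute("DROP TABLE ATStates");
            stmt.execute("ALTER TABLE ATStatesNew RENAME TO ATStates");

            // 3. Shrink the database file back down.
            stmt.execute("CHECKPOINT DEFRAG");
        }
    }
}
```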
CalDescent
f5910ab950 Break out of the AT pruning inner loops if we're stopping the app. 2021-08-27 09:45:56 +01:00
CalDescent
22efaccd4a Fixed NPE introduced in earlier commit. 2021-08-27 09:45:56 +01:00
CalDescent
c8466a2e7a Updated AT states pruner as it previously relied on blocks being present in the db to make decisions. As a side effect, this now prunes ATs up to the pruneBlockLimit too, rather than keeping the last 35 days or so. Will review this later but I don't think we will need the missing ones. 2021-08-27 09:45:56 +01:00
CalDescent
209a9fa8c3 Rework of Blockchain.validate() to account for pruning mode. 2021-08-27 09:45:56 +01:00
CalDescent
bc1af12655 Prune all blocks up until the blockPruneLimit
By default, this leaves only the last 1450 blocks in the database. Only applies when pruning mode is enabled.
2021-08-27 09:45:55 +01:00
CalDescent
e7e4cb7579 Started work on pruning mode (top-only-sync)
Initially just deleting old and unused AT states, to get this table under control. I have had to delete them individually as the table can't handle complex queries due to its size.

Nodes in pruning mode will be unable to serve older blocks to peers.
2021-08-27 09:45:55 +01:00
CalDescent
1b39db664c Added missing ATStatesHeightIndex to the reshape code.
This was accidentally missed out of the original code. Some nodes on the network that have already updated will be missing this index, but we can use the upcoming "auto-bootstrap" feature to get those back.
2021-08-27 08:54:46 +01:00
CalDescent
b4f980b349 Restrict lists API endpoints to local/apiKey requests only. 2021-08-12 19:52:49 +01:00
CalDescent
673f23b6a0 Improvement to commit f71516f
Now only skipping the HTLC redemption if the AT is finished and the balance has been redeemed by the buyer. This allows HTLCs to be refunded for ATs that have been refunded or cancelled.
2021-08-12 08:29:52 +01:00
QuickMythril
8c325f3a8a original design 2021-08-11 18:18:10 -04:00
CalDescent
f71516f36f Skip finished ATs in the refund API endpoints. 2021-08-11 21:26:29 +01:00
CalDescent
1752386a6c Fixed logging errors in previous commit. 2021-08-11 20:33:54 +01:00
CalDescent
112675c782 Better handling of RPC errors.
Previously, if an error was returned from an Electrum server (such as "server busy") it would throw a NetworkException that would be caught outside of the server loop and cause the entire request to fail.

Instead of throwing an exception, I am now logging the error and returning null, in the same way we do for IOException and NoSuchElementException further up in the same method.

This allows the caller - most likely connectedRpc() - to move on to the next server in the list and try again.

This should fix an issue seen where a "server busy" response from a single server was essentially breaking our implementation, as we would give up altogether instead of trying another server.
2021-08-11 19:22:53 +01:00
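A simplified, hypothetical illustration of the change described above; class and method names are stand-ins for the actual ElectrumX code. An RPC-level error such as "server busy" is logged and null is returned, so the caller (e.g. connectedRpc()) can move on to the next server instead of aborting the whole request.

```java
import java.io.IOException;
import java.util.NoSuchElementException;
import java.util.logging.Logger;

public class RpcCallSketch {
    private static final Logger LOGGER = Logger.getLogger(RpcCallSketch.class.getName());

    /** Returns the RPC result, or null if this particular server couldn't help. */
    public static Object rpc(String server, String method) {
        try {
            Object response = sendRequest(server, method); // stand-in for the real transport call

            if (isErrorResponse(response)) {
                // Previously: an exception was thrown here, and the entire request failed.
                LOGGER.warning(String.format("Error from %s for %s: %s", server, method, response));
                return null;
            }
            return response;
        } catch (IOException | NoSuchElementException e) {
            return null; // same treatment as the existing failure cases further up the method
        }
    }

    private static Object sendRequest(String server, String method) throws IOException {
        return null; // placeholder
    }

    private static boolean isErrorResponse(Object response) {
        return false; // placeholder
    }
}
```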
CalDescent
3b6ba7641d Updated icon for Qortal.exe 2021-08-10 19:35:03 +01:00
CalDescent
477a35a685 Fixed response schema for GET /lists/blacklist/addresses endpoint 2021-08-10 08:43:47 +01:00
CalDescent
2a0a39a95a Avoid creation of lists directory until the first item is added to a list. 2021-08-09 23:35:32 +01:00
CalDescent
dfc77db51d Merge pull request #57 from ScythianQortal/translations
Added Hungarian translations and reorganised existing translations
2021-08-09 10:05:13 +01:00
CalDescent
c9596fd8c4 Catch exceptions thrown during GUI initialization.
This is a workaround for an UnsupportedOperationException thrown when using X2Go, due to PERPIXEL_TRANSLUCENT translucency being unsupported in splashDialog.setBackground(). We could choose to use a different version of the splash screen with an opaque background in these cases, but it is low priority.
2021-08-09 10:02:24 +01:00
CalDescent
78373f3746 HTLC redeem/refund APIs switched from GET to POST. 2021-08-08 10:29:15 +01:00
CalDescent
ebc3db8aed Default file path for repository data imports set to "qortal-backup/TradeBotStates.json". This allows the trade bot backup to be imported in a single click, and can now be potentially added as a button in the UI. 2021-08-08 10:20:44 +01:00
CalDescent
756601c1ce Initialize to an empty list.
This fixes various bugs caused by the list being null when no blacklist JSON file was available.
2021-08-08 08:41:13 +01:00
CalDescent
8bb5077e76 Catch occasional NPE when setting tray icon. 2021-08-08 08:29:29 +01:00
CalDescent
5b85f01427 Added defensiveness to list management methods in ResourceList.java 2021-08-08 08:28:34 +01:00
CalDescent
a7d594e566 Log the AT states reshape progress, as it seems to be taking a very long time. 2021-08-07 19:18:20 +01:00
CalDescent
481e6671c2 Added GET /lists/blacklist/addresses API endpoint
This returns a JSON array containing the blacklisted addresses.
2021-08-07 16:27:16 +01:00
Scythian
b890e02a6a Added new TransactionValidity keys
Added ADDRESS_ABOVE_RATE_LIMIT and DUPLICATE_MESSAGE ValidationResults to localeLang translation keys
2021-08-07 15:09:48 +01:00
Scythian
4772840b4c Reorganised translations
Updated the "localeLang" files with new keys and removed old unused keys for English, German, Dutch, Italian, Finnish, Hungarian, Russian and Chinese translations
2021-08-07 14:05:10 +01:00
CalDescent
cd7adc997b Prevent duplicate entries in a list. 2021-08-07 11:32:49 +01:00
CalDescent
9fdc901b7a Added POST /lists/blacklist/addresses and DELETE /lists/blacklist/addresses API endpoints.
These are the same as the /lists/blacklist/address/{address} endpoints but allow a JSON array of addresses to be specified in the request body. They currently return true if
2021-08-07 11:31:45 +01:00
Scythian
76ec3473d6 Updated TransactionValidity keys
Added ADDRESS_IN_BLACKLIST ValidationResult to TransactionValidity translation keys
2021-08-07 10:47:48 +01:00
CalDescent
b29ae67501 Apply the address blacklist to chat transactions.
This is based on code originally written by @DrewMPeacock
2021-08-07 10:31:56 +01:00
CalDescent
24f1fb566d Initial implementation of resource lists
The ResourceList class creates or updates a list for the purpose of tracking resources on the Qortal network. This can be used for local blocking, or even for curating and sharing content lists. Lists are backed by JSON files (in the lists folder) to ease sharing between nodes and users.

This first implementation allows access to an address blacklist only, but has been written in such a way that other lists can be easily added. This might be needed in the future, e.g. to blacklist a group, a poll, or some hosted data. It could also be used by community members to curate lists of favourite or problematic content, which could then be shared or even subscribed to on the chain by other users.
2021-08-07 10:20:14 +01:00
CalDescent
a253294890 Ensure frozen ATs are still executed every block.
We currently want to execute frozen ATs, to maintain backwards support. We could optionally choose to stop executing them later, via a hard fork.
2021-08-06 20:01:59 +01:00
Scythian
0b53de1bb6 Added Hungarian translations 2021-08-06 19:28:56 +01:00
Scythian
746c68c9f6 Reorganised translations
Added new keys and removed old unused keys
Localised the "Build version" string in the SysTray
2021-08-06 19:27:28 +01:00
CalDescent
ec008b4a16 Merge branch 'AT-sleep-until-message' 2021-08-04 19:00:24 +01:00
CalDescent
1d65e34fe5 Revert "Added DogecoinACCTv2 and DogecoinACCTv2TradeBot"
This reverts commit 797dff4752.
2021-08-04 18:59:36 +01:00
CalDescent
8ae78703ca Revert "Initial attempt at adding "sleep until message" functionality to DOGE ACCTv2."
This reverts commit a1c61a1146.
2021-08-04 18:59:30 +01:00
CalDescent
bd4b9a9fd3 Modified .gitignore to allow multiple testnets to exist by adding a number or other suffix. 2021-08-04 18:56:16 +01:00
CalDescent
f09677d376 Added inputs, outputs and feeAmount to /crosschain//walletbalance endpoints
The inputs and outputs contain a simpler version than the ones in the raw transaction, consisting of `address`, `amount`, and `addressInWallet`. The latter of the three indicates whether the address is derived from the supplied xpub master public key.
2021-08-04 18:54:36 +01:00
CalDescent
f669e3f6c4 Fixed Dogecoin tests. 2021-08-04 18:48:59 +01:00
CalDescent
961c5ea962 Treat zero as null in sleepUntilHeight AT data. This is needed because we are unable to call setSleepUntilHeight() with a null value due to the datatype used in the CIYAM AT library. An alternate option would be to fork the AT library and use an Integer or Long rather than an int, but since we don't have a block zero, this is still a valid thing to check even when using that approach. 2021-08-04 09:22:17 +01:00
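A tiny sketch (hypothetical helper, not the actual Qortal code) of the zero-as-null convention described above: the CIYAM AT library exposes sleep-until-height as a primitive int, so zero is interpreted as "no value".

```java
public class SleepUntilHeightSketch {
    /** Convert the library's primitive int (0 = unset) into a nullable Integer. */
    public static Integer fromRaw(int rawSleepUntilHeight) {
        return rawSleepUntilHeight == 0 ? null : rawSleepUntilHeight;
    }
}
```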
CalDescent
a1c61a1146 Initial attempt at adding "sleep until message" functionality to DOGE ACCTv2. 2021-08-02 20:08:53 +01:00
CalDescent
797dff4752 Added DogecoinACCTv2 and DogecoinACCTv2TradeBot 2021-08-02 20:07:34 +01:00
CalDescent
711ad638b8 Renamed Chinese translation files.
zh_SC renamed to zh_CN, and zh_TC renamed to zh_TW. This is necessary for the localization library to locate the files correctly.
2021-08-02 09:24:38 +01:00
CalDescent
4956c3328c Updated AdvancedInstaller project for v1.6.0 2021-08-01 19:14:55 +01:00
CalDescent
96a82381d1 Bump version to 1.6.0 2021-08-01 18:19:59 +01:00
CalDescent
68190c8c76 Fixed a build error relating to using an int rather than Integer in the CIYAM AT library. Solved for now by using 0 instead of null, but will review this again before release. 2021-08-01 18:07:19 +01:00
CalDescent
dde47bc1fc Fixed build errors by adding sleepUntilMessageTimestamp to recent method additions. 2021-08-01 18:05:58 +01:00
CalDescent
744deaed8d Fixed merge issue due to differing db schemas. 2021-08-01 10:39:34 +01:00
CalDescent
a62910c8b6 Merge branch 'master' into AT-sleep-until-message 2021-08-01 10:30:52 +01:00
CalDescent
3c6d9a4b8e Merge branch 'new-coins' 2021-07-31 21:27:40 +01:00
CalDescent
3073388403 Fix for missing transactions in getWalletTransactions() response
The previous criterion was to stop searching for more leaf keys as soon as we found a batch of keys with no transactions, but it seems that there are occasions when subsequent batches do actually contain transactions. The solution/workaround is to require 5 consecutive empty batches before giving up. There may be ways to improve this further by copying approaches from other BIP32 implementations, but this is a quick fix that should solve the problem for now.
2021-07-31 18:10:24 +01:00
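A simplified sketch of the gap-limit change described above; fetchTransactionsForBatch() is a stand-in for the real BIP32 key derivation and Electrum lookups.

```java
import java.util.ArrayList;
import java.util.List;

public class WalletScanSketch {
    private static final int REQUIRED_EMPTY_BATCHES = 5;

    public static List<String> getWalletTransactions() {
        List<String> transactions = new ArrayList<>();
        int consecutiveEmptyBatches = 0;
        int batchIndex = 0;

        // Previously we stopped at the first empty batch; now we require
        // 5 consecutive empty batches before giving up.
        while (consecutiveEmptyBatches < REQUIRED_EMPTY_BATCHES) {
            List<String> batch = fetchTransactionsForBatch(batchIndex++);
            if (batch.isEmpty()) {
                consecutiveEmptyBatches++;
            } else {
                consecutiveEmptyBatches = 0; // reset on any activity
                transactions.addAll(batch);
            }
        }
        return transactions;
    }

    private static List<String> fetchTransactionsForBatch(int batchIndex) {
        return new ArrayList<>(); // placeholder for deriving keys and querying the server
    }
}
```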
CalDescent
67f856c997 Increased minimum order size to 3 DOGE, as some transactions for 2.1 DOGE were being rejected due to the "dust" error. 2021-07-31 12:20:08 +01:00
CalDescent
742fd0b444 Added /crosschain/htlc/refundAll API endpoint
This is the equivalent of the redeemAll API but can be used from the buyer's side to get their LTC or DOGE back in the event of a problem.
2021-07-31 12:19:12 +01:00
CalDescent
e1d69c0eae Removed /refund/{ataddress}/{receivingAddress} API
This was causing confusion and isn't generally needed.
2021-07-31 11:55:48 +01:00
CalDescent
49d4190615 Support multiple blockchains in the /crosschain/htlc APIs.
This involved a small refactor of the ACCT code to expose findSecretA() in a more generic way. Bitcoin is disabled for refunding and redeeming as it uses a legacy approach that we no longer support. The {blockchain} URL parameter has also been removed from the redeem and refund APIs, because it can be obtained from the ACCT via the code hash in the AT.
2021-07-31 11:11:30 +01:00
CalDescent
64d39765ca Merge pull request #56 from DrewMPeacock/update-splash
Update task tray icon files. Now with transparency™
2021-07-30 10:51:44 +01:00
Sir.Galahad
aca8f64415 Update task tray icon files. Now with transparency™ 2021-07-30 03:43:31 -06:00
CalDescent
855b600268 testBlockHeightSpeed() reduced from 30k to 10k blocks.
This was intermittently failing on github and I suspect it may have been hitting memory limits.
2021-07-30 09:02:15 +01:00
CalDescent
476d613e20 Merge branch 'master' of github.com:Qortal/qortal 2021-07-30 09:00:54 +01:00
CalDescent
fb8a4d0a41 Merge pull request #55 from DrewMPeacock/update-splash
Update Qortal node graphics.
2021-07-30 08:30:15 +01:00
CalDescent
130f3f6d41 Added code to parse variable length block headers, necessary for DOGE. 2021-07-29 09:00:55 +01:00
CalDescent
ed997af043 Limit order size to a minimum of 2 DOGE.
The "dust" threshold is around 1 DOGE - meaning orders below this size cannot be refunded or redeemed. The simplest solution is to prevent orders of this size being placed to begin with.
2021-07-29 08:47:45 +01:00
Sir.Galahad
3c47f6917a Merge remote-tracking branch 'qm/status-on-icon' into update-splash 2021-07-27 19:46:34 -06:00
Sir.Galahad
e32a486493 Update splash screen image. 2021-07-27 19:42:00 -06:00
QuickMythril
208da935a1 updating icons
design credit: Haoshiro
2021-07-27 07:20:15 -04:00
QuickMythril
1dda9a875e updating icons
design credit: Haoshiro
2021-07-27 07:19:58 -04:00
CalDescent
b26175b7c6 Set DOGE fees to sensible values for mainnet. These can probably be optimized. 2021-07-26 19:03:33 +01:00
CalDescent
ffc6befb38 Fixed some missing DOGECOIN references 2021-07-26 09:21:56 +01:00
CalDescent
9df7c96d08 Added Dogecoin TradeBot and ACCT 2021-07-25 18:45:10 +01:00
CalDescent
32fa66f0a2 Merge pull request #54 from DrewMPeacock/master
Include AT address in /trades API call.
2021-07-23 08:50:27 +01:00
Sir.Galahad
7153ed022c Include AT address in /trades API call.
Include Seller Qortal address in /trades API call.
Include Buyer Qortal Receiving address in /trades API call.
2021-07-22 22:08:13 -06:00
CalDescent
50e4e71abb Added DOGE wallet. 2021-07-18 16:55:39 +01:00
CalDescent
d6e65a3d63 Removed version number from qortal.exe and qortal.zip
Recently we have stopped including the version number in the zip and exe files uploaded to github, as this allows us to use the "https://github.com/Qortal/*/releases/latest/download/*" redirect for all 3 files when linking from the qortal.org website. Previously, it could only be done for the JAR since this was the only file that didn't contain a version number. This avoids having to update the website every time we distribute a new release.

Note that this currently requires that the Qortal-x.y.z.exe file created by AdvancedInstaller is renamed to qortal.exe before running ./build-release.sh. If you forget to rename, the script will exit with a warning that the file couldn't be found.
2021-07-18 10:36:04 +01:00
CalDescent
79691541ae Merge pull request #51 from JaymenChou/patch-2
Update and rename SysTray_zh.properties to SysTray_zh_SC.properties
2021-07-08 23:06:19 +01:00
CalDescent
05d0542875 Merge pull request #50 from JaymenChou/patch-1
Create SysTray_zh_TC.properties
2021-07-08 23:06:03 +01:00
CalDescent
1d22b39a1d Merge pull request #52 from marcomoesman/master
Create Dutch (nl_NL) translations
2021-07-08 23:05:40 +01:00
CalDescent
549b68cf71 Merge pull request #53 from DrewMPeacock/master
Removes the flawed BIP-39 implementation in the core.
2021-07-08 23:05:15 +01:00
CalDescent
55f87de2e0 Updated AdvancedInstaller project for v1.5.6 2021-07-08 19:19:35 +01:00
CalDescent
b8424e20aa Bump version to 1.5.6 2021-07-08 18:24:11 +01:00
Sir.Galahad
bbe3a30e77 Removes the flawed BIP-39 implementation in the core.
Removes API calls for /mnemonic.

Readout for VanityGen.java now excludes a BIP-39 seed-phrase and only gives a raw private key.
2021-07-08 02:24:31 -06:00
Marco Moesman
39d8750ef9 Merge branch 'Qortal:master' into master 2021-07-01 10:24:03 +02:00
CalDescent
52b0c244a8 Extend CHECKPOINT_LOCK to HSQLDBSaver.execute()
This is used when saving new data to the db, so also needs to be blocked if we are checkpointing or deciding whether to checkpoint.
2021-06-28 19:24:53 +01:00
CalDescent
ee95a00ce2 Hopeful fix for "Performing repository CHECKPOINT..." deadlock.
This is probably our number one reliability issue at the moment, and has been a problem for a very long time.

The existing CHECKPOINT_LOCK would prevent new connections being created when we are checkpointing or about to checkpoint. However, in many cases we obtain the db connection early on and then don't perform any queries until later. An example would be in synchronization, where the connection is obtained at the start of the process and then retained throughout the sync round. My suspicion is that we were encountering this series of events:
1. Open connection to database
2. Call maybeCheckpoint() and confirm there are no active transactions
3. An existing connection starts a new transaction
4. Checkpointing is performed, but deadlocks due to the in-progress transaction

This potential fix includes preparedStatement.execute() in the CHECKPOINT_LOCK, to block any new transactions being started when we are locked for checkpointing. It is fairly high risk so we need to build some confidence in this before releasing it.
2021-06-28 09:13:36 +01:00
QuickMythril
11566ec923 set icon on status change 2021-06-27 03:45:15 -04:00
QuickMythril
a78ff08202 add setTrayIcon function 2021-06-27 03:44:29 -04:00
QuickMythril
ceb3969c8b load icons into gui 2021-06-27 03:44:25 -04:00
QuickMythril
6f048ef40e add status icons 2021-06-27 03:41:49 -04:00
Marco Moesman
aff4f6c859 Create TransactionValidity_nl.properties 2021-06-24 19:13:55 +02:00
Marco Moesman
1f8f73fa30 Create ApiError_nl.properties 2021-06-24 18:44:53 +02:00
Marco Moesman
620d6624a9 Create SysTray_nl.properties 2021-06-24 18:30:50 +02:00
JaymenChou
287f42ae64 Update and rename SysTray_zh.properties to SysTray_zh_SC.properties
Renamed to zh_SC to better distinguish between zh_SC (Simplified Chinese) and zh_TC (Traditional Chinese).
Rephrased some of the wording for better understanding.
2021-06-24 11:42:48 +08:00
JaymenChou
d976c97d13 Create SysTray_zh_TC.properties 2021-06-24 11:31:46 +08:00
CalDescent
6d549b0754 Updated AdvancedInstaller project for v1.5.5 2021-06-23 19:57:03 +01:00
CalDescent
02dd64558f Bump version to 1.5.5 2021-06-23 18:21:55 +01:00
CalDescent
d25e98d9c4 Include peer connection ID in recently created log message. 2021-06-20 10:26:14 +01:00
CalDescent
227cdc1ec8 Log each sync attempt when our blockchain isn't up to date
Without this it can appear as though nothing is happening for a while after the app launches.
2021-06-20 07:25:25 +01:00
CalDescent
2c585a9328 Upper connection time limit reduced from 60 mins to 20 mins.
Now that we aren't disconnecting mid sync, we can get away with more frequent disconnections. This brings the average connection length to around 9 mins.
2021-06-19 17:25:50 +01:00
CalDescent
45b0d9e19b Added more initial peers, submitted by CWDSYSTEMS 2021-06-19 16:19:53 +01:00
CalDescent
026a4b896c Added BTC and LTC electrum nodes submitted by CWDSYSTEMS 2021-06-19 14:50:18 +01:00
CalDescent
78237fcd11 Don't intentionally disconnect peers if we are currently syncing with them. 2021-06-19 13:12:16 +01:00
CalDescent
73cc3dcb92 Force a disconnect of each peer when the connection age reaches the maximum allowed time.
Connection limits are defined in settings (denominated in seconds):
"minPeerConnectionTime": 120,
"maxPeerConnectionTime": 3600

Peers will disconnect after a randomly chosen amount of time between the min and the max. The default range is 2 minutes to 1 hour, as above.

This encourages nodes to connect to a wider range of peers across the course of each day, rather than staying connected to an "island" of peers for an extended period of time. Hopefully this will reduce the number of parallel chains that can form due to permanently connected clusters of peers.

We may find that we need to reduce the defaults to get optimal results, however it is best to do this incrementally, with the option for reducing further via each node's settings. Being too aggressive here could cause some of the earlier problems (e.g. 20% missing blocks minted) to reappear. We can re-evaluate this in the next version. Note that if defaults are reduced significantly, we may need to add code to prevent this from happening mid-sync. With higher defaults, this is less of an issue.

Thanks to @szisti for supplying some base code for this commit, and also to @CWDSYSTEMS for diagnosing the original problem.
2021-06-19 13:03:46 +01:00
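An illustrative sketch of the randomised connection lifetime described above. The two settings names match the commit message; the class itself is hypothetical.

```java
import java.util.concurrent.ThreadLocalRandom;

public class PeerConnectionLifetimeSketch {
    // Denominated in seconds, as in settings.json
    private static final int MIN_PEER_CONNECTION_TIME = 120;   // "minPeerConnectionTime"
    private static final int MAX_PEER_CONNECTION_TIME = 3600;  // "maxPeerConnectionTime"

    /** Pick a per-peer maximum connection age somewhere between the min and the max. */
    public static long chooseMaxConnectionAgeMillis() {
        int seconds = ThreadLocalRandom.current()
                .nextInt(MIN_PEER_CONNECTION_TIME, MAX_PEER_CONNECTION_TIME + 1);
        return seconds * 1000L;
    }

    /** True once this connection has outlived the age chosen for it. */
    public static boolean shouldDisconnect(long connectionEstablishedMillis, long maxAgeMillis) {
        return System.currentTimeMillis() - connectionEstablishedMillis > maxAgeMillis;
    }
}
```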
CalDescent
4cff03e7fe Include "size" value in the "Synchronized with peer" logs.
This indicates the size of the re-org/rollback that was required in order to perform this sync operation. It is only included if it's greater than 0 blocks.
2021-06-19 09:04:14 +01:00
CalDescent
280f7814aa Merge branch 'master' of github.com:Qortal/qortal 2021-06-14 18:49:55 +01:00
CalDescent
3174681bd8 Removed /src/main/resources/log*.properties from .gitignore
We must be careful not to add files to the resources folder accidentally, given that a bundled log4j2.properties file is used in preference to the user's copy. By keeping this out of gitignore, it becomes more obvious if a file is added, and it can then be caught and removed before a release.
2021-06-14 18:49:24 +01:00
CalDescent
853f80b928 Updates to build-zip.sh and build-release.sh
These were necessary for these scripts to function in my build environment (Mac OS X). This may give errors when running in other environments, but we can deal with that in future, when others need to use these scripts.
2021-06-14 18:46:21 +01:00
CalDescent
8bdad377d7 Removed outdated qortal.jar from "WindowsInstaller/Install Files" directory.
Including an older JAR in the source code only leads to confusion, because a zip of the source code is automatically included with each github release. From what I can see, there is no need for it to be here. Added to .gitignore so we have the option of keeping a local copy.
2021-06-14 18:41:27 +01:00
CalDescent
9e1c2a5bd1 Updated AdvancedInstaller project for v1.5.4 2021-06-14 18:39:46 +01:00
CalDescent
b1777b6011 Bump version to 1.5.4 2021-06-13 19:22:16 +01:00
CalDescent
e3923b7b22 Fixed issue causing frequent disconnects (found by szisti)
When sending or requesting more than 1000 online accounts, peers would be disconnected with an EOF or connection reset error due to an intentional null response. This response has been removed and it will instead now only send the first 1000 accounts, which prevents the disconnections from occurring.

In theory, these accounts should be in a different order on each node, so the 1000 limit should still result in a fairly even propagation of accounts. However, we may want to consider increasing this limit, to maximise the propagation speed.

Thanks to szisti for tracking this one down.
2021-06-11 19:10:04 +01:00
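A minimal sketch of the fix described above: rather than replying with an intentional null (which the peer saw as an EOF or connection reset), the response is capped at the first 1000 online accounts. Types and names are simplified stand-ins for the real message payload.

```java
import java.util.List;

public class OnlineAccountsReplySketch {
    private static final int MAX_ONLINE_ACCOUNTS_PER_MESSAGE = 1000;

    public static <T> List<T> buildReply(List<T> onlineAccounts) {
        // Previously: more than 1000 accounts produced an intentional null response,
        // which caused the peer to be disconnected.
        if (onlineAccounts.size() > MAX_ONLINE_ACCOUNTS_PER_MESSAGE)
            return onlineAccounts.subList(0, MAX_ONLINE_ACCOUNTS_PER_MESSAGE);
        return onlineAccounts;
    }
}
```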
CalDescent
a43993e3ec Use placeholder build timestamp and build version when building without mvn package (e.g. from within an IDE)
This fixes an exception thrown when running directly in IntelliJ, as previously we relied on mvn package to parse the commit hash and timestamp.
2021-06-11 19:01:35 +01:00
CalDescent
319e64bacc Defend against an edge case NPE in the chat messages websocket. 2021-06-06 10:34:20 +01:00
CalDescent
ecf044bed1 Removed qortal-backup folder from git 2021-06-06 10:29:58 +01:00
CalDescent
76e1de38e8 Workaround for issue where sometimes an AT stays in "TRADING" mode even after it is marked as finished. This caused Bob's tradebot to enter BOB_REFUNDED mode instead of redeeming the LTC. This workaround treats "TRADING" as "REDEEMED" as long as the AT is finished. It will still enter the BOB_REFUNDED state if the AT's trade state is "REFUNDED" or "CANCELLED", to prevent it trying to redeem LTC without the secret. Longer term we need to prevent the AT itself from getting in this state to begin with, but this should at least solve the LTC redemption problem that occurs as a result. 2021-06-05 12:31:49 +01:00
CalDescent
1648a74ed7 Removed code which auto deletes trade bot data if it can't locate the AT after 24 hours. It's not a good idea to ever delete trade bot data, since it can contain private keys necessary to redeem or refund LTC. We have seen at least one instance of this where the trade bot data was deleted for an active trade. We still have the auto backup in these cases, so the keys are recoverable, but it's safest to avoid any auto deletions. 2021-06-05 11:25:05 +01:00
CalDescent
c443187d0b Reduce log spam by logging the total number of expired unconfirmed transactions that are deleted, rather than each one individually. The individual deletion logs have been moved from INFO to DEBUG. 2021-06-01 20:06:37 +01:00
CalDescent
8c305d8390 Reduced log levels of recent synchronizer / controller log additions from INFO to DEBUG.
These are no longer necessary now that we have achieved good stability in the network.
2021-06-01 08:39:38 +01:00
CalDescent
0345c5c03b Updated AdvancedInstaller project for v1.5.3 2021-05-31 21:27:42 +01:00
CalDescent
cc6ac4c9d9 Bump version to 1.5.3 2021-05-31 17:36:21 +01:00
CalDescent
815934ff5c Added GET /crosschain/htlc/redeemAll/LITECOIN API
This loops through all sell orders and attempts to redeem the LTC from each one. It will return true if at least one was redeemed, or false if none are available to be redeemed. Details are logged to the log.txt file rather than returned in the API response.
2021-05-29 19:43:08 +01:00
CalDescent
5a84016a91 Merge pull request #39 from szisti/networking
Code formatting, connection age, and logging changes for networking
2021-05-28 10:09:07 +01:00
Istvan Szabo
bb0269f484 Converted time format 2021-05-28 08:53:01 +01:00
Istvan Szabo
1adc9349fc Added connection age to connected peers dto 2021-05-28 08:04:57 +01:00
Istvan Szabo
06215c83f2 Reduced log levels 2021-05-27 10:48:17 +01:00
Istvan Szabo
8a828137ee Removed code coverage report as it seems to conflict with tests randomly 2021-05-27 09:52:33 +01:00
Istvan Szabo
de4b1c8f09 Removed missed functional change 2021-05-27 09:15:32 +01:00
Istvan Szabo
265d40f04a Code formatting and logging changes for networking 2021-05-27 09:03:18 +01:00
szisti
b64e52c0c0 Automated testing (#38)
* added basic workflow

* Testing workflow

* renamed workflow file

* Disabled extremely slow test

* Disabled currently failing tests

* Added jacoco and updated workflow

* We cannot run gui tests headless

* Fixed jacoco configuration

* Updated job name in the workflow

* Adjusting workflow

* Testing maven caching

* Added logging for one of the jacoco related issues

* Updated coverage logging

Co-authored-by: Istvan Szabo <istvan.szabo@betvictor.com>
2021-05-26 11:27:46 +01:00
CalDescent
ac02e5c0a6 Merge pull request #37 from szisti/fix-cross-transaction-display
Fix cross chain transaction display
2021-05-26 08:39:51 +01:00
Istvan Szabo
427a415fbf Adjusted bitcoiny to convert transaction info into the new DTO 2021-05-25 23:57:54 +01:00
Istvan Szabo
9a3414aaa7 Added new DTO to store the data 2021-05-25 23:55:12 +01:00
CalDescent
c8897ecf9b Rewrite of HSQLDBATRepository.getBlockATStatesAtHeight() SQL query
The previous query was taking almost half a second to run each time, whereas the new version runs 10-100x faster. This was the main bottleneck with block serialization and should therefore allow for much faster syncing once rolled out to the network. Tested several thousand blocks to ensure that the results returned by the new query match those returned by the old one.
2021-05-24 19:52:20 +01:00
CalDescent
2c8b94d469 Always use the org.qortal.utils.Base58 implementation
A couple of classes were using the bitcoinj alternative, which is twice as slow. This mostly affected the API on port 12392, as byte arrays were automatically encoded as base58 strings via the Base58TypeAdapter / JAXB package-info.
2021-05-24 19:38:01 +01:00
CalDescent
36c1cfae51 Log the P2SH address when redeeming or refunding LTC via the API. 2021-05-24 19:00:04 +01:00
CalDescent
41ad78750e Don't allow QORT addresses to be used as the receiving address when redeeming LTC
This is probably more validation than is actually needed, but given that we use the same field for LTC and QORT receiving addresses in the database, it is best to be extra careful.
2021-05-24 18:59:41 +01:00
CalDescent
3eaa4d5b38 Added /crosschain/htlc/refund/LITECOIN/{ataddress}/{receivingAddress} API
This is the same as the /crosschain/htlc/refund/LITECOIN/{ataddress} API, but allows a custom destination address to be specified.
2021-05-23 18:52:03 +01:00
CalDescent
35176f9550 Added other files to .gitignore 2021-05-23 16:57:09 +01:00
CalDescent
eb2c7268ea Removed .DS_Store files. 2021-05-23 15:31:26 +01:00
CalDescent
80311355ae Added /blocks/signature/{signature}/data API
This returns serialized, base58 encoded data for the entire block. It is the same format as the data sent between nodes when synchronizing, with base58 encoding added so that it can be outputted cleanly in the API response.
2021-05-23 13:10:47 +01:00
CalDescent
39d1590ace Improved descriptions of the new API endpoints. 2021-05-22 14:16:14 +01:00
CalDescent
0b36b650a4 Added /redeem/LITECOIN/{ataddress} API
This is the equivalent of the refund API but can be used by the seller to redeem LTC from a stuck transaction, by supplying the associated AT address. There are no lockTime requirements; it is redeemable as soon as the buyer has redeemed the QORT and sent the secret to the seller.
2021-05-22 13:59:00 +01:00
CalDescent
39575e8542 Added /refund/LITECOIN/{ataddress} API
This is designed to be called by the buyer, and will force refund their P2SH transaction associated with the supplied AT. The tradebot responsible for this trade must be present in the user's db for this API to access the necessary data. It must be called after lockTime has passed, which for LTC is currently 60 minutes from the time that the P2SH was funded. Trying to refund before this time will result in a FOREIGN_BLOCKCHAIN_TOO_SOON error.
2021-05-22 10:09:28 +01:00
CalDescent
326ef498b0 Added /crosschain/htlc/redeem/LITECOIN/{ataddress}/{tradePrivateKey}/{secret}/{receivingAddress} API
This can currently be used by either the buyer or the seller, but it requires the seller's trade private key & receiving address to be specified, along with the buyer's secret. Currently hardcoded to LITECOIN but I will aim to make this generic as we start adding more coins.
2021-05-22 09:51:57 +01:00
CalDescent
5148bad82e /crosschain/htlc APIs now take base58 encoded params instead of hex.
This makes them more compatible with the output of the /crosschain/tradebot and /crosschain/trade/{ataddress} APIs which is likely where most people will be retrieving data from, rather than the database itself.
2021-05-20 09:20:14 +01:00
CalDescent
518f02472f Added POST /crosschain/LitecoinACCTv1/redeemmessage API
This is similar to the BTC equivalent, but removes secretB as an input parameter. It also signs and broadcasts the transaction, because the wallet isn't needed for this. These transactions have to be signed using the tradePrivateKey from the tradebot data rather than any of the wallet's keys.

There are two other LitecoinACCTv1 APIs still to implement, but I will leave these until they are needed.
2021-05-20 07:59:19 +01:00
CalDescent
ee5a132eb2 Updated AdvancedInstaller project for v1.5.2 2021-05-17 20:31:28 +01:00
CalDescent
654dc5bff3 Bump version to 1.5.2 2021-05-17 17:02:38 +01:00
CalDescent
13dcf7f72a Added/updated some comments relating to a possible future optimization. 2021-05-16 11:03:11 +01:00
CalDescent
65c26f17df Reduced "Error while trying to find common block with peer" log from INFO to DEBUG when determining which peer to sync with. When performing the actual synchronization, use INFO logging as this is a more serious error. 2021-05-16 10:45:40 +01:00
CalDescent
3bedba71d5 Reduced frequency and level of some synchronizer logs. 2021-05-16 10:36:41 +01:00
CalDescent
1ba64d9745 Bumped bitcoinj version from 0.15.6 to 0.15.10 2021-05-16 10:00:28 +01:00
CalDescent
84bf570243 Added optional "maxtrades" parameter to /crosschain/price/{blockchain} API
This specifies the maximum number of trades to be used when calculating the price. Default: 10
2021-05-16 09:51:11 +01:00
CalDescent
28d50bccf9 Exclude peers if we don't have a complete set of their block summaries.
This tightens up the decision making by adding two requirements:

1. The peer must return the same number of summaries to the ones requested.
2. The peer must return a summary that matches its latest reported signature.

This ensures we are always making sync decisions based on accurate data, and removes peers that are currently mid re-org. This is probably more validation than is actually necessary, but it's best to be really thorough here so it is as optimized as possible.
2021-05-16 09:15:37 +01:00
CalDescent
66711c2e9d Require a complete sync in syncToPeerChain()
We have gone backwards and forwards on this one a lot recently, but now that stability has returned, it is best to tighten this up. Previously it was loosened to help reduce network load, but that is no longer a problem. With this stricter approach, it should prevent a node ending up in an incomplete state after syncing, which is the main cause of the shorter re-orgs we are seeing.
2021-05-16 08:45:23 +01:00
CalDescent
92d8c37d7d Added AT count to block debug logs. 2021-05-15 12:54:46 +01:00
CalDescent
5824f75669 Rework of the repository export and import functions.
The existing HSQL export/import (PERFORM EXPORT SCRIPT and PERFORM IMPORT SCRIPT) have been replaced with a custom JSON import and export. Whilst this is less generic, it has some significant advantages:

- When exporting data, it is now able to combine the exported data with any data that already exists in the backup file. This prevents a backup after a bootstrap from overwriting data from before the bootstrap, and removes the need for all of the "archive" files that we currently create.
- Adds support for partial imports, and updates. Previously an import would fail if any of the data being imported already existed in the db. It will now add new rows and update existing ones.
- The format and contents of the exported trade bot data now matches the output of the /crosschain/tradebot API.
- Data is retrieved without the need for a database lock, and therefore the export process is much faster and less invasive. This should prevent the lockups and other problems seen when using the trade portal.

For now, there are a couple of trade-offs to using this new approach:
- The minting key import/export has been temporarily removed until there is more time to transition it to this new format.
- Existing .script backups can no longer be imported using versions higher than 1.5.1.

Both of these can be solved by temporarily running version 1.5.1, performing the necessary imports/exports, then returning to the latest version. Longer term the minting keys export/import will be reimplemented using the JSON format.
2021-05-15 12:19:15 +01:00
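A hedged sketch of the "combine with existing backup" behaviour described above, written against org.json (added as a dependency in the following commit). The atAddress key and the merge logic are assumptions for illustration; the real format matches the /crosschain/tradebot API output, and the backup path matches the default introduced in commit ebc3db8aed.

```java
import org.json.JSONArray;
import org.json.JSONObject;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TradeBotBackupSketch {

    public static void export(JSONArray currentTradeBotStates) throws Exception {
        Path backupPath = Paths.get("qortal-backup", "TradeBotStates.json");

        // Start from whatever is already backed up, so a post-bootstrap export
        // doesn't wipe out pre-bootstrap entries.
        JSONArray combined = Files.exists(backupPath)
                ? new JSONArray(Files.readString(backupPath))
                : new JSONArray();

        // Add new entries, updating any that already exist (matched on "atAddress" - assumed key).
        for (int i = 0; i < currentTradeBotStates.length(); i++) {
            replaceOrAdd(combined, currentTradeBotStates.getJSONObject(i));
        }

        Files.createDirectories(backupPath.getParent());
        Files.writeString(backupPath, combined.toString(2));
    }

    private static void replaceOrAdd(JSONArray combined, JSONObject entry) {
        String atAddress = entry.optString("atAddress");
        for (int i = 0; i < combined.length(); i++) {
            if (atAddress.equals(combined.getJSONObject(i).optString("atAddress"))) {
                combined.put(i, entry); // update existing row
                return;
            }
        }
        combined.put(entry); // add new row
    }
}
```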
CalDescent
deb8adafc9 Added org.json dependency.
The com.googlecode.json-simple dependency we use in other parts of the project isn't ideal for some of the more complex parsing.
2021-05-15 09:15:29 +01:00
CalDescent
d2649b237c Moved chain weight calculation log from DEBUG to TRACE. 2021-05-11 19:01:23 +01:00
CalDescent
6532c258f6 Reduced log spam. 2021-05-10 09:10:14 +01:00
CalDescent
83e2b10904 Merge branch 'ignore-old-versions' 2021-05-10 09:01:04 +01:00
CalDescent
26c1793d85 Added "allowConnectionsWithOlderPeerVersions" setting (default: true)
This controls whether to allow connections with peers below minPeerVersion.

If true, we won't sync with them but they can still sync with us, and will show in the peers list. This is the default, which allows older nodes to continue functioning, but prevents them from interfering with the sync behaviour of updated nodes.

If false, sync will be blocked both ways, and they will not appear in the peers list at all.
2021-05-10 09:00:42 +01:00
CalDescent
23a9eea26b Merge branch 'ignore-old-versions' 2021-05-09 23:02:35 +01:00
CalDescent
af9b536dd9 Moved version check above getMinBlockchainPeers() check, so that nodes with old versions aren't counted. 2021-05-09 23:00:51 +01:00
CalDescent
e4874f86f9 Merge branch 'block-timings' of github.com:Qortal/qortal into block-timings
# Conflicts:
#	src/main/java/org/qortal/api/model/BlockMintingInfo.java
#	src/main/java/org/qortal/api/resource/BlocksResource.java
#	tools/block-timings.sh
2021-05-09 19:25:33 +01:00
CalDescent
e300a957e4 Added online accounts count to /blocks/byheight/{height}/mintinginfo API and block-timings.sh script. 2021-05-09 19:25:05 +01:00
CalDescent
1c38afcd25 Slight reordering of vars. 2021-05-09 19:24:25 +01:00
CalDescent
a06faa7685 Updated usage info to reflect the fact that the "count" parameter is optional.
Usage:

block-timings.sh <startheight> [count] [target] [deviation] [power]
2021-05-09 19:24:25 +01:00
CalDescent
019ab2b21d Added tools/block-timings.sh which can be used to test out new block timings (specified in blockchain.json).
The script will fetch a set of blocks and then backtest the specified blockTimings settings (target, deviation, and power) against those real life blocks. This allows configurations to be fine tuned to tighten up block times, and to adjust the timestamp variance between levels.

Usage:
block-timings.sh <startheight> <count> [target] [deviation] [power]

startheight: a block height, preferably within the untrimmed range, to avoid data gaps
count: the number of blocks to request and analyse after the start height. Default: 100
target: the target block time in milliseconds. Originates from blockchain.json. Default: 60000
deviation: the allowed block time deviation in milliseconds. Originates from blockchain.json. Default: 30000
power: used when transforming key distance to a time offset. Originates from blockchain.json. Default: 0.2
2021-05-09 19:24:25 +01:00
CalDescent
f6ba5f5d51 Added /blocks/byheight/{height}/mintinginfo API, which returns info on the minter level, key distance, and block timings. 2021-05-09 19:24:25 +01:00
CalDescent
c4cbb64643 Added "minPeerVersion" setting, and avoid syncing with peers on lower versions. 2021-05-09 17:38:07 +01:00
CalDescent
8260cec713 Added "maximumCount" parameter to HSQLDBATRepository.getMatchingFinalATStatesQuorum() and use it to limit the number of ATs being returned in the query.
Initially set to 10 when used by the /crosschain/price/{blockchain} API, so that the price is based on the last 10 trades rather than every trade that has ever taken place.
2021-05-09 15:56:15 +01:00
CalDescent
f4520e2752 Skip Block.logDebugInfo() altogether if the log level is more specific than DEBUG, to avoid wasting resources. 2021-05-09 09:00:53 +01:00
CalDescent
475802afbc Fixed divide by zero exception.
Block.calcKeyDistance() cannot be called on some trimmed blocks, because the minter level is unable to be inferred in some cases. This generally hasn't been an issue, but the new Block.logDebugInfo() method is invoking it for all blocks. For now I am adding defensiveness to the debug method, but longer term we might want to add defensiveness to Block.calcKeyDistance() itself, if we ever encounter this issue again. I will leave it alone for now, to reduce risk.
2021-05-09 08:25:24 +01:00
Tom
a170668d9d Updated AdvancedInstaller project for v1.5.1 2021-05-07 09:58:15 +01:00
Tom
f8dac39076 Updated AdvancedInstaller project for v1.5.0
This includes updating AdoptOpenJDK to version 11.0.11.9, because 11.0.6.10 is no longer recommended or available in their archive. It also looks like I am using a newer version of AdvancedInstaller itself.
2021-05-07 09:40:38 +01:00
CalDescent
fe4ae61552 Added "maxRetries" setting.
This controls the maximum number of retry attempts if a peer fails to respond with the requested data.
2021-05-06 17:49:45 +01:00
CalDescent
0c3597f757 Bump version to 1.5.1 2021-05-05 18:41:05 +01:00
CalDescent
6109bdeafe Set go-live timestamp for same-length chain weight consensus: 1620579600000 2021-05-05 18:40:07 +01:00
CalDescent
6e9a61c4e5 Fixed logging issue where it would underreport the number of common blocks found when loading some from the cache. 2021-05-02 20:51:53 +01:00
CalDescent
8e244fd956 Fixed yet another bug with minChainLength. 2021-05-02 20:45:20 +01:00
CalDescent
2eb6771963 Adapted logging in comparePeers() to report correct values for both chain weight algorithms. 2021-05-02 20:26:51 +01:00
CalDescent
db77108054 Log the number of blocks used in Block.calcChainWeight()
This makes it easier to check that the new consensus code is being used, and that it is working correctly.
2021-05-02 19:59:32 +01:00
CalDescent
241e2bef85 Merge branch 'master' into chain-weight-consensus
# Conflicts:
#	src/main/java/org/qortal/block/BlockChain.java
#	src/main/resources/blockchain.json
#	src/test/resources/test-chain-v2-founder-rewards.json
#	src/test/resources/test-chain-v2-leftover-reward.json
#	src/test/resources/test-chain-v2-minting.json
#	src/test/resources/test-chain-v2-qora-holder-extremes.json
#	src/test/resources/test-chain-v2-qora-holder.json
#	src/test/resources/test-chain-v2-reward-scaling.json
#	src/test/resources/test-chain-v2.json
2021-05-02 18:18:20 +01:00
CalDescent
fac02dbc7d Fixed bug in maxHeight parameter passed to Block.calcChainWeight()
Like the others, this one is only relevant after switching to same-length chain weight comparisons.
2021-05-02 15:56:13 +01:00
CalDescent
9ebcd55ff5 Fixed calculation error in existing chain weight code, which would have caused the last block to be missed out of the comparison after switching to same-length chain comparisons. 2021-05-01 13:34:13 +01:00
CalDescent
50244c1c40 Fixed bug which would cause other peers to not be compared against each other, if we had no blocks ourselves.
Again, this wouldn't have affected anything in 1.5.0 or before, but it will become more significant if we switch to same-length chain weight comparisons.
2021-05-01 13:32:16 +01:00
CalDescent
b4395fdad1 Fixed bug which could cause minChainLength to report a higher value.
This wouldn't have affected anything in 1.5.0, but it will become more significant if we switch to same-length chain weight comparisons.
2021-05-01 10:57:24 +01:00
CalDescent
1da8994be7 Log the block timestamp, minter level, online accounts, key distance, and weight, when orphaning or processing.
This gives an insight into the contents of each chain when doing a re-org. To enable this logging, add the following to log4j2.properties:

logger.block.name = org.qortal.block.Block
logger.block.level = debug
2021-05-01 10:24:50 +01:00
QuickMythril
55ff1e2bb1 updated and tested BTC electrum servers (#36)
* updated electrum servers

mainnet list: https://1209k.com/bitcoin-eye/ele.php?chain=btc
testnet list: https://1209k.com/bitcoin-eye/ele.php?chain=tbtc

* removed servers

tested each mainnet server individually and removed those that did not respond
2021-05-01 09:18:46 +01:00
CalDescent
5fd8528c49 Small refactor for code readability, and added some defensiveness to avoid possible NPEs. 2021-04-29 09:04:59 +01:00
CalDescent
26d8ed783a Same as commit c0c5bf1, but for blocks as well as block summaries. 2021-04-29 08:55:16 +01:00
CalDescent
c0c5bf1591 Apply blocks in syncToPeerChain() if the latest received block is newer than our latest, and we started from an out of date chain.
This solves a common problem that is mostly seen when starting a node that has been switched off for some time, or when starting from a bootstrap. In these cases, it can be difficult to get synced to the latest if you are starting from a small fork. This is because it required that the node was brought up to date via a single peer, and there wasn't much room for error if it failed to retrieve a block a couple of times. This generally caused the blocks to be thrown away and it would try the same process over and over.

The solution is to apply new blocks if the most recently received block is newer than our current latest block. This gets the node back on to the main fork where it can then sync using the regular applyNewBlocks() method.
2021-04-28 22:03:13 +01:00
CalDescent
c17a481b74 Bump version to 1.5.0 2021-04-26 18:34:01 +01:00
CalDescent
a9a0e69ec0 Set go-live block height for share bin fix: block 399000 2021-04-26 17:19:39 +01:00
CalDescent
ea1fed2fd3 Merge branch 'block-reward-distribution-fix' 2021-04-26 17:16:14 +01:00
CalDescent
45efe7cd56 Slight reordering of vars. 2021-04-10 18:24:33 +01:00
CalDescent
78cac7f0e6 Updated usage info to reflect the fact that the "count" parameter is optional.
Usage:

block-timings.sh <startheight> [count] [target] [deviation] [power]
2021-04-10 18:12:09 +01:00
CalDescent
a1a1b8e94a Added tools/block-timings.sh which can be used to test out new block timings (specified in blockchain.json).
The script will fetch a set of blocks and then backtest the specified blockTimings settings (target, deviation, and power) against those real life blocks. This allows configurations to be fine tuned to tighten up block times, and to adjust the timestamp variance between levels.

Usage:
block-timings.sh <startheight> <count> [target] [deviation] [power]

startheight: a block height, preferably within the untrimmed range, to avoid data gaps
count: the number of blocks to request and analyse after the start height. Default: 100
target: the target block time in milliseconds. Originates from blockchain.json. Default: 60000
deviation: the allowed block time deviation in milliseconds. Originates from blockchain.json. Default: 30000
power: used when transforming key distance to a time offset. Originates from blockchain.json. Default: 0.2
2021-04-10 17:57:28 +01:00
CalDescent
641a658059 Added /blocks/byheight/{height}/mintinginfo API, which returns info on the minter level, key distance, and block timings. 2021-04-10 17:49:04 +01:00
CalDescent
16453ed602 Added unit tests for level 3+4, 5+6, 7+8, and 9+10 rewards.
These are simpler than the level 1+2 tests; they only test that the rewards are correct for each level post-shareBinFix. I don't think we need multiple instances of the pre-shareBinFix or block orphaning tests. There are a few subtle differences between each test, such as the online status of Bob, in order to make the tests slightly more comprehensive.
2021-03-17 08:50:53 +00:00
CalDescent
fde68dc598 Added unit test to test level 1 and 2 rewards.
1. Assign 3 minters (one founder, one level 1, one level 2)
2. Mint a block after the shareBinFix, ensuring that level 1 and 2 are being rewarded evenly from the same share bin.
3. Orphan the block and ensure the rewards are reversed.
4. Orphan two more blocks, each time checking that the balances are being reduced in accordance with the pre-shareBinFix mapping.
2021-03-16 09:11:49 +00:00
CalDescent
847e81e95c Fixed a mapping issue in Block->getShareBins(), to take effect at some future (undecided) height.
Post trigger, account levels will map correctly to share bins, subtracting 1 to account for the 0th element of the shareBinsByLevel array.
Pre-trigger, the legacy mapping will remain in effect.
2021-03-12 19:48:49 +00:00
catbref
1e6e5e66da Fix trailing comma on blockchain.json! 2021-02-06 12:09:24 +00:00
catbref
9b0e88ca87 Only compare same number of blocks when comparing peer chains 2021-02-06 11:40:29 +00:00
catbref
3acc0babb7 More chain-weight tests 2021-02-06 11:19:39 +00:00
catbref
a9c7142d7b Speed up AT states reshape 2020-11-10 16:46:07 +00:00
catbref
7a40c3526f Bugfixes and tests for SLEEP_UNTIL_MESSAGE 2020-11-10 15:44:47 +00:00
catbref
3253d9d3fb WIP: initial implementation of AT sleep-until-message (untested) 2020-11-10 15:44:47 +00:00
156 changed files with 11630 additions and 5622 deletions

.DS_Store vendored (binary file, not shown)

33
.github/workflows/pr-testing.yml vendored Normal file

@@ -0,0 +1,33 @@
name: PR testing
on:
pull_request:
branches: [ master ]
jobs:
mavenTesting:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Cache local Maven repository
uses: actions/cache@v2
with:
path: ~/.m2/repository
key: ${{ runner.os }}-maven-${{ hashFiles('**/pom.xml') }}
restore-keys: |
${{ runner.os }}-maven-
- name: Set up the Java JDK
uses: actions/setup-java@v2
with:
java-version: '11'
distribution: 'adopt'
- name: Run all tests
run: |
mvn -B clean test -DskipTests=false --file pom.xml
if [ -f "target/site/jacoco/index.html" ]; then echo "Total coverage: $(cat target/site/jacoco/index.html | grep -o 'Total[^%]*%' | grep -o '[0-9]*%')"; fi
- name: Log coverage percentage
run: |
if [ ! -f "target/site/jacoco/index.html" ]; then echo "No coverage information available"; fi
if [ -f "target/site/jacoco/index.html" ]; then echo "Total coverage: $(cat target/site/jacoco/index.html | grep -o 'Total[^%]*%' | grep -o '[0-9]*%')"; fi

13
.gitignore vendored

@@ -1,6 +1,8 @@
/db*
/lists/
/bin/
/target/
/qortal-backup/
/log.txt.*
/arbitrary*
/Qortal-BTC*
@@ -14,8 +16,13 @@
/settings.json
/testnet*
/settings*.json
/testchain.json
/run-testnet.sh
/testchain*.json
/run-testnet*.sh
/.idea
/qortal.iml
*.DS_Store
.DS_Store
/src/main/resources/resources
/*.jar
/run.pid
/run.log
/WindowsInstaller/Install Files/qortal.jar

File diff suppressed because it is too large.


@@ -12,7 +12,7 @@ configured paths, or create a dummy `D:` drive with the expected layout.
Typical build procedure:
* Overwrite the `qortal.jar` file in `Install-Files\`
* Place the `qortal.jar` file in `Install-Files\`
* Open AdvancedInstaller with qortal.aip file
* If releasing a new version, change version number in:
+ "Product Information" side menu

WindowsInstaller/qortal.ico Executable file → Normal file (binary file, not shown)

Before: 250 KiB → After: 42 KiB


@@ -3,12 +3,12 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.qortal</groupId>
<artifactId>qortal</artifactId>
<version>1.4.6</version>
<version>1.6.0</version>
<packaging>jar</packaging>
<properties>
<skipTests>true</skipTests>
<altcoinj.version>bf9fb80</altcoinj.version>
<bitcoinj.version>0.15.6</bitcoinj.version>
<bitcoinj.version>0.15.10</bitcoinj.version>
<bouncycastle.version>1.64</bouncycastle.version>
<build.timestamp>${maven.build.timestamp}</build.timestamp>
<ciyam-at.version>1.3.8</ciyam-at.version>
@@ -439,6 +439,11 @@
<artifactId>json-simple</artifactId>
<version>1.1.1</version>
</dependency>
<dependency>
<groupId>org.json</groupId>
<artifactId>json</artifactId>
<version>20210307</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-text</artifactId>

src/.DS_Store vendored (binary file, not shown)

src/main/.DS_Store vendored (binary file, not shown)


@@ -129,7 +129,10 @@ public enum ApiError {
// Foreign blockchain
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE(1201, 500),
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE(1202, 402),
FOREIGN_BLOCKCHAIN_TOO_SOON(1203, 408);
FOREIGN_BLOCKCHAIN_TOO_SOON(1203, 408),
// Trade portal
ORDER_SIZE_TOO_SMALL(1300, 402);
private static final Map<Integer, ApiError> map = stream(ApiError.values()).collect(toMap(apiError -> apiError.code, apiError -> apiError));
@@ -157,4 +160,4 @@ public enum ApiError {
return this.status;
}
}
}


@@ -2,7 +2,7 @@ package org.qortal.api;
import javax.xml.bind.annotation.adapters.XmlAdapter;
import org.bitcoinj.core.Base58;
import org.qortal.utils.Base58;
public class Base58TypeAdapter extends XmlAdapter<String, byte[]> {


@@ -0,0 +1,18 @@
package org.qortal.api.model;
import io.swagger.v3.oas.annotations.media.Schema;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.util.List;
@XmlAccessorType(XmlAccessType.FIELD)
public class AddressListRequest {
@Schema(description = "A list of addresses")
public List<String> addresses;
public AddressListRequest() {
}
}


@@ -0,0 +1,23 @@
package org.qortal.api.model;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.math.BigDecimal;
import java.math.BigInteger;
@XmlAccessorType(XmlAccessType.FIELD)
public class BlockMintingInfo {
public byte[] minterPublicKey;
public int minterLevel;
public int onlineAccountsCount;
public BigDecimal maxDistance;
public BigInteger keyDistance;
public double keyDistanceRatio;
public long timestamp;
public long timeDelta;
public BlockMintingInfo() {
}
}


@@ -1,61 +1,74 @@
package org.qortal.api.model;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import io.swagger.v3.oas.annotations.media.Schema;
import org.qortal.data.network.PeerChainTipData;
import org.qortal.data.network.PeerData;
import org.qortal.network.Handshake;
import org.qortal.network.Peer;
import io.swagger.v3.oas.annotations.media.Schema;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.util.UUID;
import java.util.concurrent.TimeUnit;
@XmlAccessorType(XmlAccessType.FIELD)
public class ConnectedPeer {
public enum Direction {
INBOUND,
OUTBOUND;
}
public Direction direction;
public Handshake handshakeStatus;
public Long lastPing;
public Long connectedWhen;
public Long peersConnectedWhen;
public enum Direction {
INBOUND,
OUTBOUND;
}
public String address;
public String version;
public Direction direction;
public Handshake handshakeStatus;
public Long lastPing;
public Long connectedWhen;
public Long peersConnectedWhen;
public String nodeId;
public String address;
public String version;
public Integer lastHeight;
@Schema(example = "base58")
public byte[] lastBlockSignature;
public Long lastBlockTimestamp;
public String nodeId;
protected ConnectedPeer() {
}
public Integer lastHeight;
@Schema(example = "base58")
public byte[] lastBlockSignature;
public Long lastBlockTimestamp;
public UUID connectionId;
public String age;
public ConnectedPeer(Peer peer) {
this.direction = peer.isOutbound() ? Direction.OUTBOUND : Direction.INBOUND;
this.handshakeStatus = peer.getHandshakeStatus();
this.lastPing = peer.getLastPing();
protected ConnectedPeer() {
}
PeerData peerData = peer.getPeerData();
this.connectedWhen = peer.getConnectionTimestamp();
this.peersConnectedWhen = peer.getPeersConnectionTimestamp();
public ConnectedPeer(Peer peer) {
this.direction = peer.isOutbound() ? Direction.OUTBOUND : Direction.INBOUND;
this.handshakeStatus = peer.getHandshakeStatus();
this.lastPing = peer.getLastPing();
this.address = peerData.getAddress().toString();
PeerData peerData = peer.getPeerData();
this.connectedWhen = peer.getConnectionTimestamp();
this.peersConnectedWhen = peer.getPeersConnectionTimestamp();
this.version = peer.getPeersVersionString();
this.nodeId = peer.getPeersNodeId();
this.address = peerData.getAddress().toString();
PeerChainTipData peerChainTipData = peer.getChainTipData();
if (peerChainTipData != null) {
this.lastHeight = peerChainTipData.getLastHeight();
this.lastBlockSignature = peerChainTipData.getLastBlockSignature();
this.lastBlockTimestamp = peerChainTipData.getLastBlockTimestamp();
}
}
this.version = peer.getPeersVersionString();
this.nodeId = peer.getPeersNodeId();
this.connectionId = peer.getPeerConnectionId();
if (peer.getConnectionEstablishedTime() > 0) {
long age = (System.currentTimeMillis() - peer.getConnectionEstablishedTime());
long minutes = TimeUnit.MILLISECONDS.toMinutes(age);
long seconds = TimeUnit.MILLISECONDS.toSeconds(age) - TimeUnit.MINUTES.toSeconds(minutes);
this.age = String.format("%dm %ds", minutes, seconds);
} else {
this.age = "connecting...";
}
PeerChainTipData peerChainTipData = peer.getChainTipData();
if (peerChainTipData != null) {
this.lastHeight = peerChainTipData.getLastHeight();
this.lastBlockSignature = peerChainTipData.getLastBlockSignature();
this.lastBlockTimestamp = peerChainTipData.getLastBlockTimestamp();
}
}
}
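For reference, the new age field formats connection uptime as minutes and seconds. A small worked example (not from the commits above) of the same arithmetic, using an illustrative 125-second uptime:
import java.util.concurrent.TimeUnit;
public class ConnectionAgeExample {
public static void main(String[] args) {
long ageMs = 125_000L; // illustrative: connection established 125 seconds ago
long minutes = TimeUnit.MILLISECONDS.toMinutes(ageMs); // 2
long seconds = TimeUnit.MILLISECONDS.toSeconds(ageMs) - TimeUnit.MINUTES.toSeconds(minutes); // 5
System.out.println(String.format("%dm %ds", minutes, seconds)); // "2m 5s"
}
}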

View File

@@ -0,0 +1,29 @@
package org.qortal.api.model;
import io.swagger.v3.oas.annotations.media.Schema;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
@XmlAccessorType(XmlAccessType.FIELD)
public class CrossChainDualSecretRequest {
@Schema(description = "Public key to match AT's trade 'partner'", example = "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry")
public byte[] partnerPublicKey;
@Schema(description = "Qortal AT address")
public String atAddress;
@Schema(description = "secret-A (32 bytes)", example = "FHMzten4he9jZ4HGb4297Utj6F5g2w7serjq2EnAg2s1")
public byte[] secretA;
@Schema(description = "secret-B (32 bytes)", example = "EN2Bgx3BcEMtxFCewmCVSMkfZjVKYhx3KEXC5A21KBGx")
public byte[] secretB;
@Schema(description = "Qortal address for receiving QORT from AT")
public String receivingAddress;
public CrossChainDualSecretRequest() {
}
}

View File

@@ -8,17 +8,14 @@ import io.swagger.v3.oas.annotations.media.Schema;
@XmlAccessorType(XmlAccessType.FIELD)
public class CrossChainSecretRequest {
@Schema(description = "Public key to match AT's trade 'partner'", example = "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry")
public byte[] partnerPublicKey;
@Schema(description = "Private key to match AT's trade 'partner'", example = "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry")
public byte[] partnerPrivateKey;
@Schema(description = "Qortal AT address")
public String atAddress;
@Schema(description = "secret-A (32 bytes)", example = "FHMzten4he9jZ4HGb4297Utj6F5g2w7serjq2EnAg2s1")
public byte[] secretA;
@Schema(description = "secret-B (32 bytes)", example = "EN2Bgx3BcEMtxFCewmCVSMkfZjVKYhx3KEXC5A21KBGx")
public byte[] secretB;
@Schema(description = "Secret (32 bytes)", example = "FHMzten4he9jZ4HGb4297Utj6F5g2w7serjq2EnAg2s1")
public byte[] secret;
@Schema(description = "Qortal address for receiving QORT from AT")
public String receivingAddress;

View File

@@ -25,6 +25,12 @@ public class CrossChainTradeSummary {
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
private long foreignAmount;
private String atAddress;
private String sellerAddress;
private String buyerReceivingAddress;
protected CrossChainTradeSummary() {
/* For JAXB */
}
@@ -34,6 +40,9 @@ public class CrossChainTradeSummary {
this.qortAmount = crossChainTradeData.qortAmount;
this.foreignAmount = crossChainTradeData.expectedForeignAmount;
this.btcAmount = this.foreignAmount;
this.sellerAddress = crossChainTradeData.qortalCreator;
this.buyerReceivingAddress = crossChainTradeData.qortalPartnerReceivingAddress;
this.atAddress = crossChainTradeData.qortalAtAddress;
}
public long getTradeTimestamp() {
@@ -48,7 +57,11 @@ public class CrossChainTradeSummary {
return this.btcAmount;
}
public long getForeignAmount() {
return this.foreignAmount;
}
public long getForeignAmount() { return this.foreignAmount; }
public String getAtAddress() { return this.atAddress; }
public String getSellerAddress() { return this.sellerAddress; }
public String getBuyerReceivingAddress() { return this.buyerReceivingAddress; }
}

View File

@@ -0,0 +1,29 @@
package org.qortal.api.model.crosschain;
import io.swagger.v3.oas.annotations.media.Schema;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
@XmlAccessorType(XmlAccessType.FIELD)
public class DogecoinSendRequest {
@Schema(description = "Dogecoin BIP32 extended private key", example = "tprv___________________________________________________________________________________________________________")
public String xprv58;
@Schema(description = "Recipient's Dogecoin address ('legacy' P2PKH only)", example = "DoGecoinEaterAddressDontSendhLfzKD")
public String receivingAddress;
@Schema(description = "Amount of DOGE to send", type = "number")
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
public long dogecoinAmount;
@Schema(description = "Transaction fee per byte (optional). Default is 0.00000100 DOGE (100 sats) per byte", example = "0.00000100", type = "number")
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
public Long feePerByte;
public DogecoinSendRequest() {
}
}
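For reference, feePerByte is expressed in the same 1e-8 units as the amounts. A rough fee estimate (not from the commits above) under the default rate; the transaction size is an assumed typical figure, not taken from the code:
public class DogeFeeEstimateExample {
public static void main(String[] args) {
long feePerByte = 100L; // default: 0.00000100 DOGE per byte, as described above
int assumedTxSizeBytes = 226; // assumed typical 1-input, 2-output P2PKH transaction
long estimatedFee = feePerByte * assumedTxSizeBytes;
System.out.println(estimatedFee); // 22600, i.e. 0.00022600 DOGE
}
}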

View File

@@ -542,19 +542,8 @@ public class AdminResource {
Security.checkApiCallAllowed(request);
try (final Repository repository = RepositoryManager.getRepository()) {
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
blockchainLock.lockInterruptibly();
try {
repository.exportNodeLocalData(true);
return "true";
} finally {
blockchainLock.unlock();
}
} catch (InterruptedException e) {
// We couldn't lock blockchain to perform export
return "false";
repository.exportNodeLocalData();
return "true";
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -564,13 +553,13 @@ public class AdminResource {
@Path("/repository/data")
@Operation(
summary = "Import data into repository.",
description = "Imports data from file on local machine. Filename is forced to 'import.script' if apiKey is not set.",
description = "Imports data from file on local machine. Filename is forced to 'qortal-backup/TradeBotStates.json' if apiKey is not set.",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.TEXT_PLAIN,
schema = @Schema(
type = "string", example = "MintingAccounts.script"
type = "string", example = "qortal-backup/TradeBotStates.json"
)
)
),
@@ -588,7 +577,7 @@ public class AdminResource {
// Hard-coded because it's too dangerous to allow user-supplied filenames in weaker security contexts
if (Settings.getInstance().getApiKey() == null)
filename = "import.script";
filename = "qortal-backup/TradeBotStates.json";
try (final Repository repository = RepositoryManager.getRepository()) {
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();

View File

@@ -1,5 +1,6 @@
package org.qortal.api.resource;
import com.google.common.primitives.Ints;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.Parameter;
import io.swagger.v3.oas.annotations.media.ArraySchema;
@@ -8,6 +9,11 @@ import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.RoundingMode;
import java.util.ArrayList;
import java.util.List;
@@ -20,10 +26,13 @@ import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import org.qortal.account.Account;
import org.qortal.api.ApiError;
import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.model.BlockMintingInfo;
import org.qortal.api.model.BlockSignerSummary;
import org.qortal.block.Block;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.AccountData;
import org.qortal.data.block.BlockData;
@@ -32,6 +41,8 @@ import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformer;
import org.qortal.utils.Base58;
@Path("/blocks")
@@ -80,6 +91,48 @@ public class BlocksResource {
}
}
@GET
@Path("/signature/{signature}/data")
@Operation(
summary = "Fetch serialized, base58 encoded block data using base58 signature",
description = "Returns serialized data for the block that matches the given signature",
responses = {
@ApiResponse(
description = "the block data",
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "string"))
)
}
)
@ApiErrors({
ApiError.INVALID_SIGNATURE, ApiError.BLOCK_UNKNOWN, ApiError.INVALID_DATA, ApiError.REPOSITORY_ISSUE
})
public String getSerializedBlockData(@PathParam("signature") String signature58) {
// Decode signature
byte[] signature;
try {
signature = Base58.decode(signature58);
} catch (NumberFormatException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_SIGNATURE, e);
}
try (final Repository repository = RepositoryManager.getRepository()) {
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
Block block = new Block(repository, blockData);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bytes.write(Ints.toByteArray(block.getBlockData().getHeight()));
bytes.write(BlockTransformer.toBytes(block));
return Base58.encode(bytes.toByteArray());
} catch (TransformationException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA, e);
} catch (DataException | IOException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
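// For reference (editor's sketch, not from the commits above): the response payload is
// Base58(4-byte big-endian height || serialized block bytes), so a client can recover the
// height with the same Guava helper. The method name below is hypothetical.
private static int parseHeightFromSerializedBlockData(String base58Payload) {
byte[] decoded = Base58.decode(base58Payload);
// Ints.fromByteArray() reads the first four big-endian bytes, mirroring Ints.toByteArray() above
return Ints.fromByteArray(decoded);
}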
@GET
@Path("/signature/{signature}/transactions")
@Operation(
@@ -328,6 +381,59 @@ public class BlocksResource {
}
}
@GET
@Path("/byheight/{height}/mintinginfo")
@Operation(
summary = "Fetch block minter info using block height",
description = "Returns the minter info for the block with given height",
responses = {
@ApiResponse(
description = "the block",
content = @Content(
schema = @Schema(
implementation = BlockData.class
)
)
)
}
)
@ApiErrors({
ApiError.BLOCK_UNKNOWN, ApiError.REPOSITORY_ISSUE
})
public BlockMintingInfo getBlockMintingInfoByHeight(@PathParam("height") int height) {
try (final Repository repository = RepositoryManager.getRepository()) {
BlockData blockData = repository.getBlockRepository().fromHeight(height);
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
Block block = new Block(repository, blockData);
BlockData parentBlockData = repository.getBlockRepository().fromSignature(blockData.getReference());
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, blockData.getMinterPublicKey());
if (minterLevel == 0)
// This may be unavailable when requesting a trimmed block
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA);
BigInteger distance = block.calcKeyDistance(parentBlockData.getHeight(), parentBlockData.getSignature(), blockData.getMinterPublicKey(), minterLevel);
double ratio = new BigDecimal(distance).divide(new BigDecimal(block.MAX_DISTANCE), 40, RoundingMode.DOWN).doubleValue();
long timestamp = block.calcTimestamp(parentBlockData, blockData.getMinterPublicKey(), minterLevel);
long timeDelta = timestamp - parentBlockData.getTimestamp();
BlockMintingInfo blockMintingInfo = new BlockMintingInfo();
blockMintingInfo.minterPublicKey = blockData.getMinterPublicKey();
blockMintingInfo.minterLevel = minterLevel;
blockMintingInfo.onlineAccountsCount = blockData.getOnlineAccountsCount();
blockMintingInfo.maxDistance = new BigDecimal(block.MAX_DISTANCE);
blockMintingInfo.keyDistance = distance;
blockMintingInfo.keyDistanceRatio = ratio;
blockMintingInfo.timestamp = timestamp;
blockMintingInfo.timeDelta = timeDelta;
return blockMintingInfo;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
@GET
@Path("/timestamp/{timestamp}")
@Operation(

View File

@@ -22,7 +22,7 @@ import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.CrossChainBuildRequest;
import org.qortal.api.model.CrossChainSecretRequest;
import org.qortal.api.model.CrossChainDualSecretRequest;
import org.qortal.api.model.CrossChainTradeRequest;
import org.qortal.asset.Asset;
import org.qortal.crosschain.BitcoinACCTv1;
@@ -242,7 +242,7 @@ public class CrossChainBitcoinACCTv1Resource {
content = @Content(
mediaType = MediaType.APPLICATION_JSON,
schema = @Schema(
implementation = CrossChainSecretRequest.class
implementation = CrossChainDualSecretRequest.class
)
)
),
@@ -257,7 +257,7 @@ public class CrossChainBitcoinACCTv1Resource {
}
)
@ApiErrors({ApiError.INVALID_PUBLIC_KEY, ApiError.INVALID_ADDRESS, ApiError.INVALID_DATA, ApiError.INVALID_CRITERIA, ApiError.REPOSITORY_ISSUE})
public String buildRedeemMessage(CrossChainSecretRequest secretRequest) {
public String buildRedeemMessage(CrossChainDualSecretRequest secretRequest) {
Security.checkApiCallAllowed(request);
byte[] partnerPublicKey = secretRequest.partnerPublicKey;

View File

@@ -23,8 +23,8 @@ import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.crosschain.BitcoinSendRequest;
import org.qortal.crosschain.Bitcoin;
import org.qortal.crosschain.BitcoinyTransaction;
import org.qortal.crosschain.ForeignBlockchainException;
import org.qortal.crosschain.SimpleTransaction;
@Path("/crosschain/btc")
@Tag(name = "Cross-Chain (Bitcoin)")
@@ -89,12 +89,12 @@ public class CrossChainBitcoinResource {
),
responses = {
@ApiResponse(
content = @Content(array = @ArraySchema( schema = @Schema( implementation = BitcoinyTransaction.class ) ) )
content = @Content(array = @ArraySchema( schema = @Schema( implementation = SimpleTransaction.class ) ) )
)
}
)
@ApiErrors({ApiError.INVALID_PRIVATE_KEY, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE})
public List<BitcoinyTransaction> getBitcoinWalletTransactions(String key58) {
public List<SimpleTransaction> getBitcoinWalletTransactions(String key58) {
Security.checkApiCallAllowed(request);
Bitcoin bitcoin = Bitcoin.getInstance();

View File

@@ -0,0 +1,140 @@
package org.qortal.api.resource;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.parameters.RequestBody;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.api.ApiError;
import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.CrossChainSecretRequest;
import org.qortal.crosschain.AcctMode;
import org.qortal.crosschain.DogecoinACCTv1;
import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATData;
import org.qortal.data.crosschain.CrossChainTradeData;
import org.qortal.group.Group;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.transaction.MessageTransaction;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.transform.Transformer;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import java.util.Arrays;
@Path("/crosschain/DogecoinACCTv1")
@Tag(name = "Cross-Chain (DogecoinACCTv1)")
public class CrossChainDogecoinACCTv1Resource {
@Context
HttpServletRequest request;
@POST
@Path("/redeemmessage")
@Operation(
summary = "Signs and broadcasts a 'redeem' MESSAGE transaction that sends secrets to AT, releasing funds to partner",
description = "Specify address of cross-chain AT that needs to be messaged, Alice's trade private key, the 32-byte secret,<br>"
+ "and an address for receiving QORT from AT. All of these can be found in Alice's trade bot data.<br>"
+ "AT needs to be in 'trade' mode. Messages sent to an AT in any other mode will be ignored, but still cost fees to send!<br>"
+ "You need to use the private key that the AT considers the trade 'partner' otherwise the MESSAGE transaction will be invalid.",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.APPLICATION_JSON,
schema = @Schema(
implementation = CrossChainSecretRequest.class
)
)
),
responses = {
@ApiResponse(
content = @Content(
schema = @Schema(
type = "string"
)
)
)
}
)
@ApiErrors({ApiError.INVALID_PUBLIC_KEY, ApiError.INVALID_ADDRESS, ApiError.INVALID_DATA, ApiError.INVALID_CRITERIA, ApiError.REPOSITORY_ISSUE})
public boolean buildRedeemMessage(CrossChainSecretRequest secretRequest) {
Security.checkApiCallAllowed(request);
byte[] partnerPrivateKey = secretRequest.partnerPrivateKey;
if (partnerPrivateKey == null || partnerPrivateKey.length != Transformer.PRIVATE_KEY_LENGTH)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PRIVATE_KEY);
if (secretRequest.atAddress == null || !Crypto.isValidAtAddress(secretRequest.atAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
if (secretRequest.secret == null || secretRequest.secret.length != DogecoinACCTv1.SECRET_LENGTH)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA);
if (secretRequest.receivingAddress == null || !Crypto.isValidAddress(secretRequest.receivingAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
try (final Repository repository = RepositoryManager.getRepository()) {
ATData atData = fetchAtDataWithChecking(repository, secretRequest.atAddress);
CrossChainTradeData crossChainTradeData = DogecoinACCTv1.getInstance().populateTradeData(repository, atData);
if (crossChainTradeData.mode != AcctMode.TRADING)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
byte[] partnerPublicKey = new PrivateKeyAccount(null, partnerPrivateKey).getPublicKey();
String partnerAddress = Crypto.toAddress(partnerPublicKey);
// MESSAGE must come from address that AT considers trade partner
if (!crossChainTradeData.qortalPartnerAddress.equals(partnerAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
// Good to make MESSAGE
byte[] messageData = DogecoinACCTv1.buildRedeemMessage(secretRequest.secret, secretRequest.receivingAddress);
PrivateKeyAccount sender = new PrivateKeyAccount(repository, partnerPrivateKey);
MessageTransaction messageTransaction = MessageTransaction.build(repository, sender, Group.NO_GROUP, secretRequest.atAddress, messageData, false, false);
messageTransaction.computeNonce();
messageTransaction.sign(sender);
// reset repository state to prevent deadlock
repository.discardChanges();
ValidationResult result = messageTransaction.importAsUnconfirmed();
if (result != ValidationResult.OK)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.TRANSACTION_INVALID);
return true;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
private ATData fetchAtDataWithChecking(Repository repository, String atAddress) throws DataException {
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
// Must be correct AT - check functionality using code hash
if (!Arrays.equals(atData.getCodeHash(), DogecoinACCTv1.CODE_BYTES_HASH))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// No point sending message to AT that's finished
if (atData.getIsFinished())
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
return atData;
}
}

View File

@@ -0,0 +1,165 @@
package org.qortal.api.resource;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.media.ArraySchema;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.parameters.RequestBody;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.bitcoinj.core.Transaction;
import org.qortal.api.ApiError;
import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.crosschain.DogecoinSendRequest;
import org.qortal.crosschain.ForeignBlockchainException;
import org.qortal.crosschain.Dogecoin;
import org.qortal.crosschain.SimpleTransaction;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import java.util.List;
@Path("/crosschain/doge")
@Tag(name = "Cross-Chain (Dogecoin)")
public class CrossChainDogecoinResource {
@Context
HttpServletRequest request;
@POST
@Path("/walletbalance")
@Operation(
summary = "Returns DOGE balance for hierarchical, deterministic BIP32 wallet",
description = "Supply BIP32 'm' private/public key in base58, starting with 'xprv'/'xpub' for mainnet, 'tprv'/'tpub' for testnet",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.TEXT_PLAIN,
schema = @Schema(
type = "string",
description = "BIP32 'm' private/public key in base58",
example = "tpubD6NzVbkrYhZ4XTPc4btCZ6SMgn8CxmWkj6VBVZ1tfcJfMq4UwAjZbG8U74gGSypL9XBYk2R2BLbDBe8pcEyBKM1edsGQEPKXNbEskZozeZc"
)
)
),
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "string", description = "balance (satoshis)"))
)
}
)
@ApiErrors({ApiError.INVALID_PRIVATE_KEY, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE})
public String getDogecoinWalletBalance(String key58) {
Security.checkApiCallAllowed(request);
Dogecoin dogecoin = Dogecoin.getInstance();
if (!dogecoin.isValidDeterministicKey(key58))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PRIVATE_KEY);
Long balance = dogecoin.getWalletBalance(key58);
if (balance == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE);
return balance.toString();
}
@POST
@Path("/wallettransactions")
@Operation(
summary = "Returns transactions for hierarchical, deterministic BIP32 wallet",
description = "Supply BIP32 'm' private/public key in base58, starting with 'xprv'/'xpub' for mainnet, 'tprv'/'tpub' for testnet",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.TEXT_PLAIN,
schema = @Schema(
type = "string",
description = "BIP32 'm' private/public key in base58",
example = "tpubD6NzVbkrYhZ4XTPc4btCZ6SMgn8CxmWkj6VBVZ1tfcJfMq4UwAjZbG8U74gGSypL9XBYk2R2BLbDBe8pcEyBKM1edsGQEPKXNbEskZozeZc"
)
)
),
responses = {
@ApiResponse(
content = @Content(array = @ArraySchema( schema = @Schema( implementation = SimpleTransaction.class ) ) )
)
}
)
@ApiErrors({ApiError.INVALID_PRIVATE_KEY, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE})
public List<SimpleTransaction> getDogecoinWalletTransactions(String key58) {
Security.checkApiCallAllowed(request);
Dogecoin dogecoin = Dogecoin.getInstance();
if (!dogecoin.isValidDeterministicKey(key58))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PRIVATE_KEY);
try {
return dogecoin.getWalletTransactions(key58);
} catch (ForeignBlockchainException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE);
}
}
@POST
@Path("/send")
@Operation(
summary = "Sends DOGE from hierarchical, deterministic BIP32 wallet to specific address",
description = "Currently only supports 'legacy' P2PKH Dogecoin addresses. Supply BIP32 'm' private key in base58, starting with 'xprv' for mainnet, 'tprv' for testnet",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.APPLICATION_JSON,
schema = @Schema(
implementation = DogecoinSendRequest.class
)
)
),
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "string", description = "transaction hash"))
)
}
)
@ApiErrors({ApiError.INVALID_PRIVATE_KEY, ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.FOREIGN_BLOCKCHAIN_BALANCE_ISSUE, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE})
public String sendDogecoin(DogecoinSendRequest dogecoinSendRequest) {
Security.checkApiCallAllowed(request);
if (dogecoinSendRequest.dogecoinAmount <= 0)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
if (dogecoinSendRequest.feePerByte != null && dogecoinSendRequest.feePerByte <= 0)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
Dogecoin dogecoin = Dogecoin.getInstance();
if (!dogecoin.isValidAddress(dogecoinSendRequest.receivingAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
if (!dogecoin.isValidDeterministicKey(dogecoinSendRequest.xprv58))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PRIVATE_KEY);
Transaction spendTransaction = dogecoin.buildSpend(dogecoinSendRequest.xprv58,
dogecoinSendRequest.receivingAddress,
dogecoinSendRequest.dogecoinAmount,
dogecoinSendRequest.feePerByte);
if (spendTransaction == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_BALANCE_ISSUE);
try {
dogecoin.broadcastTransaction(spendTransaction);
} catch (ForeignBlockchainException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE);
}
return spendTransaction.getTxId().toString();
}
}
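For context, a minimal sketch (not from the commits above) of calling the send endpoint, assuming a local node on the default API port 12391 with API calls permitted; all values are placeholders, and the amounts are given in decimal DOGE as in the Schema examples above:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
public class DogecoinSendExample {
public static void main(String[] args) throws Exception {
// Placeholder values; field names follow DogecoinSendRequest above
String body = "{"
+ "\"xprv58\": \"<BIP32 extended private key>\","
+ "\"receivingAddress\": \"<legacy P2PKH Dogecoin address>\","
+ "\"dogecoinAmount\": \"1.50000000\","
+ "\"feePerByte\": \"0.00000100\""
+ "}";
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create("http://localhost:12391/crosschain/doge/send")) // assumed default port
.header("Content-Type", "application/json")
.POST(HttpRequest.BodyPublishers.ofString(body))
.build();
HttpResponse<String> response = HttpClient.newHttpClient()
.send(request, HttpResponse.BodyHandlers.ofString());
System.out.println(response.body()); // transaction hash on success
}
}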

View File

@@ -11,29 +11,35 @@ import java.util.List;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import org.bitcoinj.core.TransactionOutput;
import org.qortal.api.ApiError;
import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.bitcoinj.core.*;
import org.bitcoinj.script.Script;
import org.qortal.api.*;
import org.qortal.api.model.CrossChainBitcoinyHTLCStatus;
import org.qortal.crosschain.Bitcoiny;
import org.qortal.crosschain.ForeignBlockchainException;
import org.qortal.crosschain.SupportedBlockchain;
import org.qortal.crosschain.BitcoinyHTLC;
import org.qortal.crosschain.*;
import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATData;
import org.qortal.data.crosschain.CrossChainTradeData;
import org.qortal.data.crosschain.TradeBotData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import com.google.common.hash.HashCode;
@Path("/crosschain/htlc")
@Tag(name = "Cross-Chain (Hash time-locked contracts)")
public class CrossChainHtlcResource {
private static final Logger LOGGER = LogManager.getLogger(CrossChainHtlcResource.class);
@Context
HttpServletRequest request;
@@ -41,7 +47,7 @@ public class CrossChainHtlcResource {
@Path("/address/{blockchain}/{refundPKH}/{locktime}/{redeemPKH}/{hashOfSecret}")
@Operation(
summary = "Returns HTLC address based on trade info",
description = "Blockchain can be BITCOIN or LITECOIN. Public key hashes (PKH) and hash of secret should be 20 bytes (hex). Locktime is seconds since epoch.",
description = "Public key hashes (PKH) and hash of secret should be 20 bytes (base58 encoded). Locktime is seconds since epoch.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "string"))
@@ -50,21 +56,21 @@ public class CrossChainHtlcResource {
)
@ApiErrors({ApiError.INVALID_PUBLIC_KEY, ApiError.INVALID_CRITERIA})
public String deriveHtlcAddress(@PathParam("blockchain") String blockchainName,
@PathParam("refundPKH") String refundHex,
@PathParam("refundPKH") String refundPKH,
@PathParam("locktime") int lockTime,
@PathParam("redeemPKH") String redeemHex,
@PathParam("hashOfSecret") String hashOfSecretHex) {
@PathParam("redeemPKH") String redeemPKH,
@PathParam("hashOfSecret") String hashOfSecret) {
SupportedBlockchain blockchain = SupportedBlockchain.valueOf(blockchainName);
if (blockchain == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
byte[] refunderPubKeyHash;
byte[] redeemerPubKeyHash;
byte[] hashOfSecret;
byte[] decodedHashOfSecret;
try {
refunderPubKeyHash = HashCode.fromString(refundHex).asBytes();
redeemerPubKeyHash = HashCode.fromString(redeemHex).asBytes();
refunderPubKeyHash = Base58.decode(refundPKH);
redeemerPubKeyHash = Base58.decode(redeemPKH);
if (refunderPubKeyHash.length != 20 || redeemerPubKeyHash.length != 20)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PUBLIC_KEY);
@@ -73,14 +79,14 @@ public class CrossChainHtlcResource {
}
try {
hashOfSecret = HashCode.fromString(hashOfSecretHex).asBytes();
if (hashOfSecret.length != 20)
decodedHashOfSecret = Base58.decode(hashOfSecret);
if (decodedHashOfSecret.length != 20)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
} catch (IllegalArgumentException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
byte[] redeemScript = BitcoinyHTLC.buildScript(refunderPubKeyHash, lockTime, redeemerPubKeyHash, hashOfSecret);
byte[] redeemScript = BitcoinyHTLC.buildScript(refunderPubKeyHash, lockTime, redeemerPubKeyHash, decodedHashOfSecret);
Bitcoiny bitcoiny = (Bitcoiny) blockchain.getInstance();
@@ -91,7 +97,7 @@ public class CrossChainHtlcResource {
@Path("/status/{blockchain}/{refundPKH}/{locktime}/{redeemPKH}/{hashOfSecret}")
@Operation(
summary = "Checks HTLC status",
description = "Blockchain can be BITCOIN or LITECOIN. Public key hashes (PKH) and hash of secret should be 20 bytes (hex). Locktime is seconds since epoch.",
description = "Public key hashes (PKH) and hash of secret should be 20 bytes (base58 encoded). Locktime is seconds since epoch.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.APPLICATION_JSON, schema = @Schema(implementation = CrossChainBitcoinyHTLCStatus.class))
@@ -100,10 +106,10 @@ public class CrossChainHtlcResource {
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN})
public CrossChainBitcoinyHTLCStatus checkHtlcStatus(@PathParam("blockchain") String blockchainName,
@PathParam("refundPKH") String refundHex,
@PathParam("refundPKH") String refundPKH,
@PathParam("locktime") int lockTime,
@PathParam("redeemPKH") String redeemHex,
@PathParam("hashOfSecret") String hashOfSecretHex) {
@PathParam("redeemPKH") String redeemPKH,
@PathParam("hashOfSecret") String hashOfSecret) {
Security.checkApiCallAllowed(request);
SupportedBlockchain blockchain = SupportedBlockchain.valueOf(blockchainName);
@@ -112,11 +118,11 @@ public class CrossChainHtlcResource {
byte[] refunderPubKeyHash;
byte[] redeemerPubKeyHash;
byte[] hashOfSecret;
byte[] decodedHashOfSecret;
try {
refunderPubKeyHash = HashCode.fromString(refundHex).asBytes();
redeemerPubKeyHash = HashCode.fromString(redeemHex).asBytes();
refunderPubKeyHash = Base58.decode(refundPKH);
redeemerPubKeyHash = Base58.decode(redeemPKH);
if (refunderPubKeyHash.length != 20 || redeemerPubKeyHash.length != 20)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PUBLIC_KEY);
@@ -125,14 +131,14 @@ public class CrossChainHtlcResource {
}
try {
hashOfSecret = HashCode.fromString(hashOfSecretHex).asBytes();
if (hashOfSecret.length != 20)
decodedHashOfSecret = Base58.decode(hashOfSecret);
if (decodedHashOfSecret.length != 20)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
} catch (IllegalArgumentException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
byte[] redeemScript = BitcoinyHTLC.buildScript(refunderPubKeyHash, lockTime, redeemerPubKeyHash, hashOfSecret);
byte[] redeemScript = BitcoinyHTLC.buildScript(refunderPubKeyHash, lockTime, redeemerPubKeyHash, decodedHashOfSecret);
Bitcoiny bitcoiny = (Bitcoiny) blockchain.getInstance();
@@ -168,8 +174,480 @@ public class CrossChainHtlcResource {
}
}
// TODO: refund
@POST
@Path("/redeem/{ataddress}")
@Operation(
summary = "Redeems HTLC associated with supplied AT",
description = "To be used by a QORT seller (Bob) who needs to redeem LTC/DOGE/etc proceeds that are stuck in a P2SH.<br>" +
"This requires Bob's trade bot data to be present in the database for this AT.<br>" +
"It will fail if the buyer has yet to redeem the QORT held in the AT.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN})
public boolean redeemHtlc(@PathParam("ataddress") String atAddress) {
Security.checkApiCallAllowed(request);
// TODO: redeem
try (final Repository repository = RepositoryManager.getRepository()) {
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
}
ACCT acct = SupportedBlockchain.getAcctByCodeHash(atData.getCodeHash());
if (acct == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atData);
if (crossChainTradeData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Attempt to find secret from the buyer's message to AT
byte[] decodedSecret = acct.findSecretA(repository, crossChainTradeData);
if (decodedSecret == null) {
LOGGER.info(() -> String.format("Unable to find secret-A from redeem message to AT %s", atAddress));
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
List<TradeBotData> allTradeBotData = repository.getCrossChainRepository().getAllTradeBotData();
TradeBotData tradeBotData = allTradeBotData.stream().filter(tradeBotDataItem -> tradeBotDataItem.getAtAddress().equals(atAddress)).findFirst().orElse(null);
// Search for the tradePrivateKey in the tradebot data
byte[] decodedPrivateKey = null;
if (tradeBotData != null)
decodedPrivateKey = tradeBotData.getTradePrivateKey();
// Search for the foreign blockchain receiving address in the tradebot data
byte[] foreignBlockchainReceivingAccountInfo = null;
if (tradeBotData != null)
// Use receiving address PKH from tradebot data
foreignBlockchainReceivingAccountInfo = tradeBotData.getReceivingAccountInfo();
return this.doRedeemHtlc(atAddress, decodedPrivateKey, decodedSecret, foreignBlockchainReceivingAccountInfo);
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
@POST
@Path("/redeemAll")
@Operation(
summary = "Redeems HTLC for all applicable ATs in tradebot data",
description = "To be used by a QORT seller (Bob) who needs to redeem LTC/DOGE/etc proceeds that are stuck in P2SH transactions.<br>" +
"This requires Bob's trade bot data to be present in the database for any ATs that need redeeming.<br>" +
"Returns true if at least one trade is redeemed. More detail is available in the log.txt.* file.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN})
public boolean redeemAllHtlc() {
Security.checkApiCallAllowed(request);
boolean success = false;
try (final Repository repository = RepositoryManager.getRepository()) {
List<TradeBotData> allTradeBotData = repository.getCrossChainRepository().getAllTradeBotData();
for (TradeBotData tradeBotData : allTradeBotData) {
String atAddress = tradeBotData.getAtAddress();
if (atAddress == null) {
LOGGER.info("Missing AT address in tradebot data");
continue;
}
String tradeState = tradeBotData.getState();
if (tradeState == null) {
LOGGER.info("Missing trade state for AT {}", atAddress);
continue;
}
if (tradeState.startsWith("ALICE")) {
LOGGER.info("AT {} isn't redeemable because it is a buy order", atAddress);
continue;
}
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null) {
LOGGER.info("Couldn't find AT with address {}", atAddress);
continue;
}
ACCT acct = SupportedBlockchain.getAcctByCodeHash(atData.getCodeHash());
if (acct == null) {
continue;
}
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atData);
if (crossChainTradeData == null) {
LOGGER.info("Couldn't find crosschain trade data for AT {}", atAddress);
continue;
}
// Attempt to find secret from the buyer's message to AT
byte[] decodedSecret = acct.findSecretA(repository, crossChainTradeData);
if (decodedSecret == null) {
LOGGER.info("Unable to find secret-A from redeem message to AT {}", atAddress);
continue;
}
// Search for the tradePrivateKey in the tradebot data
byte[] decodedPrivateKey = tradeBotData.getTradePrivateKey();
// Search for the foreign blockchain receiving address PKH in the tradebot data
byte[] foreignBlockchainReceivingAccountInfo = tradeBotData.getReceivingAccountInfo();
try {
LOGGER.info("Attempting to redeem P2SH balance associated with AT {}...", atAddress);
boolean redeemed = this.doRedeemHtlc(atAddress, decodedPrivateKey, decodedSecret, foreignBlockchainReceivingAccountInfo);
if (redeemed) {
LOGGER.info("Redeemed P2SH balance associated with AT {}", atAddress);
success = true;
}
else {
LOGGER.info("Couldn't redeem P2SH balance associated with AT {}. Already redeemed?", atAddress);
}
} catch (ApiException e) {
LOGGER.info("Couldn't redeem P2SH balance associated with AT {}. Missing data?", atAddress);
}
}
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
return success;
}
private boolean doRedeemHtlc(String atAddress, byte[] decodedTradePrivateKey, byte[] decodedSecret,
byte[] foreignBlockchainReceivingAccountInfo) {
try (final Repository repository = RepositoryManager.getRepository()) {
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
ACCT acct = SupportedBlockchain.getAcctByCodeHash(atData.getCodeHash());
if (acct == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atData);
if (crossChainTradeData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Validate trade private key
if (decodedTradePrivateKey == null || decodedTradePrivateKey.length != 32)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Validate secret
if (decodedSecret == null || decodedSecret.length != 32)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Validate receiving address
if (foreignBlockchainReceivingAccountInfo == null || foreignBlockchainReceivingAccountInfo.length != 20)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Make sure the receiving address isn't a QORT address, given that we can share the same field for both QORT and foreign blockchains
if (Crypto.isValidAddress(foreignBlockchainReceivingAccountInfo))
if (Base58.encode(foreignBlockchainReceivingAccountInfo).startsWith("Q"))
// This is likely a QORT address, not a foreign blockchain address
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Use secret-A to redeem P2SH-A
Bitcoiny bitcoiny = (Bitcoiny) acct.getBlockchain();
if (bitcoiny.getClass() == Bitcoin.class) {
LOGGER.info("Redeeming a Bitcoin HTLC is not yet supported");
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
int lockTime = crossChainTradeData.lockTimeA;
byte[] redeemScriptA = BitcoinyHTLC.buildScript(crossChainTradeData.partnerForeignPKH, lockTime, crossChainTradeData.creatorForeignPKH, crossChainTradeData.hashOfSecretA);
String p2shAddressA = bitcoiny.deriveP2shAddress(redeemScriptA);
LOGGER.info(String.format("Redeeming P2SH address: %s", p2shAddressA));
// Fee for redeem/refund is subtracted from P2SH-A balance.
long feeTimestamp = calcFeeTimestamp(lockTime, crossChainTradeData.tradeTimeout);
long p2shFee = bitcoiny.getP2shFee(feeTimestamp);
long minimumAmountA = crossChainTradeData.expectedForeignAmount + p2shFee;
BitcoinyHTLC.Status htlcStatusA = BitcoinyHTLC.determineHtlcStatus(bitcoiny.getBlockchainProvider(), p2shAddressA, minimumAmountA);
switch (htlcStatusA) {
case UNFUNDED:
case FUNDING_IN_PROGRESS:
// P2SH-A suddenly not funded? Our best bet at this point is to hope for AT auto-refund
return false;
case REDEEM_IN_PROGRESS:
case REDEEMED:
// Double-check that we have redeemed P2SH-A...
return false;
case REFUND_IN_PROGRESS:
case REFUNDED:
// Wait for AT to auto-refund
return false;
case FUNDED: {
Coin redeemAmount = Coin.valueOf(crossChainTradeData.expectedForeignAmount);
ECKey redeemKey = ECKey.fromPrivate(decodedTradePrivateKey);
List<TransactionOutput> fundingOutputs = bitcoiny.getUnspentOutputs(p2shAddressA);
Transaction p2shRedeemTransaction = BitcoinyHTLC.buildRedeemTransaction(bitcoiny.getNetworkParameters(), redeemAmount, redeemKey,
fundingOutputs, redeemScriptA, decodedSecret, foreignBlockchainReceivingAccountInfo);
bitcoiny.broadcastTransaction(p2shRedeemTransaction);
LOGGER.info(String.format("P2SH address %s redeemed!", p2shAddressA));
return true;
}
}
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
} catch (ForeignBlockchainException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_BALANCE_ISSUE, e);
}
return false;
}
@POST
@Path("/refund/{ataddress}")
@Operation(
summary = "Refunds HTLC associated with supplied AT",
description = "To be used by a QORT buyer (Alice) who needs to refund their LTC/DOGE/etc that is stuck in a P2SH.<br>" +
"This requires Alice's trade bot data to be present in the database for this AT.<br>" +
"It will fail if it's already redeemed by the seller, or if the lockTime (60 minutes) hasn't passed yet.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN})
public boolean refundHtlc(@PathParam("ataddress") String atAddress) {
Security.checkApiCallAllowed(request);
try (final Repository repository = RepositoryManager.getRepository()) {
List<TradeBotData> allTradeBotData = repository.getCrossChainRepository().getAllTradeBotData();
TradeBotData tradeBotData = allTradeBotData.stream().filter(tradeBotDataItem -> tradeBotDataItem.getAtAddress().equals(atAddress)).findFirst().orElse(null);
if (tradeBotData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
if (tradeBotData.getForeignKey() == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
ACCT acct = SupportedBlockchain.getAcctByCodeHash(atData.getCodeHash());
if (acct == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Determine foreign blockchain receive address for refund
Bitcoiny bitcoiny = (Bitcoiny) acct.getBlockchain();
String receiveAddress = bitcoiny.getUnusedReceiveAddress(tradeBotData.getForeignKey());
return this.doRefundHtlc(atAddress, receiveAddress);
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
} catch (ForeignBlockchainException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_BALANCE_ISSUE, e);
}
}
@POST
@Path("/refundAll")
@Operation(
summary = "Refunds HTLC for all applicable ATs in tradebot data",
description = "To be used by a QORT buyer (Alice) who needs to refund LTC/DOGE/etc funds that are stuck in P2SH transactions.<br>" +
"This requires Alice's trade bot data to be present in the database for any ATs that need refunding.<br>" +
"Each refund will fail if it has already been redeemed by the seller, or if the lockTime (60 minutes) hasn't passed yet.",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN})
public boolean refundAllHtlc() {
Security.checkApiCallAllowed(request);
boolean success = false;
try (final Repository repository = RepositoryManager.getRepository()) {
List<TradeBotData> allTradeBotData = repository.getCrossChainRepository().getAllTradeBotData();
for (TradeBotData tradeBotData : allTradeBotData) {
String atAddress = tradeBotData.getAtAddress();
if (atAddress == null) {
LOGGER.info("Missing AT address in tradebot data");
continue;
}
String tradeState = tradeBotData.getState();
if (tradeState == null) {
LOGGER.info("Missing trade state for AT {}", atAddress);
continue;
}
if (tradeState.startsWith("BOB")) {
LOGGER.info("AT {} isn't refundable because it is a sell order", atAddress);
continue;
}
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null) {
LOGGER.info("Couldn't find AT with address {}", atAddress);
continue;
}
ACCT acct = SupportedBlockchain.getAcctByCodeHash(atData.getCodeHash());
if (acct == null) {
continue;
}
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atData);
if (crossChainTradeData == null) {
LOGGER.info("Couldn't find crosschain trade data for AT {}", atAddress);
continue;
}
if (tradeBotData.getForeignKey() == null) {
LOGGER.info("Couldn't find foreign key for AT {}", atAddress);
continue;
}
try {
// Determine foreign blockchain receive address for refund
Bitcoiny bitcoiny = (Bitcoiny) acct.getBlockchain();
String receivingAddress = bitcoiny.getUnusedReceiveAddress(tradeBotData.getForeignKey());
LOGGER.info("Attempting to refund P2SH balance associated with AT {}...", atAddress);
boolean refunded = this.doRefundHtlc(atAddress, receivingAddress);
if (refunded) {
LOGGER.info("Refunded P2SH balance associated with AT {}", atAddress);
success = true;
}
else {
LOGGER.info("Couldn't refund P2SH balance associated with AT {}. Already redeemed?", atAddress);
}
} catch (ApiException | ForeignBlockchainException e) {
LOGGER.info("Couldn't refund P2SH balance associated with AT {}. Missing data?", atAddress);
}
}
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
return success;
}
private boolean doRefundHtlc(String atAddress, String receiveAddress) {
try (final Repository repository = RepositoryManager.getRepository()) {
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
ACCT acct = SupportedBlockchain.getAcctByCodeHash(atData.getCodeHash());
if (acct == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atData);
if (crossChainTradeData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// If the AT is "finished" then it will have a zero balance
// In these cases we should avoid HTLC refunds if the QORT hasn't been returned to the seller
if (atData.getIsFinished() && crossChainTradeData.mode != AcctMode.REFUNDED && crossChainTradeData.mode != AcctMode.CANCELLED) {
LOGGER.info(String.format("Skipping AT %s because the QORT has already been redeemed", atAddress));
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
List<TradeBotData> allTradeBotData = repository.getCrossChainRepository().getAllTradeBotData();
TradeBotData tradeBotData = allTradeBotData.stream().filter(tradeBotDataItem -> tradeBotDataItem.getAtAddress().equals(atAddress)).findFirst().orElse(null);
if (tradeBotData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
Bitcoiny bitcoiny = (Bitcoiny) acct.getBlockchain();
if (bitcoiny.getClass() == Bitcoin.class) {
LOGGER.info("Refunding a Bitcoin HTLC is not yet supported");
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
int lockTime = tradeBotData.getLockTimeA();
// We can't refund P2SH-A until lockTime-A has passed
if (NTP.getTime() <= lockTime * 1000L)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_TOO_SOON);
// We can't refund P2SH-A until median block time has passed lockTime-A (see BIP113)
int medianBlockTime = bitcoiny.getMedianBlockTime();
if (medianBlockTime <= lockTime)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_TOO_SOON);
byte[] redeemScriptA = BitcoinyHTLC.buildScript(tradeBotData.getTradeForeignPublicKeyHash(), lockTime, crossChainTradeData.creatorForeignPKH, tradeBotData.getHashOfSecret());
String p2shAddressA = bitcoiny.deriveP2shAddress(redeemScriptA);
LOGGER.info(String.format("Refunding P2SH address: %s", p2shAddressA));
// Fee for redeem/refund is subtracted from P2SH-A balance.
long feeTimestamp = calcFeeTimestamp(lockTime, crossChainTradeData.tradeTimeout);
long p2shFee = bitcoiny.getP2shFee(feeTimestamp);
long minimumAmountA = crossChainTradeData.expectedForeignAmount + p2shFee;
BitcoinyHTLC.Status htlcStatusA = BitcoinyHTLC.determineHtlcStatus(bitcoiny.getBlockchainProvider(), p2shAddressA, minimumAmountA);
switch (htlcStatusA) {
case UNFUNDED:
case FUNDING_IN_PROGRESS:
// Still waiting for P2SH-A to be funded...
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_TOO_SOON);
case REDEEM_IN_PROGRESS:
case REDEEMED:
case REFUND_IN_PROGRESS:
case REFUNDED:
// Too late!
return false;
case FUNDED:{
Coin refundAmount = Coin.valueOf(crossChainTradeData.expectedForeignAmount);
ECKey refundKey = ECKey.fromPrivate(tradeBotData.getTradePrivateKey());
List<TransactionOutput> fundingOutputs = bitcoiny.getUnspentOutputs(p2shAddressA);
// Validate the destination foreign blockchain address
Address receiving = Address.fromString(bitcoiny.getNetworkParameters(), receiveAddress);
if (receiving.getOutputScriptType() != Script.ScriptType.P2PKH)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
Transaction p2shRefundTransaction = BitcoinyHTLC.buildRefundTransaction(bitcoiny.getNetworkParameters(), refundAmount, refundKey,
fundingOutputs, redeemScriptA, lockTime, receiving.getHash());
bitcoiny.broadcastTransaction(p2shRefundTransaction);
return true;
}
}
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
} catch (ForeignBlockchainException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_BALANCE_ISSUE, e);
}
return false;
}
private long calcFeeTimestamp(int lockTimeA, int tradeTimeout) {
return (lockTimeA - tradeTimeout * 60) * 1000L;
}
}
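For reference, calcFeeTimestamp() mixes units: lockTimeA is seconds since epoch, tradeTimeout is minutes, and the result is milliseconds (one tradeTimeout before lockTime-A). A small worked example (not from the commits above) with placeholder values:
public class FeeTimestampExample {
public static void main(String[] args) {
int lockTimeA = 1_630_000_000; // placeholder, seconds since epoch
int tradeTimeout = 60; // placeholder, minutes
long feeTimestamp = (lockTimeA - tradeTimeout * 60) * 1000L; // milliseconds
System.out.println(feeTimestamp); // 1629996400000 ms, i.e. one hour before lockTime-A
}
}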

View File

@@ -0,0 +1,145 @@
package org.qortal.api.resource;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.parameters.RequestBody;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.api.ApiError;
import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.CrossChainSecretRequest;
import org.qortal.crosschain.AcctMode;
import org.qortal.crosschain.LitecoinACCTv1;
import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATData;
import org.qortal.data.crosschain.CrossChainTradeData;
import org.qortal.group.Group;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.transaction.MessageTransaction;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.transform.TransformationException;
import org.qortal.transform.Transformer;
import org.qortal.transform.transaction.MessageTransactionTransformer;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import java.util.Arrays;
import java.util.Random;
@Path("/crosschain/LitecoinACCTv1")
@Tag(name = "Cross-Chain (LitecoinACCTv1)")
public class CrossChainLitecoinACCTv1Resource {
@Context
HttpServletRequest request;
@POST
@Path("/redeemmessage")
@Operation(
summary = "Signs and broadcasts a 'redeem' MESSAGE transaction that sends secrets to AT, releasing funds to partner",
description = "Specify address of cross-chain AT that needs to be messaged, Alice's trade private key, the 32-byte secret,<br>"
+ "and an address for receiving QORT from AT. All of these can be found in Alice's trade bot data.<br>"
+ "AT needs to be in 'trade' mode. Messages sent to an AT in any other mode will be ignored, but still cost fees to send!<br>"
+ "You need to use the private key that the AT considers the trade 'partner' otherwise the MESSAGE transaction will be invalid.",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.APPLICATION_JSON,
schema = @Schema(
implementation = CrossChainSecretRequest.class
)
)
),
responses = {
@ApiResponse(
content = @Content(
schema = @Schema(
type = "string"
)
)
)
}
)
@ApiErrors({ApiError.INVALID_PUBLIC_KEY, ApiError.INVALID_ADDRESS, ApiError.INVALID_DATA, ApiError.INVALID_CRITERIA, ApiError.REPOSITORY_ISSUE})
public boolean buildRedeemMessage(CrossChainSecretRequest secretRequest) {
Security.checkApiCallAllowed(request);
byte[] partnerPrivateKey = secretRequest.partnerPrivateKey;
if (partnerPrivateKey == null || partnerPrivateKey.length != Transformer.PRIVATE_KEY_LENGTH)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PRIVATE_KEY);
if (secretRequest.atAddress == null || !Crypto.isValidAtAddress(secretRequest.atAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
if (secretRequest.secret == null || secretRequest.secret.length != LitecoinACCTv1.SECRET_LENGTH)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA);
if (secretRequest.receivingAddress == null || !Crypto.isValidAddress(secretRequest.receivingAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
try (final Repository repository = RepositoryManager.getRepository()) {
ATData atData = fetchAtDataWithChecking(repository, secretRequest.atAddress);
CrossChainTradeData crossChainTradeData = LitecoinACCTv1.getInstance().populateTradeData(repository, atData);
if (crossChainTradeData.mode != AcctMode.TRADING)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
byte[] partnerPublicKey = new PrivateKeyAccount(null, partnerPrivateKey).getPublicKey();
String partnerAddress = Crypto.toAddress(partnerPublicKey);
// MESSAGE must come from address that AT considers trade partner
if (!crossChainTradeData.qortalPartnerAddress.equals(partnerAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
// Good to make MESSAGE
byte[] messageData = LitecoinACCTv1.buildRedeemMessage(secretRequest.secret, secretRequest.receivingAddress);
PrivateKeyAccount sender = new PrivateKeyAccount(repository, partnerPrivateKey);
MessageTransaction messageTransaction = MessageTransaction.build(repository, sender, Group.NO_GROUP, secretRequest.atAddress, messageData, false, false);
messageTransaction.computeNonce();
messageTransaction.sign(sender);
// reset repository state to prevent deadlock
repository.discardChanges();
ValidationResult result = messageTransaction.importAsUnconfirmed();
if (result != ValidationResult.OK)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.TRANSACTION_INVALID);
return true;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
private ATData fetchAtDataWithChecking(Repository repository, String atAddress) throws DataException {
ATData atData = repository.getATRepository().fromATAddress(atAddress);
if (atData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
// Must be correct AT - check functionality using code hash
if (!Arrays.equals(atData.getCodeHash(), LitecoinACCTv1.CODE_BYTES_HASH))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// No point sending message to AT that's finished
if (atData.getIsFinished())
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
return atData;
}
}
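As a usage illustration (not part of the resource above), a Java client could invoke this endpoint roughly as follows. This is a hypothetical sketch: it assumes the node API listens on localhost port 12391 and that the CrossChainSecretRequest fields are supplied as base58-encoded strings; all values shown are placeholders taken from Alice's trade-bot data as the description notes.

// Hypothetical client sketch - not part of the Qortal codebase.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RedeemMessageClient {
    public static void main(String[] args) throws Exception {
        String json = "{"
                + "\"partnerPrivateKey\": \"<Alice's trade private key>\","
                + "\"atAddress\": \"<cross-chain AT address>\","
                + "\"secret\": \"<32-byte secret>\","
                + "\"receivingAddress\": \"<QORT receiving address>\""
                + "}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:12391/crosschain/LitecoinACCTv1/redeemmessage"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        // The endpoint returns "true" once the MESSAGE transaction has been imported as unconfirmed.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}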


@@ -22,9 +22,9 @@ import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.crosschain.LitecoinSendRequest;
import org.qortal.crosschain.BitcoinyTransaction;
import org.qortal.crosschain.ForeignBlockchainException;
import org.qortal.crosschain.Litecoin;
import org.qortal.crosschain.SimpleTransaction;
@Path("/crosschain/ltc")
@Tag(name = "Cross-Chain (Litecoin)")
@@ -89,12 +89,12 @@ public class CrossChainLitecoinResource {
),
responses = {
@ApiResponse(
content = @Content(array = @ArraySchema( schema = @Schema( implementation = BitcoinyTransaction.class ) ) )
content = @Content(array = @ArraySchema( schema = @Schema( implementation = SimpleTransaction.class ) ) )
)
}
)
@ApiErrors({ApiError.INVALID_PRIVATE_KEY, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE})
public List<BitcoinyTransaction> getLitecoinWalletTransactions(String key58) {
public List<SimpleTransaction> getLitecoinWalletTransactions(String key58) {
Security.checkApiCallAllowed(request);
Litecoin litecoin = Litecoin.getInstance();


@@ -255,13 +255,19 @@ public class CrossChainResource {
description = "foreign blockchain",
example = "LITECOIN",
schema = @Schema(implementation = SupportedBlockchain.class)
) @PathParam("blockchain") SupportedBlockchain foreignBlockchain) {
) @PathParam("blockchain") SupportedBlockchain foreignBlockchain,
@Parameter(
description = "Maximum number of trades to include in price calculation",
example = "10",
schema = @Schema(type = "integer", defaultValue = "10")
) @QueryParam("maxtrades") Integer maxtrades) {
// foreignBlockchain is required
if (foreignBlockchain == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// We want both a minimum of 5 trades and enough trades to span at least 4 hours
int minimumCount = 5;
int maximumCount = maxtrades != null ? maxtrades : 10;
long minimumPeriod = 4 * 60 * 60 * 1000L; // ms
Boolean isFinished = Boolean.TRUE;
@@ -276,7 +282,7 @@ public class CrossChainResource {
ACCT acct = acctInfo.getValue().get();
List<ATStateData> atStates = repository.getATRepository().getMatchingFinalATStatesQuorum(codeHash,
isFinished, acct.getModeByteOffset(), (long) AcctMode.REDEEMED.value, minimumCount, minimumPeriod);
isFinished, acct.getModeByteOffset(), (long) AcctMode.REDEEMED.value, minimumCount, maximumCount, minimumPeriod);
for (ATStateData atState : atStates) {
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atState);
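For clarity, the selection bounds used in the price calculation above work out as follows. This is a worked sketch using the defaults shown in the code; only maxtrades is caller-supplied, here set to an illustrative 20.

// Worked example of the trade-selection bounds (values mirror the code above).
Integer maxtrades = 20;                                  // ?maxtrades=20 query parameter
int minimumCount = 5;                                    // never average fewer than 5 trades
int maximumCount = maxtrades != null ? maxtrades : 10;   // 20 here; 10 if the parameter is omitted
long minimumPeriod = 4 * 60 * 60 * 1000L;                // trades must span at least 4 hours (14,400,000 ms)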


@@ -107,7 +107,7 @@ public class CrossChainTradeBotResource {
)
}
)
@ApiErrors({ApiError.INVALID_PUBLIC_KEY, ApiError.INVALID_ADDRESS, ApiError.INVALID_CRITERIA, ApiError.INSUFFICIENT_BALANCE, ApiError.REPOSITORY_ISSUE})
@ApiErrors({ApiError.INVALID_PUBLIC_KEY, ApiError.INVALID_ADDRESS, ApiError.INVALID_CRITERIA, ApiError.INSUFFICIENT_BALANCE, ApiError.REPOSITORY_ISSUE, ApiError.ORDER_SIZE_TOO_SMALL})
@SuppressWarnings("deprecation")
public String tradeBotCreator(TradeBotCreateRequest tradeBotCreateRequest) {
Security.checkApiCallAllowed(request);
@@ -128,10 +128,13 @@ public class CrossChainTradeBotResource {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
if (tradeBotCreateRequest.foreignAmount == null || tradeBotCreateRequest.foreignAmount <= 0)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ORDER_SIZE_TOO_SMALL);
if (tradeBotCreateRequest.foreignAmount < foreignBlockchain.getMinimumOrderAmount())
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ORDER_SIZE_TOO_SMALL);
if (tradeBotCreateRequest.qortAmount <= 0 || tradeBotCreateRequest.fundingQortAmount <= 0)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ORDER_SIZE_TOO_SMALL);
try (final Repository repository = RepositoryManager.getRepository()) {
// Do some simple checking first
@@ -283,4 +286,4 @@ public class CrossChainTradeBotResource {
return atData;
}
}
}


@@ -0,0 +1,298 @@
package org.qortal.api.resource;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.media.ArraySchema;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.parameters.RequestBody;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.qortal.api.*;
import org.qortal.api.model.AddressListRequest;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.AccountData;
import org.qortal.list.ResourceListManager;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.*;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
@Path("/lists")
@Tag(name = "Lists")
public class ListsResource {
@Context
HttpServletRequest request;
@POST
@Path("/blacklist/address/{address}")
@Operation(
summary = "Add a QORT address to the local blacklist",
responses = {
@ApiResponse(
description = "Returns true on success, or an exception on failure",
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN, ApiError.REPOSITORY_ISSUE})
public String addAddressToBlacklist(@PathParam("address") String address) {
Security.checkApiCallAllowed(request);
if (!Crypto.isValidAddress(address))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
try (final Repository repository = RepositoryManager.getRepository()) {
AccountData accountData = repository.getAccountRepository().getAccount(address);
// Not found?
if (accountData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
// Valid address, so go ahead and blacklist it
boolean success = ResourceListManager.getInstance().addAddressToBlacklist(address, true);
return success ? "true" : "false";
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
@POST
@Path("/blacklist/addresses")
@Operation(
summary = "Add one or more QORT addresses to the local blacklist",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.APPLICATION_JSON,
schema = @Schema(
implementation = AddressListRequest.class
)
)
),
responses = {
@ApiResponse(
description = "Returns true if all addresses were processed, false if any couldn't be " +
"processed, or an exception on failure. If false or an exception is returned, " +
"the list will not be updated, and the request will need to be re-issued.",
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN, ApiError.REPOSITORY_ISSUE})
public String addAddressesToBlacklist(AddressListRequest addressListRequest) {
Security.checkApiCallAllowed(request);
if (addressListRequest == null || addressListRequest.addresses == null) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
int successCount = 0;
int errorCount = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
for (String address : addressListRequest.addresses) {
if (!Crypto.isValidAddress(address)) {
errorCount++;
continue;
}
AccountData accountData = repository.getAccountRepository().getAccount(address);
// Not found?
if (accountData == null) {
errorCount++;
continue;
}
// Valid address, so go ahead and blacklist it
boolean success = ResourceListManager.getInstance().addAddressToBlacklist(address, false);
if (success) {
successCount++;
}
else {
errorCount++;
}
}
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
if (successCount > 0 && errorCount == 0) {
// All were successful, so save the blacklist
ResourceListManager.getInstance().saveBlacklist();
return "true";
}
else {
// Something went wrong, so revert
ResourceListManager.getInstance().revertBlacklist();
return "false";
}
}
@DELETE
@Path("/blacklist/address/{address}")
@Operation(
summary = "Remove a QORT address from the local blacklist",
responses = {
@ApiResponse(
description = "Returns true on success, or an exception on failure",
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN, ApiError.REPOSITORY_ISSUE})
public String removeAddressFromBlacklist(@PathParam("address") String address) {
Security.checkApiCallAllowed(request);
if (!Crypto.isValidAddress(address))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
try (final Repository repository = RepositoryManager.getRepository()) {
AccountData accountData = repository.getAccountRepository().getAccount(address);
// Not found?
if (accountData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
// Valid address, so go ahead and remove it from the blacklist
boolean success = ResourceListManager.getInstance().removeAddressFromBlacklist(address, true);
return success ? "true" : "false";
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
@DELETE
@Path("/blacklist/addresses")
@Operation(
summary = "Remove one or more QORT addresses from the local blacklist",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.APPLICATION_JSON,
schema = @Schema(
implementation = AddressListRequest.class
)
)
),
responses = {
@ApiResponse(
description = "Returns true if all addresses were processed, false if any couldn't be " +
"processed, or an exception on failure. If false or an exception is returned, " +
"the list will not be updated, and the request will need to be re-issued.",
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN, ApiError.REPOSITORY_ISSUE})
public String removeAddressesFromBlacklist(AddressListRequest addressListRequest) {
Security.checkApiCallAllowed(request);
if (addressListRequest == null || addressListRequest.addresses == null) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
int successCount = 0;
int errorCount = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
for (String address : addressListRequest.addresses) {
if (!Crypto.isValidAddress(address)) {
errorCount++;
continue;
}
AccountData accountData = repository.getAccountRepository().getAccount(address);
// Not found?
if (accountData == null) {
errorCount++;
continue;
}
// Valid address, so go ahead and remove it from the blacklist
// Don't save as we will do this at the end of the process
boolean success = ResourceListManager.getInstance().removeAddressFromBlacklist(address, false);
if (success) {
successCount++;
}
else {
errorCount++;
}
}
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
if (successCount > 0 && errorCount == 0) {
// All were successful, so save the blacklist
ResourceListManager.getInstance().saveBlacklist();
return "true";
}
else {
// Something went wrong, so revert
ResourceListManager.getInstance().revertBlacklist();
return "false";
}
}
@GET
@Path("/blacklist/addresses")
@Operation(
summary = "Fetch the list of blacklisted addresses",
responses = {
@ApiResponse(
description = "A JSON array of addresses",
content = @Content(mediaType = MediaType.APPLICATION_JSON, array = @ArraySchema(schema = @Schema(implementation = String.class)))
)
}
)
public String getAddressBlacklist() {
Security.checkApiCallAllowed(request);
return ResourceListManager.getInstance().getBlacklistJSONString();
}
@GET
@Path("/blacklist/address/{address}")
@Operation(
summary = "Check if an address is present in the local blacklist",
responses = {
@ApiResponse(
description = "Returns true or false if the list was queried, or an exception on failure",
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
@ApiErrors({ApiError.INVALID_ADDRESS, ApiError.ADDRESS_UNKNOWN, ApiError.REPOSITORY_ISSUE})
public String checkAddressInBlacklist(@PathParam("address") String address) {
Security.checkApiCallAllowed(request);
if (!Crypto.isValidAddress(address))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
try (final Repository repository = RepositoryManager.getRepository()) {
AccountData accountData = repository.getAccountRepository().getAccount(address);
// Not found?
if (accountData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ADDRESS_UNKNOWN);
// Valid address, so go ahead and check the blacklist
boolean blacklisted = ResourceListManager.getInstance().isAddressInBlacklist(address);
return blacklisted ? "true" : "false";
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
}
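A usage sketch for the bulk endpoint above (hypothetical client code; the node address and QORT addresses are placeholders). AddressListRequest is a simple JSON object carrying an "addresses" array, matching the field the handlers read.

// Hypothetical client sketch: add two addresses to the local blacklist in one call.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BlacklistClient {
    public static void main(String[] args) throws Exception {
        // AddressListRequest body: { "addresses": [ ... ] }
        String json = "{ \"addresses\": [ \"<QORT address 1>\", \"<QORT address 2>\" ] }";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:12391/lists/blacklist/addresses"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        // "true" means every address was valid and the blacklist was saved;
        // "false" means the whole request was reverted and should be re-issued.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}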


@@ -321,7 +321,7 @@ public class PeersResource {
boolean force = true;
List<BlockSummaryData> peerBlockSummaries = new ArrayList<>();
SynchronizationResult findCommonBlockResult = Synchronizer.getInstance().fetchSummariesFromCommonBlock(repository, targetPeer, ourInitialHeight, force, peerBlockSummaries);
SynchronizationResult findCommonBlockResult = Synchronizer.getInstance().fetchSummariesFromCommonBlock(repository, targetPeer, ourInitialHeight, force, peerBlockSummaries, true);
if (findCommonBlockResult != SynchronizationResult.OK)
return null;


@@ -33,7 +33,6 @@ import org.qortal.transaction.Transaction.TransactionType;
import org.qortal.transform.Transformer;
import org.qortal.transform.transaction.TransactionTransformer;
import org.qortal.transform.transaction.TransactionTransformer.Transformation;
import org.qortal.utils.BIP39;
import org.qortal.utils.Base58;
import com.google.common.hash.HashCode;
@@ -195,123 +194,6 @@ public class UtilsResource {
return Base58.encode(random);
}
@GET
@Path("/mnemonic")
@Operation(
summary = "Generate 12-word BIP39 mnemonic",
description = "Optionally pass 16-byte, base58-encoded entropy or entropy will be internally generated.<br>"
+ "Example entropy input: YcVfxkQb6JRzqk5kF2tNLv",
responses = {
@ApiResponse(
description = "mnemonic",
content = @Content(
mediaType = MediaType.TEXT_PLAIN,
schema = @Schema(
type = "string"
)
)
)
}
)
@ApiErrors({ApiError.NON_PRODUCTION, ApiError.INVALID_DATA})
public String getMnemonic(@QueryParam("entropy") String suppliedEntropy) {
if (Settings.getInstance().isApiRestricted())
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.NON_PRODUCTION);
/*
* BIP39 word lists have 2048 entries so can be represented by 11 bits.
* UUID (128bits) and another 4 bits gives 132 bits.
* 132 bits, divided by 11, gives 12 words.
*/
byte[] entropy;
if (suppliedEntropy != null) {
// Use caller-supplied entropy input
try {
entropy = Base58.decode(suppliedEntropy);
} catch (NumberFormatException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA);
}
// Must be 16-bytes
if (entropy.length != 16)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA);
} else {
// Generate entropy internally
UUID uuid = UUID.randomUUID();
byte[] uuidMSB = Longs.toByteArray(uuid.getMostSignificantBits());
byte[] uuidLSB = Longs.toByteArray(uuid.getLeastSignificantBits());
entropy = Bytes.concat(uuidMSB, uuidLSB);
}
// Use SHA256 to generate more bits
byte[] hash = Crypto.digest(entropy);
// Append first 4 bits from hash to end. (Actually 8 bits but we only use 4).
byte checksum = (byte) (hash[0] & 0xf0);
entropy = Bytes.concat(entropy, new byte[] {
checksum
});
return BIP39.encode(entropy, "en");
}
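The removed endpoint's word-count arithmetic, restated as a quick check (illustrative only):

// 16 bytes of entropy (128 bits) plus a 4-bit checksum gives 132 bits,
// which divides exactly into twelve 11-bit BIP39 word indices.
int entropyBits = 16 * 8;                      // 128
int checksumBits = 4;
int totalBits = entropyBits + checksumBits;    // 132
int wordCount = totalBits / 11;                // 12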
@POST
@Path("/mnemonic")
@Operation(
summary = "Calculate binary entropy from 12-word BIP39 mnemonic",
description = "Returns the base58-encoded binary form, or \"false\" if mnemonic is invalid.",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.TEXT_PLAIN,
schema = @Schema(
type = "string"
)
)
),
responses = {
@ApiResponse(
description = "entropy in base58",
content = @Content(
mediaType = MediaType.TEXT_PLAIN,
schema = @Schema(
type = "string"
)
)
)
}
)
@ApiErrors({ApiError.NON_PRODUCTION})
public String fromMnemonic(String mnemonic) {
if (Settings.getInstance().isApiRestricted())
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.NON_PRODUCTION);
if (mnemonic.isEmpty())
return "false";
// Strip leading/trailing whitespace if any
mnemonic = mnemonic.trim();
String[] phraseWords = mnemonic.split(" ");
if (phraseWords.length != 12)
return "false";
// Convert BIP39 mnemonic to binary
byte[] binary = BIP39.decode(phraseWords, "en");
if (binary == null)
return "false";
byte[] entropy = Arrays.copyOf(binary, 16); // 132 bits is 16.5 bytes, but we're discarding checksum nybble
byte checksumNybble = (byte) (binary[16] & 0xf0);
byte[] checksum = Crypto.digest(entropy);
if (checksumNybble != (byte) (checksum[0] & 0xf0))
return "false";
return Base58.encode(entropy);
}
@POST
@Path("/privatekey")
@Operation(


@@ -115,6 +115,9 @@ public class ChatMessagesWebSocket extends ApiWebSocket {
}
private void onNotify(Session session, ChatTransactionData chatTransactionData, List<String> involvingAddresses) {
if (chatTransactionData == null)
return;
// We only want direct/non-group messages where sender/recipient match our addresses
String recipient = chatTransactionData.getRecipient();
if (recipient == null)


@@ -1,5 +1,7 @@
package org.qortal.at;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import org.ciyam.at.MachineState;
@@ -56,12 +58,12 @@ public class AT {
this.atData = new ATData(atAddress, creatorPublicKey, creation, machineState.version, assetId, codeBytes, codeHash,
machineState.isSleeping(), machineState.getSleepUntilHeight(), machineState.isFinished(), machineState.hadFatalError(),
machineState.isFrozen(), machineState.getFrozenBalance());
machineState.isFrozen(), machineState.getFrozenBalance(), null);
byte[] stateData = machineState.toBytes();
byte[] stateHash = Crypto.digest(stateData);
this.atStateData = new ATStateData(atAddress, height, stateData, stateHash, 0L, true);
this.atStateData = new ATStateData(atAddress, height, stateData, stateHash, 0L, true, null);
}
// Getters / setters
@@ -84,13 +86,28 @@ public class AT {
this.repository.getATRepository().delete(this.atData.getATAddress());
}
/**
* Potentially execute AT.
* <p>
* Note that sleep-until-message support might set/reset
* sleep-related flags/values.
* <p>
* {@link #getATStateData()} will return null if nothing happened.
* <p>
* @param blockHeight
* @param blockTimestamp
* @return AT-generated transactions, possibly empty
* @throws DataException
*/
public List<AtTransaction> run(int blockHeight, long blockTimestamp) throws DataException {
String atAddress = this.atData.getATAddress();
QortalATAPI api = new QortalATAPI(repository, this.atData, blockTimestamp);
QortalAtLoggerFactory loggerFactory = QortalAtLoggerFactory.getInstance();
byte[] codeBytes = this.atData.getCodeBytes();
if (!api.willExecute(blockHeight))
// this.atStateData will be null
return Collections.emptyList();
// Fetch latest ATStateData for this AT
ATStateData latestAtStateData = this.repository.getATRepository().getLatestATState(atAddress);
@@ -100,8 +117,10 @@ public class AT {
throw new IllegalStateException("No previous AT state data found");
// [Re]create AT machine state using AT state data or from scratch as applicable
byte[] codeBytes = this.atData.getCodeBytes();
MachineState state = MachineState.fromBytes(api, loggerFactory, latestAtStateData.getStateData(), codeBytes);
try {
api.preExecute(state);
state.execute();
} catch (Exception e) {
throw new DataException(String.format("Uncaught exception while running AT '%s'", atAddress), e);
@@ -109,9 +128,18 @@ public class AT {
byte[] stateData = state.toBytes();
byte[] stateHash = Crypto.digest(stateData);
long atFees = api.calcFinalFees(state);
this.atStateData = new ATStateData(atAddress, blockHeight, stateData, stateHash, atFees, false);
// Nothing happened?
if (state.getSteps() == 0 && Arrays.equals(stateHash, latestAtStateData.getStateHash()))
// We currently want to execute frozen ATs, to maintain backwards support.
if (state.isFrozen() == false)
// this.atStateData will be null
return Collections.emptyList();
long atFees = api.calcFinalFees(state);
Long sleepUntilMessageTimestamp = this.atData.getSleepUntilMessageTimestamp();
this.atStateData = new ATStateData(atAddress, blockHeight, stateData, stateHash, atFees, false, sleepUntilMessageTimestamp);
return api.getTransactions();
}
@@ -130,6 +158,10 @@ public class AT {
this.atData.setHadFatalError(state.hadFatalError());
this.atData.setIsFrozen(state.isFrozen());
this.atData.setFrozenBalance(state.getFrozenBalance());
// Special sleep-until-message support
this.atData.setSleepUntilMessageTimestamp(this.atStateData.getSleepUntilMessageTimestamp());
this.repository.getATRepository().save(this.atData);
}
@@ -157,6 +189,10 @@ public class AT {
this.atData.setHadFatalError(state.hadFatalError());
this.atData.setIsFrozen(state.isFrozen());
this.atData.setFrozenBalance(state.getFrozenBalance());
// Special sleep-until-message support
this.atData.setSleepUntilMessageTimestamp(previousStateData.getSleepUntilMessageTimestamp());
this.repository.getATRepository().save(this.atData);
}


@@ -32,6 +32,7 @@ import org.qortal.group.Group;
import org.qortal.repository.ATRepository;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.ATRepository.NextTransactionInfo;
import org.qortal.transaction.AtTransaction;
import org.qortal.transaction.Transaction.TransactionType;
import org.qortal.utils.Base58;
@@ -74,8 +75,45 @@ public class QortalATAPI extends API {
return this.transactions;
}
public long calcFinalFees(MachineState state) {
return state.getSteps() * this.ciyamAtSettings.feePerStep;
public boolean willExecute(int blockHeight) throws DataException {
// Sleep-until-message/height checking
Long sleepUntilMessageTimestamp = this.atData.getSleepUntilMessageTimestamp();
if (sleepUntilMessageTimestamp != null) {
// Quicker to check height, if sleep-until-height also active
Integer sleepUntilHeight = this.atData.getSleepUntilHeight();
boolean wakeDueToHeight = sleepUntilHeight != null && sleepUntilHeight != 0 && blockHeight >= sleepUntilHeight;
boolean wakeDueToMessage = false;
if (!wakeDueToHeight) {
// No way to avoid asking the repository
Timestamp previousTxTimestamp = new Timestamp(sleepUntilMessageTimestamp);
NextTransactionInfo nextTransactionInfo = this.repository.getATRepository().findNextTransaction(this.atData.getATAddress(),
previousTxTimestamp.blockHeight,
previousTxTimestamp.transactionSequence);
wakeDueToMessage = nextTransactionInfo != null;
}
// Can we skip?
if (!wakeDueToHeight && !wakeDueToMessage)
return false;
}
return true;
}
public void preExecute(MachineState state) {
// Sleep-until-message/height checking
Long sleepUntilMessageTimestamp = this.atData.getSleepUntilMessageTimestamp();
if (sleepUntilMessageTimestamp != null) {
// We've passed checks, so clear sleep-related flags/values
this.setIsSleeping(state, false);
this.setSleepUntilHeight(state, 0);
this.atData.setSleepUntilMessageTimestamp(null);
}
}
// Inherited methods from CIYAM AT API
@@ -412,6 +450,10 @@ public class QortalATAPI extends API {
// Utility methods
public long calcFinalFees(MachineState state) {
return state.getSteps() * this.ciyamAtSettings.feePerStep;
}
/** Returns partial transaction signature, used to verify we're operating on the same transaction and not naively using block height & sequence. */
public static byte[] partialSignature(byte[] fullSignature) {
return Arrays.copyOfRange(fullSignature, 8, 32);
@@ -460,6 +502,15 @@ public class QortalATAPI extends API {
}
}
/*package*/ void sleepUntilMessageOrHeight(MachineState state, long txTimestamp, Long sleepUntilHeight) {
this.setIsSleeping(state, true);
this.atData.setSleepUntilMessageTimestamp(txTimestamp);
if (sleepUntilHeight != null)
this.setSleepUntilHeight(state, sleepUntilHeight.intValue());
}
/** Returns AT's account */
/* package */ Account getATAccount() {
return new Account(this.repository, this.atData.getATAddress());
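Summarised, the wake-up rule that willExecute() applies to a sleeping AT is shown below as a simplified sketch; the real message check goes through ATRepository.findNextTransaction() against the stored tx-timestamp.

// Simplified restatement of the sleep-until-message wake condition (not a drop-in replacement).
// 'hasNewMessage' stands in for the repository lookup performed by willExecute().
static boolean shouldWake(Integer sleepUntilHeight, int blockHeight, boolean hasNewMessage) {
    boolean wakeDueToHeight = sleepUntilHeight != null && sleepUntilHeight != 0
            && blockHeight >= sleepUntilHeight;
    return wakeDueToHeight || hasNewMessage;
}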


@@ -84,6 +84,43 @@ public enum QortalFunctionCode {
api.setB(state, bBytes);
}
},
/**
* Sleep AT until a new message arrives after 'tx-timestamp'.<br>
* <tt>0x0503 tx-timestamp</tt>
*/
SLEEP_UNTIL_MESSAGE(0x0503, 1, false) {
@Override
protected void postCheckExecute(FunctionData functionData, MachineState state, short rawFunctionCode) throws ExecutionException {
if (functionData.value1 <= 0)
return;
long txTimestamp = functionData.value1;
QortalATAPI api = (QortalATAPI) state.getAPI();
api.sleepUntilMessageOrHeight(state, txTimestamp, null);
}
},
/**
* Sleep AT until a new message arrives, after 'tx-timestamp', or height reached.<br>
* <tt>0x0504 tx-timestamp height</tt>
*/
SLEEP_UNTIL_MESSAGE_OR_HEIGHT(0x0504, 2, false) {
@Override
protected void postCheckExecute(FunctionData functionData, MachineState state, short rawFunctionCode) throws ExecutionException {
if (functionData.value1 <= 0)
return;
long txTimestamp = functionData.value1;
if (functionData.value2 <= 0)
return;
long sleepUntilHeight = functionData.value2;
QortalATAPI api = (QortalATAPI) state.getAPI();
api.sleepUntilMessageOrHeight(state, txTimestamp, sleepUntilHeight);
}
},
/**
* Convert address in B to 20-byte value in LSB of B1, and all of B2 & B3.<br>
* <tt>0x0510</tt>


@@ -176,19 +176,26 @@ public class Block {
*
* @return account-level share "bin" from blockchain config, or null if founder / none found
*/
public AccountLevelShareBin getShareBin() {
public AccountLevelShareBin getShareBin(int blockHeight) {
if (this.isMinterFounder)
return null;
final int accountLevel = this.mintingAccountData.getLevel();
if (accountLevel <= 0)
return null;
return null; // level 0 isn't included in any share bins
final AccountLevelShareBin[] shareBinsByLevel = BlockChain.getInstance().getShareBinsByAccountLevel();
final BlockChain blockChain = BlockChain.getInstance();
final AccountLevelShareBin[] shareBinsByLevel = blockChain.getShareBinsByAccountLevel();
if (accountLevel > shareBinsByLevel.length)
return null;
return shareBinsByLevel[accountLevel];
if (blockHeight < blockChain.getShareBinFixHeight())
// Off-by-one bug still in effect
return shareBinsByLevel[accountLevel];
// level 1 stored at index 0, level 2 stored at index 1, etc.
return shareBinsByLevel[accountLevel-1];
}
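A worked example of the indexing change (illustrative values): with level 1's bin stored at index 0, as the new comment states, a level-3 account previously read index 3 and now correctly reads index 2. Before shareBinFixHeight the old indexing is deliberately preserved so historical blocks still validate.

// Illustration of the off-by-one fix, using a hypothetical level-3 account.
int accountLevel = 3;
int preFixIndex  = accountLevel;        // 3 - one bin too high
int postFixIndex = accountLevel - 1;    // 2 - level 3's actual bin (level 1 -> index 0)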
public long distribute(long accountAmount, Map<String, Long> balanceChanges) {
@@ -225,7 +232,7 @@ public class Block {
// Other useful constants
private static final BigInteger MAX_DISTANCE;
public static final BigInteger MAX_DISTANCE;
static {
byte[] maxValue = new byte[Transformer.PUBLIC_KEY_LENGTH];
Arrays.fill(maxValue, (byte) 0xFF);
@@ -789,7 +796,9 @@ public class Block {
NumberFormat formatter = new DecimalFormat("0.###E0");
boolean isLogging = LOGGER.getLevel().isLessSpecificThan(Level.TRACE);
int blockCount = 0;
for (BlockSummaryData blockSummaryData : blockSummaries) {
blockCount++;
StringBuilder stringBuilder = isLogging ? new StringBuilder(512) : null;
if (isLogging)
@@ -818,11 +827,11 @@ public class Block {
parentHeight = blockSummaryData.getHeight();
parentBlockSignature = blockSummaryData.getSignature();
/* Potential future consensus change: only comparing the same number of blocks.
if (parentHeight >= maxHeight)
// After this timestamp, we only compare the same number of blocks
if (NTP.getTime() >= BlockChain.getInstance().getCalcChainWeightTimestamp() && parentHeight >= maxHeight)
break;
*/
}
LOGGER.trace(String.format("Chain weight calculation was based on %d blocks", blockCount));
return cumulativeWeight;
}
@@ -1238,12 +1247,13 @@ public class Block {
for (ATData atData : executableATs) {
AT at = new AT(this.repository, atData);
List<AtTransaction> atTransactions = at.run(this.blockData.getHeight(), this.blockData.getTimestamp());
ATStateData atStateData = at.getATStateData();
// Didn't execute? (e.g. sleeping)
if (atStateData == null)
continue;
allAtTransactions.addAll(atTransactions);
ATStateData atStateData = at.getATStateData();
this.ourAtStates.add(atStateData);
this.ourAtFees += atStateData.getFees();
}
@@ -1328,6 +1338,9 @@ public class Block {
// Give Controller our cached, valid online accounts data (if any) to help reduce CPU load for next block
Controller.getInstance().pushLatestBlocksOnlineAccounts(this.cachedValidOnlineAccounts);
// Log some debugging info relating to the block weight calculation
this.logDebugInfo();
}
protected void increaseAccountLevels() throws DataException {
@@ -1509,6 +1522,9 @@ public class Block {
public void orphan() throws DataException {
LOGGER.trace(() -> String.format("Orphaning block %d", this.blockData.getHeight()));
// Log some debugging info relating to the block weight calculation
this.logDebugInfo();
// Return AT fees and delete AT states from repository
orphanAtFeesAndStates();
@@ -1783,7 +1799,7 @@ public class Block {
// Find all accounts in share bin. getShareBin() returns null for minter accounts that are also founders, so they are effectively filtered out.
AccountLevelShareBin accountLevelShareBin = accountLevelShareBins.get(binIndex);
// Object reference compare is OK as all references are read-only from blockchain config.
List<ExpandedAccount> binnedAccounts = expandedAccounts.stream().filter(accountInfo -> accountInfo.getShareBin() == accountLevelShareBin).collect(Collectors.toList());
List<ExpandedAccount> binnedAccounts = expandedAccounts.stream().filter(accountInfo -> accountInfo.getShareBin(this.blockData.getHeight()) == accountLevelShareBin).collect(Collectors.toList());
// No online accounts in this bin? Skip to next one
if (binnedAccounts.isEmpty())
@@ -1981,4 +1997,38 @@ public class Block {
this.repository.getAccountRepository().tidy();
}
private void logDebugInfo() {
try {
// Avoid calculations if possible. We have to check against INFO here, since Level.isMoreSpecificThan() confusingly uses <= rather than just <
if (LOGGER.getLevel().isMoreSpecificThan(Level.INFO))
return;
if (this.repository == null || this.getMinter() == null || this.getBlockData() == null)
return;
int minterLevel = Account.getRewardShareEffectiveMintingLevel(this.repository, this.getMinter().getPublicKey());
LOGGER.debug(String.format("======= BLOCK %d (%.8s) =======", this.getBlockData().getHeight(), Base58.encode(this.getSignature())));
LOGGER.debug(String.format("Timestamp: %d", this.getBlockData().getTimestamp()));
LOGGER.debug(String.format("Minter level: %d", minterLevel));
LOGGER.debug(String.format("Online accounts: %d", this.getBlockData().getOnlineAccountsCount()));
LOGGER.debug(String.format("AT count: %d", this.getBlockData().getATCount()));
BlockSummaryData blockSummaryData = new BlockSummaryData(this.getBlockData());
if (this.getParent() == null || this.getParent().getSignature() == null || blockSummaryData == null || minterLevel == 0)
return;
blockSummaryData.setMinterLevel(minterLevel);
BigInteger blockWeight = calcBlockWeight(this.getParent().getHeight(), this.getParent().getSignature(), blockSummaryData);
BigInteger keyDistance = calcKeyDistance(this.getParent().getHeight(), this.getParent().getSignature(), blockSummaryData.getMinterPublicKey(), blockSummaryData.getMinterLevel());
NumberFormat formatter = new DecimalFormat("0.###E0");
LOGGER.debug(String.format("Key distance: %s", formatter.format(keyDistance)));
LOGGER.debug(String.format("Weight: %s", formatter.format(blockWeight)));
} catch (DataException e) {
LOGGER.info(() -> String.format("Unable to log block debugging info: %s", e.getMessage()));
}
}
}


@@ -71,7 +71,9 @@ public class BlockChain {
public enum FeatureTrigger {
atFindNextTransactionFix,
newBlockSigHeight;
newBlockSigHeight,
shareBinFix,
calcChainWeightTimestamp;
}
/** Map of which blockchain features are enabled when (height/timestamp) */
@@ -381,6 +383,14 @@ public class BlockChain {
return this.featureTriggers.get(FeatureTrigger.newBlockSigHeight.name()).intValue();
}
public int getShareBinFixHeight() {
return this.featureTriggers.get(FeatureTrigger.shareBinFix.name()).intValue();
}
public long getCalcChainWeightTimestamp() {
return this.featureTriggers.get(FeatureTrigger.calcChainWeightTimestamp.name()).longValue();
}
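These getters are consumed as simple height or timestamp gates; for example, the share-bin fix above only takes effect from the configured height onwards, as in the sketch below.

// Typical consumption of a height-based feature trigger (mirrors getShareBin() above).
if (blockHeight < BlockChain.getInstance().getShareBinFixHeight()) {
    // keep the old (off-by-one) behaviour so historical blocks still validate
} else {
    // apply the corrected behaviour
}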
// More complex getters for aspects that change by height or timestamp
public long getRewardAtHeight(int ourHeight) {
@@ -496,28 +506,51 @@ public class BlockChain {
* @throws SQLException
*/
public static void validate() throws DataException {
// Check first block is Genesis Block
if (!isGenesisBlockValid())
rebuildBlockchain();
try (final Repository repository = RepositoryManager.getRepository()) {
boolean pruningEnabled = Settings.getInstance().isPruningEnabled();
BlockData chainTip = repository.getBlockRepository().getLastBlock();
boolean hasBlocks = (chainTip != null && chainTip.getHeight() > 1);
if (pruningEnabled && hasBlocks) {
// Pruning is enabled and we have blocks, so it's possible that the genesis block has been pruned
// It's best not to validate it, and there's no real need to
}
else {
// Check first block is Genesis Block
if (!isGenesisBlockValid()) {
rebuildBlockchain();
}
}
repository.checkConsistency();
int startHeight = Math.max(repository.getBlockRepository().getBlockchainHeight() - 1440, 1);
// Set the number of blocks to validate based on the pruned state of the chain
// If pruned, subtract an extra 10 to allow room for error
int blocksToValidate = pruningEnabled ? Settings.getInstance().getPruneBlockLimit() - 10 : 1440;
int startHeight = Math.max(repository.getBlockRepository().getBlockchainHeight() - blocksToValidate, 1);
BlockData detachedBlockData = repository.getBlockRepository().getDetachedBlockSignature(startHeight);
if (detachedBlockData != null) {
LOGGER.error(String.format("Block %d's reference does not match any block's signature", detachedBlockData.getHeight()));
// Wait for blockchain lock (whereas orphan() only tries to get lock)
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
blockchainLock.lock();
try {
LOGGER.info(String.format("Orphaning back to block %d", detachedBlockData.getHeight() - 1));
orphan(detachedBlockData.getHeight() - 1);
} finally {
blockchainLock.unlock();
// Orphan if we aren't a pruning node
if (!Settings.getInstance().isPruningEnabled()) {
// Wait for blockchain lock (whereas orphan() only tries to get lock)
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
blockchainLock.lock();
try {
LOGGER.info(String.format("Orphaning back to block %d", detachedBlockData.getHeight() - 1));
orphan(detachedBlockData.getHeight() - 1);
} finally {
blockchainLock.unlock();
}
}
else {
LOGGER.error(String.format("Not orphaning because we are in pruning mode. You may be on an " +
"invalid chain and should consider bootstrapping or re-syncing from genesis."));
}
}
}
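The number of trailing blocks validated at startup now depends on the pruning state; a quick sketch of that calculation follows, using an illustrative prune limit rather than the value actually returned by Settings.getInstance().getPruneBlockLimit().

// Illustrative values only.
boolean pruningEnabled = true;
int pruneBlockLimit = 1450;                 // example prune limit
int blocksToValidate = pruningEnabled
        ? pruneBlockLimit - 10              // leave room for error on a pruned chain
        : 1440;                             // non-pruning nodes validate the last 1440 blocks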


@@ -24,7 +24,6 @@ import java.util.Properties;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;
@@ -46,6 +45,7 @@ import org.qortal.block.Block;
import org.qortal.block.BlockChain;
import org.qortal.block.BlockChain.BlockTimingByHeight;
import org.qortal.controller.Synchronizer.SynchronizationResult;
import org.qortal.controller.repository.PruneManager;
import org.qortal.controller.tradebot.TradeBot;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.MintingAccountData;
@@ -95,7 +95,6 @@ import org.qortal.transaction.Transaction.TransactionType;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.utils.Base58;
import org.qortal.utils.ByteArray;
import org.qortal.utils.DaemonThreadFactory;
import org.qortal.utils.NTP;
import org.qortal.utils.Triple;
@@ -259,17 +258,29 @@ public class Controller extends Thread {
throw new RuntimeException("Can't read build.properties resource", e);
}
// Determine build timestamp
String buildTimestampProperty = properties.getProperty("build.timestamp");
if (buildTimestampProperty == null)
if (buildTimestampProperty == null) {
throw new RuntimeException("Can't read build.timestamp from build.properties resource");
this.buildTimestamp = LocalDateTime.parse(buildTimestampProperty, DateTimeFormatter.ofPattern("yyyyMMddHHmmss")).toEpochSecond(ZoneOffset.UTC);
}
if (buildTimestampProperty.startsWith("$")) {
// Maven vars haven't been replaced - this was most likely built using an IDE, not via mvn package
this.buildTimestamp = System.currentTimeMillis();
buildTimestampProperty = "unknown";
} else {
this.buildTimestamp = LocalDateTime.parse(buildTimestampProperty, DateTimeFormatter.ofPattern("yyyyMMddHHmmss")).toEpochSecond(ZoneOffset.UTC);
}
LOGGER.info(String.format("Build timestamp: %s", buildTimestampProperty));
// Determine build version
String buildVersionProperty = properties.getProperty("build.version");
if (buildVersionProperty == null)
if (buildVersionProperty == null) {
throw new RuntimeException("Can't read build.version from build.properties resource");
}
if (buildVersionProperty.contains("${git.commit.id.abbrev}")) {
// Maven vars haven't been replaced - this was most likely built using an IDE, not via mvn package
buildVersionProperty = buildVersionProperty.replace("${git.commit.id.abbrev}", "debug");
}
this.buildVersion = VERSION_PREFIX + buildVersionProperty;
LOGGER.info(String.format("Build version: %s", this.buildVersion));
@@ -345,7 +356,7 @@ public class Controller extends Thread {
return this.savedArgs;
}
/* package */ static boolean isStopping() {
public static boolean isStopping() {
return isStopping;
}
@@ -403,6 +414,7 @@ public class Controller extends Thread {
try {
RepositoryFactory repositoryFactory = new HSQLDBRepositoryFactory(getRepositoryUrl());
RepositoryManager.setRepositoryFactory(repositoryFactory);
RepositoryManager.prune();
} catch (DataException e) {
// If exception has no cause then repository is in use by some other process.
if (e.getCause() == null) {
@@ -498,9 +510,8 @@ public class Controller extends Thread {
final long repositoryBackupInterval = Settings.getInstance().getRepositoryBackupInterval();
final long repositoryCheckpointInterval = Settings.getInstance().getRepositoryCheckpointInterval();
ExecutorService trimExecutor = Executors.newCachedThreadPool(new DaemonThreadFactory());
trimExecutor.execute(new AtStatesTrimmer());
trimExecutor.execute(new OnlineAccountsSignaturesTrimmer());
// Start executor service for trimming or pruning
PruneManager.getInstance().start();
try {
while (!isStopping) {
@@ -585,13 +596,7 @@ public class Controller extends Thread {
Thread.interrupted();
// Fall-through to exit
} finally {
trimExecutor.shutdownNow();
try {
trimExecutor.awaitTermination(2L, TimeUnit.SECONDS);
} catch (InterruptedException e) {
// We tried...
}
PruneManager.getInstance().stop();
}
}
@@ -623,6 +628,11 @@ public class Controller extends Thread {
return peerChainTipData == null || peerChainTipData.getLastBlockSignature() == null || inferiorChainTips.contains(new ByteArray(peerChainTipData.getLastBlockSignature()));
};
public static final Predicate<Peer> hasOldVersion = peer -> {
final String minPeerVersion = Settings.getInstance().getMinPeerVersion();
return peer.isAtLeastVersion(minPeerVersion) == false;
};
private void potentiallySynchronize() throws InterruptedException {
// Already synchronizing via another thread?
if (this.isSynchronizing)
@@ -639,11 +649,15 @@ public class Controller extends Thread {
// Disregard peers that don't have a recent block
peers.removeIf(hasNoRecentBlock);
// Disregard peers that are on an old version
peers.removeIf(hasOldVersion);
checkRecoveryModeForPeers(peers);
if (recoveryMode) {
peers = Network.getInstance().getHandshakedPeers();
peers.removeIf(hasOnlyGenesisBlock);
peers.removeIf(hasMisbehaved);
peers.removeIf(hasOldVersion);
}
// Check we have enough peers to potentially synchronize
@@ -668,8 +682,8 @@ public class Controller extends Thread {
peers.removeIf(hasInferiorChainTip);
final int peersRemoved = peersBeforeComparison - peers.size();
if (peersRemoved > 0)
LOGGER.info(String.format("Ignoring %d peers on inferior chains. Peers remaining: %d", peersRemoved, peers.size()));
if (peersRemoved > 0 && peers.size() > 0)
LOGGER.debug(String.format("Ignoring %d peers on inferior chains. Peers remaining: %d", peersRemoved, peers.size()));
if (peers.isEmpty())
return;
@@ -678,7 +692,7 @@ public class Controller extends Thread {
StringBuilder finalPeersString = new StringBuilder();
for (Peer peer : peers)
finalPeersString = finalPeersString.length() > 0 ? finalPeersString.append(", ").append(peer) : finalPeersString.append(peer);
LOGGER.info(String.format("Choosing random peer from: [%s]", finalPeersString.toString()));
LOGGER.debug(String.format("Choosing random peer from: [%s]", finalPeersString.toString()));
}
// Pick random peer to sync with
@@ -701,6 +715,7 @@ public class Controller extends Thread {
hasStatusChanged = true;
}
}
peer.setSyncInProgress(true);
if (hasStatusChanged)
updateSysTray();
@@ -780,6 +795,7 @@ public class Controller extends Thread {
return syncResult;
} finally {
isSynchronizing = false;
peer.setSyncInProgress(false);
}
}
@@ -831,6 +847,7 @@ public class Controller extends Thread {
private void updateSysTray() {
if (NTP.getTime() == null) {
SysTray.getInstance().setToolTipText(Translator.INSTANCE.translate("SysTray", "SYNCHRONIZING_CLOCK"));
SysTray.getInstance().setTrayIcon(1);
return;
}
@@ -844,17 +861,25 @@ public class Controller extends Thread {
String actionText;
synchronized (this.syncLock) {
if (this.isMintingPossible)
if (this.isMintingPossible) {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_ENABLED");
else if (this.isSynchronizing)
SysTray.getInstance().setTrayIcon(2);
}
else if (this.isSynchronizing) {
actionText = String.format("%s - %d%%", Translator.INSTANCE.translate("SysTray", "SYNCHRONIZING_BLOCKCHAIN"), this.syncPercent);
else if (numberOfPeers < Settings.getInstance().getMinBlockchainPeers())
SysTray.getInstance().setTrayIcon(3);
}
else if (numberOfPeers < Settings.getInstance().getMinBlockchainPeers()) {
actionText = Translator.INSTANCE.translate("SysTray", "CONNECTING");
else
SysTray.getInstance().setTrayIcon(3);
}
else {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_DISABLED");
SysTray.getInstance().setTrayIcon(4);
}
}
String tooltip = String.format("%s - %d %s - %s %d", actionText, numberOfPeers, connectionsText, heightText, height) + "\n" + String.format("Build version: %s", this.buildVersion);
String tooltip = String.format("%s - %d %s - %s %d", actionText, numberOfPeers, connectionsText, heightText, height) + "\n" + String.format("%s: %s", Translator.INSTANCE.translate("SysTray", "BUILD_VERSION"), this.buildVersion);
SysTray.getInstance().setToolTipText(tooltip);
this.callbackExecutor.execute(() -> {
@@ -874,14 +899,19 @@ public class Controller extends Thread {
List<TransactionData> transactions = repository.getTransactionRepository().getUnconfirmedTransactions();
int deletedCount = 0;
for (TransactionData transactionData : transactions) {
Transaction transaction = Transaction.fromData(repository, transactionData);
if (now >= transaction.getDeadline()) {
LOGGER.info(() -> String.format("Deleting expired, unconfirmed transaction %s", Base58.encode(transactionData.getSignature())));
LOGGER.debug(() -> String.format("Deleting expired, unconfirmed transaction %s", Base58.encode(transactionData.getSignature())));
repository.getTransactionRepository().delete(transactionData);
deletedCount++;
}
}
if (deletedCount > 0) {
LOGGER.info(String.format("Deleted %d expired, unconfirmed transaction%s", deletedCount, (deletedCount == 1 ? "" : "s")));
}
repository.saveChanges();
} catch (DataException e) {
@@ -1249,6 +1279,13 @@ public class Controller extends Thread {
try (final Repository repository = RepositoryManager.getRepository()) {
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData != null) {
if (PruneManager.getInstance().isBlockPruned(blockData.getHeight(), repository)) {
// If this is a pruned block, we likely only have partial data, so best not to send it
blockData = null;
}
}
if (blockData == null) {
// We don't have this block
this.stats.getBlockMessageStats.unknownBlocks.getAndIncrement();
@@ -1370,6 +1407,14 @@ public class Controller extends Thread {
BlockData blockData = repository.getBlockRepository().fromReference(parentSignature);
if (blockData != null) {
if (PruneManager.getInstance().isBlockPruned(blockData.getHeight(), repository)) {
// If this request contains a pruned block, we likely only have partial data, so best not to send anything
// We always prune from the oldest first, so it's fine to just check the first block requested
blockData = null;
}
}
while (blockData != null && blockSummaries.size() < numberRequested) {
BlockSummaryData blockSummary = new BlockSummaryData(blockData);
blockSummaries.add(blockSummary);


@@ -16,6 +16,7 @@ import org.qortal.account.Account;
import org.qortal.account.PublicKeyAccount;
import org.qortal.block.Block;
import org.qortal.block.Block.ValidationResult;
import org.qortal.block.BlockChain;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.block.CommonBlockData;
@@ -34,8 +35,10 @@ import org.qortal.network.message.Message.MessageType;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.transaction.Transaction;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
public class Synchronizer {
@@ -55,9 +58,9 @@ public class Synchronizer {
/** Maximum number of block signatures we ask from peer in one go */
private static final int MAXIMUM_REQUEST_SIZE = 200; // XXX move to Settings?
/** Number of retry attempts if a peer fails to respond with the requested data */
private static final int MAXIMUM_RETRIES = 2; // XXX move to Settings?
// Keep track of the size of the last re-org, so it can be logged
private int lastReorgSize;
private static Synchronizer instance;
@@ -111,6 +114,7 @@ public class Synchronizer {
LOGGER.debug(String.format("Searching for common blocks with %d peers...", peers.size()));
final long startTime = System.currentTimeMillis();
int commonBlocksFound = 0;
boolean wereNewRequestsMade = false;
for (Peer peer : peers) {
// Are we shutting down?
@@ -120,6 +124,7 @@ public class Synchronizer {
// Check if we can use the cached common block data, by comparing the peer's current chain tip against the peer's chain tip when we last found our common block
if (peer.canUseCachedCommonBlockData()) {
LOGGER.debug(String.format("Skipping peer %s because we already have the latest common block data in our cache. Cached common block sig is %.08s", peer, Base58.encode(peer.getCommonBlockData().getCommonBlockSummary().getSignature())));
commonBlocksFound++;
continue;
}
@@ -130,10 +135,15 @@ public class Synchronizer {
Synchronizer.getInstance().findCommonBlockWithPeer(peer, repository);
if (peer.getCommonBlockData() != null)
commonBlocksFound++;
// This round wasn't served entirely from the cache, so we may want to log the results
wereNewRequestsMade = true;
}
final long totalTimeTaken = System.currentTimeMillis() - startTime;
LOGGER.info(String.format("Finished searching for common blocks with %d peer%s. Found: %d. Total time taken: %d ms", peers.size(), (peers.size() != 1 ? "s" : ""), commonBlocksFound, totalTimeTaken));
if (wereNewRequestsMade) {
final long totalTimeTaken = System.currentTimeMillis() - startTime;
LOGGER.debug(String.format("Finished searching for common blocks with %d peer%s. Found: %d. Total time taken: %d ms", peers.size(), (peers.size() != 1 ? "s" : ""), commonBlocksFound, totalTimeTaken));
}
return SynchronizationResult.OK;
} finally {
@@ -171,7 +181,7 @@ public class Synchronizer {
ourInitialHeight, Base58.encode(ourLastBlockSignature), ourLatestBlockData.getTimestamp()));
List<BlockSummaryData> peerBlockSummaries = new ArrayList<>();
SynchronizationResult findCommonBlockResult = fetchSummariesFromCommonBlock(repository, peer, ourInitialHeight, false, peerBlockSummaries);
SynchronizationResult findCommonBlockResult = fetchSummariesFromCommonBlock(repository, peer, ourInitialHeight, false, peerBlockSummaries, false);
if (findCommonBlockResult != SynchronizationResult.OK) {
// Logging performed by fetchSummariesFromCommonBlock() above
peer.setCommonBlockData(null);
@@ -226,6 +236,10 @@ public class Synchronizer {
return peers;
}
// We will switch to a new chain weight consensus algorithm at a hard fork, so determine if this has happened yet
boolean usingSameLengthChainWeight = (NTP.getTime() >= BlockChain.getInstance().getCalcChainWeightTimestamp());
LOGGER.debug(String.format("Using %s chain weight consensus algorithm", (usingSameLengthChainWeight ? "same-length" : "variable-length")));
// Retrieve a list of unique common blocks from this list of peers
List<BlockSummaryData> commonBlocks = this.uniqueCommonBlocks(peers);
@@ -267,7 +281,7 @@ public class Synchronizer {
// Calculate the length of the shortest peer chain sharing this common block, including our chain
final int ourAdditionalBlocksAfterCommonBlock = ourHeight - commonBlockSummary.getHeight();
int minChainLength = this.calculateMinChainLength(commonBlockSummary, ourAdditionalBlocksAfterCommonBlock, peersSharingCommonBlock);
int minChainLength = this.calculateMinChainLengthOfPeers(peersSharingCommonBlock, commonBlockSummary);
// Fetch block summaries from each peer
for (Peer peer : peersSharingCommonBlock) {
@@ -277,7 +291,9 @@ public class Synchronizer {
return peers;
// Count the number of blocks this peer has beyond our common block
final int peerHeight = peer.getChainTipData().getLastHeight();
final PeerChainTipData peerChainTipData = peer.getChainTipData();
final int peerHeight = peerChainTipData.getLastHeight();
final byte[] peerLastBlockSignature = peerChainTipData.getLastBlockSignature();
final int peerAdditionalBlocksAfterCommonBlock = peerHeight - commonBlockSummary.getHeight();
// Limit the number of blocks we are comparing. FUTURE: we could request more in batches, but there may not be a case when this is needed
int summariesRequired = Math.min(peerAdditionalBlocksAfterCommonBlock, MAXIMUM_REQUEST_SIZE);
@@ -287,7 +303,7 @@ public class Synchronizer {
if (peer.canUseCachedCommonBlockData()) {
if (peer.getCommonBlockData().getBlockSummariesAfterCommonBlock() != null) {
if (peer.getCommonBlockData().getBlockSummariesAfterCommonBlock().size() == summariesRequired) {
LOGGER.debug(String.format("Using cached block summaries for peer %s", peer));
LOGGER.trace(String.format("Using cached block summaries for peer %s", peer));
useCachedSummaries = true;
}
}
@@ -297,49 +313,65 @@ public class Synchronizer {
if (summariesRequired > 0) {
LOGGER.trace(String.format("Requesting %d block summar%s from peer %s after common block %.8s. Peer height: %d", summariesRequired, (summariesRequired != 1 ? "ies" : "y"), peer, Base58.encode(commonBlockSummary.getSignature()), peerHeight));
List<BlockSummaryData> blockSummaries = this.getBlockSummaries(peer, commonBlockSummary.getSignature(), summariesRequired);
peer.getCommonBlockData().setBlockSummariesAfterCommonBlock(blockSummaries);
// Forget any cached summaries
peer.getCommonBlockData().setBlockSummariesAfterCommonBlock(null);
// Request new block summaries
List<BlockSummaryData> blockSummaries = this.getBlockSummaries(peer, commonBlockSummary.getSignature(), summariesRequired);
if (blockSummaries != null) {
LOGGER.trace(String.format("Peer %s returned %d block summar%s", peer, blockSummaries.size(), (blockSummaries.size() != 1 ? "ies" : "y")));
// We need to adjust minChainLength if peers fail to return all expected block summaries
if (blockSummaries.size() < summariesRequired) {
// This could mean that the peer has re-orged. But we still have the same common block, so it's safe to proceed with this set of signatures instead.
LOGGER.debug(String.format("Peer %s returned %d block summar%s instead of expected %d", peer, blockSummaries.size(), (blockSummaries.size() != 1 ? "ies" : "y"), summariesRequired));
// Update minChainLength if we have at least 1 block for this peer. If we don't have any blocks, this peer will be excluded from chain weight comparisons later in the process, so we shouldn't update minChainLength
if (blockSummaries.size() > 0)
minChainLength = blockSummaries.size();
}
if (blockSummaries.size() < summariesRequired)
// This could mean that the peer has re-orged. Exclude this peer until they return the summaries we expect.
LOGGER.debug(String.format("Peer %s returned %d block summar%s instead of expected %d - excluding them from this round", peer, blockSummaries.size(), (blockSummaries.size() != 1 ? "ies" : "y"), summariesRequired));
else if (blockSummaryWithSignature(peerLastBlockSignature, blockSummaries) == null)
// We don't have a block summary for the peer's reported chain tip, so should exclude it
LOGGER.debug(String.format("Peer %s didn't return a block summary with signature %.8s - excluding them from this round", peer, Base58.encode(peerLastBlockSignature)));
else
// All looks good, so store the retrieved block summaries in the peer's cache
peer.getCommonBlockData().setBlockSummariesAfterCommonBlock(blockSummaries);
}
} else {
// There are no block summaries after this common block
peer.getCommonBlockData().setBlockSummariesAfterCommonBlock(null);
}
}
// Reduce minChainLength if needed. If we don't have any blocks, this peer will be excluded from chain weight comparisons later in the process, so we shouldn't update minChainLength
List <BlockSummaryData> peerBlockSummaries = peer.getCommonBlockData().getBlockSummariesAfterCommonBlock();
if (peerBlockSummaries != null && peerBlockSummaries.size() > 0)
if (peerBlockSummaries.size() < minChainLength)
minChainLength = peerBlockSummaries.size();
}
// Fetch our corresponding block summaries. Limit to MAXIMUM_REQUEST_SIZE, in order to make the comparison fairer, as peers have been limited too
final int ourSummariesRequired = Math.min(ourAdditionalBlocksAfterCommonBlock, MAXIMUM_REQUEST_SIZE);
LOGGER.trace(String.format("About to fetch our block summaries from %d to %d. Our height: %d", commonBlockSummary.getHeight() + 1, commonBlockSummary.getHeight() + ourSummariesRequired, ourHeight));
List<BlockSummaryData> ourBlockSummaries = repository.getBlockRepository().getBlockSummaries(commonBlockSummary.getHeight() + 1, commonBlockSummary.getHeight() + ourSummariesRequired);
if (ourBlockSummaries.isEmpty())
if (ourBlockSummaries.isEmpty()) {
LOGGER.debug(String.format("We don't have any block summaries so can't compare our chain against peers with this common block. We can still compare them against each other."));
else
}
else {
populateBlockSummariesMinterLevels(repository, ourBlockSummaries);
// Reduce minChainLength if we have fewer summaries
if (ourBlockSummaries.size() < minChainLength)
minChainLength = ourBlockSummaries.size();
}
// Create array to hold peers for comparison
List<Peer> superiorPeersForComparison = new ArrayList<>();
// Calculate max height for chain weight comparisons
int maxHeightForChainWeightComparisons = commonBlockSummary.getHeight() + minChainLength;
// Calculate our chain weight
BigInteger ourChainWeight = BigInteger.valueOf(0);
if (ourBlockSummaries.size() > 0)
ourChainWeight = Block.calcChainWeight(commonBlockSummary.getHeight(), commonBlockSummary.getSignature(), ourBlockSummaries, minChainLength);
ourChainWeight = Block.calcChainWeight(commonBlockSummary.getHeight(), commonBlockSummary.getSignature(), ourBlockSummaries, maxHeightForChainWeightComparisons);
NumberFormat formatter = new DecimalFormat("0.###E0");
NumberFormat accurateFormatter = new DecimalFormat("0.################E0");
LOGGER.debug(String.format("Our chain weight based on %d blocks is %s", ourBlockSummaries.size(), formatter.format(ourChainWeight)));
LOGGER.debug(String.format("Our chain weight based on %d blocks is %s", (usingSameLengthChainWeight ? minChainLength : ourBlockSummaries.size()), formatter.format(ourChainWeight)));
LOGGER.debug(String.format("Listing peers with common block %.8s...", Base58.encode(commonBlockSummary.getSignature())));
for (Peer peer : peersSharingCommonBlock) {
@@ -358,10 +390,10 @@ public class Synchronizer {
populateBlockSummariesMinterLevels(repository, peerBlockSummariesAfterCommonBlock);
// Calculate cumulative chain weight of this blockchain subset, from common block to highest mutual block held by all peers in this group.
LOGGER.debug(String.format("About to calculate chain weight based on %d blocks for peer %s with common block %.8s (peer has %d blocks after common block)", peerBlockSummariesAfterCommonBlock.size(), peer, Base58.encode(commonBlockSummary.getSignature()), peerAdditionalBlocksAfterCommonBlock));
BigInteger peerChainWeight = Block.calcChainWeight(commonBlockSummary.getHeight(), commonBlockSummary.getSignature(), peerBlockSummariesAfterCommonBlock, minChainLength);
LOGGER.debug(String.format("About to calculate chain weight based on %d blocks for peer %s with common block %.8s (peer has %d blocks after common block)", (usingSameLengthChainWeight ? minChainLength : peerBlockSummariesAfterCommonBlock.size()), peer, Base58.encode(commonBlockSummary.getSignature()), peerAdditionalBlocksAfterCommonBlock));
BigInteger peerChainWeight = Block.calcChainWeight(commonBlockSummary.getHeight(), commonBlockSummary.getSignature(), peerBlockSummariesAfterCommonBlock, maxHeightForChainWeightComparisons);
peer.getCommonBlockData().setChainWeight(peerChainWeight);
LOGGER.debug(String.format("Chain weight of peer %s based on %d blocks (%d - %d) is %s", peer, peerBlockSummariesAfterCommonBlock.size(), peerBlockSummariesAfterCommonBlock.get(0).getHeight(), peerBlockSummariesAfterCommonBlock.get(peerBlockSummariesAfterCommonBlock.size()-1).getHeight(), formatter.format(peerChainWeight)));
LOGGER.debug(String.format("Chain weight of peer %s based on %d blocks (%d - %d) is %s", peer, (usingSameLengthChainWeight ? minChainLength : peerBlockSummariesAfterCommonBlock.size()), peerBlockSummariesAfterCommonBlock.get(0).getHeight(), peerBlockSummariesAfterCommonBlock.get(peerBlockSummariesAfterCommonBlock.size()-1).getHeight(), formatter.format(peerChainWeight)));
// Compare against our chain - if our blockchain has greater weight then don't synchronize with peer (or any others in this group)
if (ourChainWeight.compareTo(peerChainWeight) > 0) {
@@ -370,8 +402,8 @@ public class Synchronizer {
peers.remove(peer);
}
else {
// Our chain is inferior
LOGGER.debug(String.format("Peer %s is on a better chain to us. We will compare the other peers sharing this common block against each other, and drop all peers sharing higher common blocks.", peer));
// Our chain is inferior or equal
LOGGER.debug(String.format("Peer %s is on an equal or better chain to us. We will compare the other peers sharing this common block against each other, and drop all peers sharing higher common blocks.", peer));
dropPeersAfterCommonBlockHeight = commonBlockSummary.getHeight();
superiorPeersForComparison.add(peer);
}
@@ -393,6 +425,9 @@ public class Synchronizer {
peers.remove(peer);
}
}
// FUTURE: we may want to prefer peers with additional blocks, and compare the additional blocks against each other.
// This would fast track us to the best candidate for the latest block.
// Right now, peers with the exact same chain as us are treated equally to those with an additional block.
}
}
@@ -411,33 +446,39 @@ public class Synchronizer {
for (Peer peer : peers) {
if (peer.getCommonBlockData() != null && peer.getCommonBlockData().getCommonBlockSummary() != null) {
LOGGER.debug(String.format("Peer %s has common block %.8s", peer, Base58.encode(peer.getCommonBlockData().getCommonBlockSummary().getSignature())));
LOGGER.trace(String.format("Peer %s has common block %.8s", peer, Base58.encode(peer.getCommonBlockData().getCommonBlockSummary().getSignature())));
BlockSummaryData commonBlockSummary = peer.getCommonBlockData().getCommonBlockSummary();
if (!commonBlocks.contains(commonBlockSummary))
commonBlocks.add(commonBlockSummary);
}
else {
LOGGER.debug(String.format("Peer %s has no common block data. Skipping...", peer));
LOGGER.trace(String.format("Peer %s has no common block data. Skipping...", peer));
}
}
return commonBlocks;
}
private int calculateMinChainLength(BlockSummaryData commonBlockSummary, int ourAdditionalBlocksAfterCommonBlock, List<Peer> peersSharingCommonBlock) {
// Calculate the length of the shortest peer chain sharing this common block, including our chain
int minChainLength = ourAdditionalBlocksAfterCommonBlock;
private int calculateMinChainLengthOfPeers(List<Peer> peersSharingCommonBlock, BlockSummaryData commonBlockSummary) {
// Calculate the length of the shortest peer chain sharing this common block
int minChainLength = 0;
for (Peer peer : peersSharingCommonBlock) {
final int peerHeight = peer.getChainTipData().getLastHeight();
final int peerAdditionalBlocksAfterCommonBlock = peerHeight - commonBlockSummary.getHeight();
if (peerAdditionalBlocksAfterCommonBlock < minChainLength)
if (peerAdditionalBlocksAfterCommonBlock < minChainLength || minChainLength == 0)
minChainLength = peerAdditionalBlocksAfterCommonBlock;
}
return minChainLength;
}
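A hypothetical illustration of the shortest-chain calculation in calculateMinChainLengthOfPeers() above; the heights below are assumed values, not taken from this changeset.

public class MinChainLengthExample {
    public static void main(String[] args) {
        final int commonBlockHeight = 1000;                // assumed common block height
        final int[] peerChainTips = { 1003, 1010, 1005 };  // assumed peer chain tip heights

        int minChainLength = 0;
        for (int peerHeight : peerChainTips) {
            final int peerAdditionalBlocks = peerHeight - commonBlockHeight;
            if (peerAdditionalBlocks < minChainLength || minChainLength == 0)
                minChainLength = peerAdditionalBlocks;
        }

        // Prints 3 - chain weights for this group are compared over at most 3 blocks after the common block
        System.out.println(minChainLength);
    }
}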
private BlockSummaryData blockSummaryWithSignature(byte[] signature, List<BlockSummaryData> blockSummaries) {
if (blockSummaries != null)
return blockSummaries.stream().filter(blockSummary -> Arrays.equals(blockSummary.getSignature(), signature)).findAny().orElse(null);
return null;
}
/**
* Attempt to synchronize blockchain with peer.
@@ -468,12 +509,25 @@ public class Synchronizer {
byte[] peersLastBlockSignature = peerChainTipData.getLastBlockSignature();
byte[] ourLastBlockSignature = ourLatestBlockData.getSignature();
LOGGER.debug(String.format("Synchronizing with peer %s at height %d, sig %.8s, ts %d; our height %d, sig %.8s, ts %d", peer,
String syncString = String.format("Synchronizing with peer %s at height %d, sig %.8s, ts %d; our height %d, sig %.8s, ts %d", peer,
peerHeight, Base58.encode(peersLastBlockSignature), peer.getChainTipData().getLastBlockTimestamp(),
ourInitialHeight, Base58.encode(ourLastBlockSignature), ourLatestBlockData.getTimestamp()));
ourInitialHeight, Base58.encode(ourLastBlockSignature), ourLatestBlockData.getTimestamp());
// If our latest block is very old, we should log that we're attempting to sync with a peer
// Otherwise, it can appear as though nothing is happening for a while after launch
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp != null && ourLatestBlockData.getTimestamp() < minLatestBlockTimestamp) {
LOGGER.info(syncString);
}
else {
LOGGER.debug(syncString);
}
// Reset last re-org size as we are starting a new sync round
this.lastReorgSize = 0;
List<BlockSummaryData> peerBlockSummaries = new ArrayList<>();
SynchronizationResult findCommonBlockResult = fetchSummariesFromCommonBlock(repository, peer, ourInitialHeight, force, peerBlockSummaries);
SynchronizationResult findCommonBlockResult = fetchSummariesFromCommonBlock(repository, peer, ourInitialHeight, force, peerBlockSummaries, true);
if (findCommonBlockResult != SynchronizationResult.OK) {
// Logging performed by fetchSummariesFromCommonBlock() above
// Clear our common block cache for this peer
@@ -529,10 +583,19 @@ public class Synchronizer {
// Commit
repository.saveChanges();
// Create string for logging
final BlockData newLatestBlockData = repository.getBlockRepository().getLastBlock();
LOGGER.info(String.format("Synchronized with peer %s to height %d, sig %.8s, ts: %d", peer,
String syncLog = String.format("Synchronized with peer %s to height %d, sig %.8s, ts: %d", peer,
newLatestBlockData.getHeight(), Base58.encode(newLatestBlockData.getSignature()),
newLatestBlockData.getTimestamp()));
newLatestBlockData.getTimestamp());
// Append re-org info
if (this.lastReorgSize > 0) {
syncLog = syncLog.concat(String.format(", size: %d", this.lastReorgSize));
}
// Log sync info
LOGGER.info(syncLog);
return SynchronizationResult.OK;
} finally {
@@ -555,7 +618,7 @@ public class Synchronizer {
* @throws DataException
* @throws InterruptedException
*/
public SynchronizationResult fetchSummariesFromCommonBlock(Repository repository, Peer peer, int ourHeight, boolean force, List<BlockSummaryData> blockSummariesFromCommon) throws DataException, InterruptedException {
public SynchronizationResult fetchSummariesFromCommonBlock(Repository repository, Peer peer, int ourHeight, boolean force, List<BlockSummaryData> blockSummariesFromCommon, boolean infoLogWhenNotFound) throws DataException, InterruptedException {
// Start by asking for a few recent block hashes as this will cover a majority of reorgs
// Failing that, back off exponentially
int step = INITIAL_BLOCK_STEP;
@@ -584,8 +647,12 @@ public class Synchronizer {
blockSummariesBatch = this.getBlockSummaries(peer, testSignature, step);
if (blockSummariesBatch == null) {
if (infoLogWhenNotFound)
LOGGER.info(String.format("Error while trying to find common block with peer %s", peer));
else
LOGGER.debug(String.format("Error while trying to find common block with peer %s", peer));
// No response - give up this time
LOGGER.info(String.format("Error while trying to find common block with peer %s", peer));
return SynchronizationResult.NO_REPLY;
}
@@ -703,7 +770,7 @@ public class Synchronizer {
populateBlockSummariesMinterLevels(repository, ourBlockSummaries);
populateBlockSummariesMinterLevels(repository, peerBlockSummaries);
final int mutualHeight = commonBlockHeight - 1 + Math.min(ourBlockSummaries.size(), peerBlockSummaries.size());
final int mutualHeight = commonBlockHeight + Math.min(ourBlockSummaries.size(), peerBlockSummaries.size());
// Calculate cumulative chain weights of both blockchain subsets, from common block to highest mutual block.
BigInteger ourChainWeight = Block.calcChainWeight(commonBlockHeight, commonBlockSig, ourBlockSummaries, mutualHeight);
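For example (illustrative numbers only): with a common block at height 700, five of our block summaries and three peer summaries, mutualHeight is now 700 + min(5, 3) = 703; the previous formula gave 702, which dropped the last mutual block from both chain-weight calculations.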
@@ -733,6 +800,7 @@ public class Synchronizer {
LOGGER.debug(() -> String.format("Fetching peer %s chain from height %d, sig %.8s", peer, commonBlockHeight, commonBlockSig58));
final int maxRetries = Settings.getInstance().getMaxRetries();
// Overall plan: fetch peer's blocks first, then orphan, then apply
@@ -771,17 +839,29 @@ public class Synchronizer {
if (cachedCommonBlockData != null)
cachedCommonBlockData.setBlockSummariesAfterCommonBlock(null);
// If we have already received RECENT blocks from this peer, go ahead and apply them
// If we have already received blocks from this peer that are newer than what we already have, go ahead and apply them
if (peerBlocks.size() > 0) {
final BlockData ourLatestBlockData = repository.getBlockRepository().getLastBlock();
final Block peerLatestBlock = peerBlocks.get(peerBlocks.size() - 1);
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (peerLatestBlock != null && minLatestBlockTimestamp != null
&& peerLatestBlock.getBlockData().getTimestamp() > minLatestBlockTimestamp) {
LOGGER.debug("Newly received blocks are recent, so we will apply them");
break;
if (ourLatestBlockData != null && peerLatestBlock != null && minLatestBlockTimestamp != null) {
// If our latest block is very old....
if (ourLatestBlockData.getTimestamp() < minLatestBlockTimestamp) {
// ... and we have received a block that is more recent than our latest block ...
if (peerLatestBlock.getBlockData().getTimestamp() > ourLatestBlockData.getTimestamp()) {
// ... then apply the blocks, as it takes us a step forward.
// This is particularly useful when starting up a node that was on a small fork when it was last shut down.
// In these cases, we now allow the node to sync forward, and get onto the main chain again.
// Without this, we would require that the node syncs ENTIRELY with this peer,
// and any problems downloading a block would cause all progress to be lost.
LOGGER.debug(String.format("Newly received blocks are %d ms newer than our latest block - so we will apply them", peerLatestBlock.getBlockData().getTimestamp() - ourLatestBlockData.getTimestamp()));
break;
}
}
}
}
// Otherwise, give up and move on to the next peer, to avoid putting our chain into an outdated state
// Otherwise, give up and move on to the next peer, to avoid putting our chain into an outdated or incomplete state
return SynchronizationResult.NO_REPLY;
}
@@ -804,19 +884,30 @@ public class Synchronizer {
LOGGER.info(String.format("Peer %s failed to respond with block for height %d, sig %.8s", peer,
nextHeight, Base58.encode(nextPeerSignature)));
if (retryCount >= MAXIMUM_RETRIES) {
// If we have already received RECENT blocks from this peer, go ahead and apply them
if (retryCount >= maxRetries) {
// If we have already received blocks from this peer that are newer than what we already have, go ahead and apply them
if (peerBlocks.size() > 0) {
final BlockData ourLatestBlockData = repository.getBlockRepository().getLastBlock();
final Block peerLatestBlock = peerBlocks.get(peerBlocks.size() - 1);
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (peerLatestBlock != null && minLatestBlockTimestamp != null
&& peerLatestBlock.getBlockData().getTimestamp() > minLatestBlockTimestamp) {
LOGGER.debug("Newly received blocks are recent, so we will apply them");
break;
if (ourLatestBlockData != null && peerLatestBlock != null && minLatestBlockTimestamp != null) {
// If our latest block is very old....
if (ourLatestBlockData.getTimestamp() < minLatestBlockTimestamp) {
// ... and we have received a block that is more recent than our latest block ...
if (peerLatestBlock.getBlockData().getTimestamp() > ourLatestBlockData.getTimestamp()) {
// ... then apply the blocks, as it takes us a step forward.
// This is particularly useful when starting up a node that was on a small fork when it was last shut down.
// In these cases, we now allow the node to sync forward, and get onto the main chain again.
// Without this, we would require that the node syncs ENTIRELY with this peer,
// and any problems downloading a block would cause all progress to be lost.
LOGGER.debug(String.format("Newly received blocks are %d ms newer than our latest block - so we will apply them", peerLatestBlock.getBlockData().getTimestamp() - ourLatestBlockData.getTimestamp()));
break;
}
}
}
}
// Otherwise, give up and move on to the next peer, to avoid putting our chain into an outdated state
// Otherwise, give up and move on to the next peer, to avoid putting our chain into an outdated or incomplete state
return SynchronizationResult.NO_REPLY;
} else {
@@ -824,9 +915,9 @@ public class Synchronizer {
peerBlockSignatures.clear();
numberSignaturesRequired = peerHeight - height;
// Retry until retryCount reaches MAXIMUM_RETRIES
// Retry until retryCount reaches maxRetries
retryCount++;
int triesRemaining = MAXIMUM_RETRIES - retryCount;
int triesRemaining = maxRetries - retryCount;
LOGGER.info(String.format("Re-issuing request to peer %s (%d attempt%s remaining)", peer, triesRemaining, (triesRemaining != 1 ? "s" : "")));
continue;
}
@@ -858,6 +949,7 @@ public class Synchronizer {
// Unwind to common block (unless common block is our latest block)
int ourHeight = ourInitialHeight;
LOGGER.debug(String.format("Orphaning blocks back to common block height %d, sig %.8s. Our height: %d", commonBlockHeight, commonBlockSig58, ourHeight));
int reorgSize = ourHeight - commonBlockHeight;
BlockData orphanBlockData = repository.getBlockRepository().fromHeight(ourInitialHeight);
while (ourHeight > commonBlockHeight) {
@@ -906,6 +998,7 @@ public class Synchronizer {
Controller.getInstance().onNewBlock(newBlock.getBlockData());
}
this.lastReorgSize = reorgSize;
return SynchronizationResult.OK;
}


@@ -0,0 +1,92 @@
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.utils.NTP;
public class AtStatesPruner implements Runnable {
private static final Logger LOGGER = LogManager.getLogger(AtStatesPruner.class);
@Override
public void run() {
Thread.currentThread().setName("AT States pruner");
if (!Settings.getInstance().isPruningEnabled()) {
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
int pruneStartHeight = repository.getATRepository().getAtPruneHeight();
repository.getATRepository().rebuildLatestAtStates();
repository.saveChanges();
while (!Controller.isStopping()) {
repository.discardChanges();
Thread.sleep(Settings.getInstance().getAtStatesPruneInterval());
BlockData chainTip = Controller.getInstance().getChainTip();
if (chainTip == null || NTP.getTime() == null)
continue;
// Don't even attempt if we're mid-sync as our repository requests will be delayed for ages
if (Controller.getInstance().isSynchronizing())
continue;
// Prune AT states for all blocks up until our latest minus pruneBlockLimit
final int ourLatestHeight = chainTip.getHeight();
final int upperPrunableHeight = ourLatestHeight - Settings.getInstance().getPruneBlockLimit();
int upperBatchHeight = pruneStartHeight + Settings.getInstance().getAtStatesPruneBatchSize();
int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
if (pruneStartHeight >= upperPruneHeight)
continue;
LOGGER.debug(String.format("Pruning AT states between blocks %d and %d...", pruneStartHeight, upperPruneHeight));
int numAtStatesPruned = repository.getATRepository().pruneAtStates(pruneStartHeight, upperPruneHeight);
repository.saveChanges();
int numAtStateDataRowsTrimmed = repository.getATRepository().trimAtStates(
pruneStartHeight, upperPruneHeight, Settings.getInstance().getAtStatesTrimLimit());
repository.saveChanges();
if (numAtStatesPruned > 0 || numAtStateDataRowsTrimmed > 0) {
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.debug(() -> String.format("Pruned %d AT state%s between blocks %d and %d",
numAtStatesPruned, (numAtStatesPruned != 1 ? "s" : ""),
finalPruneStartHeight, upperPruneHeight));
} else {
// Can we move onto next batch?
if (upperPrunableHeight > upperBatchHeight) {
pruneStartHeight = upperBatchHeight;
repository.getATRepository().setAtPruneHeight(pruneStartHeight);
repository.getATRepository().rebuildLatestAtStates();
repository.saveChanges();
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.debug(() -> String.format("Bumping AT state base prune height to %d", finalPruneStartHeight));
}
else {
// We've pruned up to the upper prunable height
// Back off for a while to save CPU for syncing
Thread.sleep(5*60*1000L);
}
}
}
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue trying to prune AT states: %s", e.getMessage()));
} catch (InterruptedException e) {
// Time to exit
}
}
}
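A minimal sketch of the batch-window arithmetic used by the pruning loop above; the figures are assumed stand-ins for the chain tip and the relevant settings.

public class AtStatesPruneWindowExample {
    public static void main(String[] args) {
        // Assumed values for illustration only; the real figures come from the chain tip and Settings
        final int ourLatestHeight = 250_000;
        final int pruneBlockLimit = 1_450;
        final int atStatesPruneBatchSize = 25;
        final int pruneStartHeight = 180_000;

        final int upperPrunableHeight = ourLatestHeight - pruneBlockLimit;             // 248_550
        final int upperBatchHeight = pruneStartHeight + atStatesPruneBatchSize;        // 180_025
        final int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);  // 180_025

        // The pruner only acts when the start height is below the window's upper bound
        if (pruneStartHeight < upperPruneHeight)
            System.out.printf("Would prune AT states between blocks %d and %d%n", pruneStartHeight, upperPruneHeight);
    }
}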


@@ -1,7 +1,8 @@
package org.qortal.controller;
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
@@ -20,7 +21,7 @@ public class AtStatesTrimmer implements Runnable {
try (final Repository repository = RepositoryManager.getRepository()) {
int trimStartHeight = repository.getATRepository().getAtTrimHeight();
repository.getATRepository().prepareForAtStateTrimming();
repository.getATRepository().rebuildLatestAtStates();
repository.saveChanges();
while (!Controller.isStopping()) {
@@ -62,7 +63,7 @@ public class AtStatesTrimmer implements Runnable {
if (upperTrimmableHeight > upperBatchHeight) {
trimStartHeight = upperBatchHeight;
repository.getATRepository().setAtTrimHeight(trimStartHeight);
repository.getATRepository().prepareForAtStateTrimming();
repository.getATRepository().rebuildLatestAtStates();
repository.saveChanges();
final int finalTrimStartHeight = trimStartHeight;


@@ -0,0 +1,86 @@
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.utils.NTP;
public class BlockPruner implements Runnable {
private static final Logger LOGGER = LogManager.getLogger(BlockPruner.class);
@Override
public void run() {
Thread.currentThread().setName("Block pruner");
if (!Settings.getInstance().isPruningEnabled()) {
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
int pruneStartHeight = repository.getBlockRepository().getBlockPruneHeight();
while (!Controller.isStopping()) {
repository.discardChanges();
Thread.sleep(Settings.getInstance().getBlockPruneInterval());
BlockData chainTip = Controller.getInstance().getChainTip();
if (chainTip == null || NTP.getTime() == null)
continue;
// Don't even attempt if we're mid-sync as our repository requests will be delayed for ages
if (Controller.getInstance().isSynchronizing())
continue;
// Prune all blocks up until our latest minus pruneBlockLimit
final int ourLatestHeight = chainTip.getHeight();
final int upperPrunableHeight = ourLatestHeight - Settings.getInstance().getPruneBlockLimit();
int upperBatchHeight = pruneStartHeight + Settings.getInstance().getBlockPruneBatchSize();
int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
if (pruneStartHeight >= upperPruneHeight) {
continue;
}
LOGGER.debug(String.format("Pruning blocks between %d and %d...", pruneStartHeight, upperPruneHeight));
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(pruneStartHeight, upperPruneHeight);
repository.saveChanges();
if (numBlocksPruned > 0) {
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.debug(() -> String.format("Pruned %d block%s between %d and %d",
numBlocksPruned, (numBlocksPruned != 1 ? "s" : ""),
finalPruneStartHeight, upperPruneHeight));
} else {
// Can we move onto next batch?
if (upperPrunableHeight > upperBatchHeight) {
pruneStartHeight = upperBatchHeight;
repository.getBlockRepository().setBlockPruneHeight(pruneStartHeight);
repository.saveChanges();
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.debug(() -> String.format("Bumping block base prune height to %d", finalPruneStartHeight));
}
else {
// We've pruned up to the upper prunable height
// Back off for a while to save CPU for syncing
Thread.sleep(10*60*1000L);
}
}
}
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue trying to prune blocks: %s", e.getMessage()));
} catch (InterruptedException e) {
// Time to exit
}
}
}


@@ -1,8 +1,9 @@
package org.qortal.controller;
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.block.BlockChain;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;


@@ -0,0 +1,87 @@
package org.qortal.controller.repository;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.settings.Settings;
import org.qortal.utils.DaemonThreadFactory;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class PruneManager {
private static PruneManager instance;
private boolean pruningEnabled = Settings.getInstance().isPruningEnabled();
private int pruneBlockLimit = Settings.getInstance().getPruneBlockLimit();
private ExecutorService executorService;
private PruneManager() {
}
public static synchronized PruneManager getInstance() {
if (instance == null)
instance = new PruneManager();
return instance;
}
public void start() {
this.executorService = Executors.newCachedThreadPool(new DaemonThreadFactory());
// Don't allow both the pruner and the trimmer to run at the same time.
// In pruning mode, we are already deleting far more than we would when trimming.
// In non-pruning mode, we still need to trim to keep the non-essential data
// out of the database. There isn't a case where both are needed at once.
// If we ever do need to enable both at once, be very careful with the AT state
// trimming, since both currently rely on having exclusive access to the
// prepareForAtStateTrimming() method. For both trimming and pruning to take place
// at once, we would need to synchronize this method in a way that both can't
// call it at the same time, as otherwise active ATs would be pruned/trimmed when
// they should have been kept.
if (Settings.getInstance().isPruningEnabled()) {
// Pruning enabled - start the pruning processes
this.executorService.execute(new AtStatesPruner());
this.executorService.execute(new BlockPruner());
}
else {
// Pruning disabled - use trimming instead
this.executorService.execute(new AtStatesTrimmer());
this.executorService.execute(new OnlineAccountsSignaturesTrimmer());
}
}
public void stop() {
this.executorService.shutdownNow();
try {
this.executorService.awaitTermination(2L, TimeUnit.SECONDS);
} catch (InterruptedException e) {
// We tried...
}
}
public boolean isBlockPruned(int height, Repository repository) throws DataException {
if (!this.pruningEnabled) {
return false;
}
BlockData chainTip = Controller.getInstance().getChainTip();
if (chainTip == null) {
throw new DataException("Unable to determine chain tip when checking if a block is pruned");
}
final int ourLatestHeight = chainTip.getHeight();
final int latestUnprunedHeight = ourLatestHeight - this.pruneBlockLimit;
return (height < latestUnprunedHeight);
}
}
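A minimal sketch of how a block-serving code path might consult isBlockPruned() before fetching a block; the surrounding class and method are hypothetical and not part of this changeset.

import org.qortal.controller.repository.PruneManager;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;

public class PrunedBlockLookupExample {
    public static BlockData fetchBlockIfUnpruned(int requestedHeight) throws DataException {
        try (final Repository repository = RepositoryManager.getRepository()) {
            if (PruneManager.getInstance().isBlockPruned(requestedHeight, repository)) {
                // Pruning nodes can't serve blocks below their prune limit
                return null;
            }
            return repository.getBlockRepository().fromHeight(requestedHeight);
        }
    }
}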


@@ -383,15 +383,6 @@ public class BitcoinACCTv1TradeBot implements AcctTradeBot {
atData = repository.getATRepository().fromATAddress(tradeBotData.getAtAddress());
if (atData == null) {
LOGGER.debug(() -> String.format("Unable to fetch trade AT %s from repository", tradeBotData.getAtAddress()));
// If it has been over 24 hours since we last updated this trade-bot entry then assume AT is never coming back
// and so wipe the trade-bot entry
if (tradeBotData.getTimestamp() + MAX_AT_CONFIRMATION_PERIOD < NTP.getTime()) {
LOGGER.info(() -> String.format("AT %s has been gone for too long - deleting trade-bot entry", tradeBotData.getAtAddress()));
repository.getCrossChainRepository().delete(tradeBotData.getTradePrivateKey());
repository.saveChanges();
}
return;
}
@@ -1042,7 +1033,7 @@ public class BitcoinACCTv1TradeBot implements AcctTradeBot {
return;
}
byte[] secretA = BitcoinACCTv1.findSecretA(repository, crossChainTradeData);
byte[] secretA = BitcoinACCTv1.getInstance().findSecretA(repository, crossChainTradeData);
if (secretA == null) {
LOGGER.debug(() -> String.format("Unable to find secret-A from redeem message to AT %s?", tradeBotData.getAtAddress()));
return;


@@ -0,0 +1,883 @@
package org.qortal.controller.tradebot;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.bitcoinj.core.*;
import org.bitcoinj.script.Script.ScriptType;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.account.PublicKeyAccount;
import org.qortal.api.model.crosschain.TradeBotCreateRequest;
import org.qortal.asset.Asset;
import org.qortal.crosschain.*;
import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATData;
import org.qortal.data.crosschain.CrossChainTradeData;
import org.qortal.data.crosschain.TradeBotData;
import org.qortal.data.transaction.BaseTransactionData;
import org.qortal.data.transaction.DeployAtTransactionData;
import org.qortal.data.transaction.MessageTransactionData;
import org.qortal.group.Group;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.transaction.DeployAtTransaction;
import org.qortal.transaction.MessageTransaction;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.transform.TransformationException;
import org.qortal.transform.transaction.DeployAtTransactionTransformer;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import static java.util.Arrays.stream;
import static java.util.stream.Collectors.toMap;
/**
* Performing cross-chain trading steps on behalf of user.
* <p>
* We deal with three different independent state-spaces here:
* <ul>
* <li>Qortal blockchain</li>
* <li>Foreign blockchain</li>
* <li>Trade-bot entries</li>
* </ul>
*/
public class DogecoinACCTv1TradeBot implements AcctTradeBot {
private static final Logger LOGGER = LogManager.getLogger(DogecoinACCTv1TradeBot.class);
public enum State implements TradeBot.StateNameAndValueSupplier {
BOB_WAITING_FOR_AT_CONFIRM(10, false, false),
BOB_WAITING_FOR_MESSAGE(15, true, true),
BOB_WAITING_FOR_AT_REDEEM(25, true, true),
BOB_DONE(30, false, false),
BOB_REFUNDED(35, false, false),
ALICE_WAITING_FOR_AT_LOCK(85, true, true),
ALICE_DONE(95, false, false),
ALICE_REFUNDING_A(105, true, true),
ALICE_REFUNDED(110, false, false);
private static final Map<Integer, State> map = stream(State.values()).collect(toMap(state -> state.value, state -> state));
public final int value;
public final boolean requiresAtData;
public final boolean requiresTradeData;
State(int value, boolean requiresAtData, boolean requiresTradeData) {
this.value = value;
this.requiresAtData = requiresAtData;
this.requiresTradeData = requiresTradeData;
}
public static State valueOf(int value) {
return map.get(value);
}
@Override
public String getState() {
return this.name();
}
@Override
public int getStateValue() {
return this.value;
}
}
/** Maximum time Bob waits for his AT creation transaction to be confirmed into a block. (milliseconds) */
private static final long MAX_AT_CONFIRMATION_PERIOD = 24 * 60 * 60 * 1000L; // ms
private static DogecoinACCTv1TradeBot instance;
private final List<String> endStates = Arrays.asList(State.BOB_DONE, State.BOB_REFUNDED, State.ALICE_DONE, State.ALICE_REFUNDING_A, State.ALICE_REFUNDED).stream()
.map(State::name)
.collect(Collectors.toUnmodifiableList());
private DogecoinACCTv1TradeBot() {
}
public static synchronized DogecoinACCTv1TradeBot getInstance() {
if (instance == null)
instance = new DogecoinACCTv1TradeBot();
return instance;
}
@Override
public List<String> getEndStates() {
return this.endStates;
}
/**
* Creates a new trade-bot entry from the "Bob" viewpoint, i.e. OFFERing QORT in exchange for DOGE.
* <p>
* Generates:
* <ul>
* <li>new 'trade' private key</li>
* </ul>
* Derives:
* <ul>
* <li>'native' (as in Qortal) public key, public key hash, address (starting with Q)</li>
* <li>'foreign' (as in Dogecoin) public key, public key hash</li>
* </ul>
* A Qortal AT is then constructed including the following as constants in the 'data segment':
* <ul>
* <li>'native'/Qortal 'trade' address - used as a MESSAGE contact</li>
* <li>'foreign'/Dogecoin public key hash - used by Alice's P2SH scripts to allow redeem</li>
* <li>QORT amount on offer by Bob</li>
* <li>DOGE amount expected in return by Bob (from Alice)</li>
* <li>trading timeout, in case things go wrong and everyone needs to refund</li>
* </ul>
* Returns a DEPLOY_AT transaction that needs to be signed and broadcast to the Qortal network.
* <p>
* Trade-bot will wait for Bob's AT to be deployed before taking next step.
* <p>
* @param repository
* @param tradeBotCreateRequest
* @return raw, unsigned DEPLOY_AT transaction
* @throws DataException
*/
public byte[] createTrade(Repository repository, TradeBotCreateRequest tradeBotCreateRequest) throws DataException {
byte[] tradePrivateKey = TradeBot.generateTradePrivateKey();
byte[] tradeNativePublicKey = TradeBot.deriveTradeNativePublicKey(tradePrivateKey);
byte[] tradeNativePublicKeyHash = Crypto.hash160(tradeNativePublicKey);
String tradeNativeAddress = Crypto.toAddress(tradeNativePublicKey);
byte[] tradeForeignPublicKey = TradeBot.deriveTradeForeignPublicKey(tradePrivateKey);
byte[] tradeForeignPublicKeyHash = Crypto.hash160(tradeForeignPublicKey);
// Convert Dogecoin receiving address into public key hash (we only support P2PKH at this time)
Address dogecoinReceivingAddress;
try {
dogecoinReceivingAddress = Address.fromString(Dogecoin.getInstance().getNetworkParameters(), tradeBotCreateRequest.receivingAddress);
} catch (AddressFormatException e) {
throw new DataException("Unsupported Dogecoin receiving address: " + tradeBotCreateRequest.receivingAddress);
}
if (dogecoinReceivingAddress.getOutputScriptType() != ScriptType.P2PKH)
throw new DataException("Unsupported Dogecoin receiving address: " + tradeBotCreateRequest.receivingAddress);
byte[] dogecoinReceivingAccountInfo = dogecoinReceivingAddress.getHash();
PublicKeyAccount creator = new PublicKeyAccount(repository, tradeBotCreateRequest.creatorPublicKey);
// Deploy AT
long timestamp = NTP.getTime();
byte[] reference = creator.getLastReference();
long fee = 0L;
byte[] signature = null;
BaseTransactionData baseTransactionData = new BaseTransactionData(timestamp, Group.NO_GROUP, reference, creator.getPublicKey(), fee, signature);
String name = "QORT/DOGE ACCT";
String description = "QORT/DOGE cross-chain trade";
String aTType = "ACCT";
String tags = "ACCT QORT DOGE";
byte[] creationBytes = DogecoinACCTv1.buildQortalAT(tradeNativeAddress, tradeForeignPublicKeyHash, tradeBotCreateRequest.qortAmount,
tradeBotCreateRequest.foreignAmount, tradeBotCreateRequest.tradeTimeout);
long amount = tradeBotCreateRequest.fundingQortAmount;
DeployAtTransactionData deployAtTransactionData = new DeployAtTransactionData(baseTransactionData, name, description, aTType, tags, creationBytes, amount, Asset.QORT);
DeployAtTransaction deployAtTransaction = new DeployAtTransaction(repository, deployAtTransactionData);
fee = deployAtTransaction.calcRecommendedFee();
deployAtTransactionData.setFee(fee);
DeployAtTransaction.ensureATAddress(deployAtTransactionData);
String atAddress = deployAtTransactionData.getAtAddress();
TradeBotData tradeBotData = new TradeBotData(tradePrivateKey, DogecoinACCTv1.NAME,
State.BOB_WAITING_FOR_AT_CONFIRM.name(), State.BOB_WAITING_FOR_AT_CONFIRM.value,
creator.getAddress(), atAddress, timestamp, tradeBotCreateRequest.qortAmount,
tradeNativePublicKey, tradeNativePublicKeyHash, tradeNativeAddress,
null, null,
SupportedBlockchain.DOGECOIN.name(),
tradeForeignPublicKey, tradeForeignPublicKeyHash,
tradeBotCreateRequest.foreignAmount, null, null, null, dogecoinReceivingAccountInfo);
TradeBot.updateTradeBotState(repository, tradeBotData, () -> String.format("Built AT %s. Waiting for deployment", atAddress));
// Attempt to backup the trade bot data
TradeBot.backupTradeBotData(repository);
// Return to user for signing and broadcast as we don't have their Qortal private key
try {
return DeployAtTransactionTransformer.toBytes(deployAtTransactionData);
} catch (TransformationException e) {
throw new DataException("Failed to transform DEPLOY_AT transaction?", e);
}
}
/**
* Creates a trade-bot entry from the 'Alice' viewpoint, i.e. matching DOGE to an existing offer.
* <p>
* Requires a chosen trade offer from Bob, passed by <tt>crossChainTradeData</tt>
* and access to a Dogecoin wallet via <tt>xprv58</tt>.
* <p>
* The <tt>crossChainTradeData</tt> contains the current trade offer state
* as extracted from the AT's data segment.
* <p>
* Access to a funded wallet is via a Dogecoin BIP32 hierarchical deterministic key,
* passed via <tt>xprv58</tt>.
* <b>This key will be stored in your node's database</b>
* to allow trade-bot to create/fund the necessary P2SH transactions!
* However, due to the nature of BIP32 keys, it is possible to give the trade-bot
* only a subset of wallet access (see BIP32 for more details).
* <p>
* As an example, the xprv58 can be extracted from a <i>legacy, password-less</i>
* Electrum wallet by going to the console tab and entering:<br>
* <tt>wallet.keystore.xprv</tt><br>
* which should result in a base58 string starting with either 'xprv' (for Dogecoin main-net)
* or 'tprv' (for Dogecoin test-net).
* <p>
* It is envisaged that the value in <tt>xprv58</tt> will actually come from a Qortal-UI-managed wallet.
* <p>
* If sufficient funds are available, <b>this method will actually fund the P2SH-A</b>
* with the Dogecoin amount expected by 'Bob'.
* <p>
* If the Dogecoin transaction is successfully broadcast to the network then
* we also send a MESSAGE to Bob's trade-bot to let them know.
* <p>
* The trade-bot entry is saved to the repository and the cross-chain trading process commences.
* <p>
* @param repository
* @param crossChainTradeData chosen trade OFFER that Alice wants to match
* @param xprv58 funded wallet xprv in base58
* @return ResponseResult.OK if P2SH-A funding transaction successfully broadcast to Dogecoin network, otherwise a result indicating the failure reason
* @throws DataException
*/
public ResponseResult startResponse(Repository repository, ATData atData, ACCT acct, CrossChainTradeData crossChainTradeData, String xprv58, String receivingAddress) throws DataException {
byte[] tradePrivateKey = TradeBot.generateTradePrivateKey();
byte[] secretA = TradeBot.generateSecret();
byte[] hashOfSecretA = Crypto.hash160(secretA);
byte[] tradeNativePublicKey = TradeBot.deriveTradeNativePublicKey(tradePrivateKey);
byte[] tradeNativePublicKeyHash = Crypto.hash160(tradeNativePublicKey);
String tradeNativeAddress = Crypto.toAddress(tradeNativePublicKey);
byte[] tradeForeignPublicKey = TradeBot.deriveTradeForeignPublicKey(tradePrivateKey);
byte[] tradeForeignPublicKeyHash = Crypto.hash160(tradeForeignPublicKey);
byte[] receivingPublicKeyHash = Base58.decode(receivingAddress); // Actually the whole address, not just PKH
// We need to generate lockTime-A: add tradeTimeout to now
long now = NTP.getTime();
int lockTimeA = crossChainTradeData.tradeTimeout * 60 + (int) (now / 1000L);
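// Worked example with assumed values (illustration only): tradeTimeout = 10080 minutes and
// now = 1_630_000_000_000L ms gives lockTimeA = 10080 * 60 + 1_630_000_000 = 1_630_604_800,
// i.e. a refund lock-time roughly seven days ahead of the current time.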
TradeBotData tradeBotData = new TradeBotData(tradePrivateKey, DogecoinACCTv1.NAME,
State.ALICE_WAITING_FOR_AT_LOCK.name(), State.ALICE_WAITING_FOR_AT_LOCK.value,
receivingAddress, crossChainTradeData.qortalAtAddress, now, crossChainTradeData.qortAmount,
tradeNativePublicKey, tradeNativePublicKeyHash, tradeNativeAddress,
secretA, hashOfSecretA,
SupportedBlockchain.DOGECOIN.name(),
tradeForeignPublicKey, tradeForeignPublicKeyHash,
crossChainTradeData.expectedForeignAmount, xprv58, null, lockTimeA, receivingPublicKeyHash);
// Attempt to backup the trade bot data
TradeBot.backupTradeBotData(repository);
// Check we have enough funds via xprv58 to fund P2SH to cover expectedForeignAmount
long p2shFee;
try {
p2shFee = Dogecoin.getInstance().getP2shFee(now);
} catch (ForeignBlockchainException e) {
LOGGER.debug("Couldn't estimate Dogecoin fees?");
return ResponseResult.NETWORK_ISSUE;
}
// Fee for redeem/refund is subtracted from P2SH-A balance.
// Do not include fee for funding transaction as this is covered by buildSpend()
long amountA = crossChainTradeData.expectedForeignAmount + p2shFee /*redeeming/refunding P2SH-A*/;
// P2SH-A to be funded
byte[] redeemScriptBytes = BitcoinyHTLC.buildScript(tradeForeignPublicKeyHash, lockTimeA, crossChainTradeData.creatorForeignPKH, hashOfSecretA);
String p2shAddress = Dogecoin.getInstance().deriveP2shAddress(redeemScriptBytes);
// Build transaction for funding P2SH-A
Transaction p2shFundingTransaction = Dogecoin.getInstance().buildSpend(tradeBotData.getForeignKey(), p2shAddress, amountA);
if (p2shFundingTransaction == null) {
LOGGER.debug("Unable to build P2SH-A funding transaction - lack of funds?");
return ResponseResult.BALANCE_ISSUE;
}
try {
Dogecoin.getInstance().broadcastTransaction(p2shFundingTransaction);
} catch (ForeignBlockchainException e) {
// We couldn't fund P2SH-A at this time
LOGGER.debug("Couldn't broadcast P2SH-A funding transaction?");
return ResponseResult.NETWORK_ISSUE;
}
// Attempt to send MESSAGE to Bob's Qortal trade address
byte[] messageData = DogecoinACCTv1.buildOfferMessage(tradeBotData.getTradeForeignPublicKeyHash(), tradeBotData.getHashOfSecret(), tradeBotData.getLockTimeA());
String messageRecipient = crossChainTradeData.qortalCreatorTradeAddress;
boolean isMessageAlreadySent = repository.getMessageRepository().exists(tradeBotData.getTradeNativePublicKey(), messageRecipient, messageData);
if (!isMessageAlreadySent) {
PrivateKeyAccount sender = new PrivateKeyAccount(repository, tradeBotData.getTradePrivateKey());
MessageTransaction messageTransaction = MessageTransaction.build(repository, sender, Group.NO_GROUP, messageRecipient, messageData, false, false);
messageTransaction.computeNonce();
messageTransaction.sign(sender);
// reset repository state to prevent deadlock
repository.discardChanges();
ValidationResult result = messageTransaction.importAsUnconfirmed();
if (result != ValidationResult.OK) {
LOGGER.warn(() -> String.format("Unable to send MESSAGE to Bob's trade-bot %s: %s", messageRecipient, result.name()));
return ResponseResult.NETWORK_ISSUE;
}
}
TradeBot.updateTradeBotState(repository, tradeBotData, () -> String.format("Funding P2SH-A %s. Messaged Bob. Waiting for AT-lock", p2shAddress));
return ResponseResult.OK;
}
@Override
public boolean canDelete(Repository repository, TradeBotData tradeBotData) throws DataException {
State tradeBotState = State.valueOf(tradeBotData.getStateValue());
if (tradeBotState == null)
return true;
// If the AT doesn't exist then we might as well let the user tidy up
if (!repository.getATRepository().exists(tradeBotData.getAtAddress()))
return true;
switch (tradeBotState) {
case BOB_WAITING_FOR_AT_CONFIRM:
case ALICE_DONE:
case BOB_DONE:
case ALICE_REFUNDED:
case BOB_REFUNDED:
return true;
default:
return false;
}
}
@Override
public void progress(Repository repository, TradeBotData tradeBotData) throws DataException, ForeignBlockchainException {
State tradeBotState = State.valueOf(tradeBotData.getStateValue());
if (tradeBotState == null) {
LOGGER.info(() -> String.format("Trade-bot entry for AT %s has invalid state?", tradeBotData.getAtAddress()));
return;
}
ATData atData = null;
CrossChainTradeData tradeData = null;
if (tradeBotState.requiresAtData) {
// Attempt to fetch AT data
atData = repository.getATRepository().fromATAddress(tradeBotData.getAtAddress());
if (atData == null) {
LOGGER.debug(() -> String.format("Unable to fetch trade AT %s from repository", tradeBotData.getAtAddress()));
return;
}
if (tradeBotState.requiresTradeData) {
tradeData = DogecoinACCTv1.getInstance().populateTradeData(repository, atData);
if (tradeData == null) {
LOGGER.warn(() -> String.format("Unable to fetch ACCT trade data for AT %s from repository", tradeBotData.getAtAddress()));
return;
}
}
}
switch (tradeBotState) {
case BOB_WAITING_FOR_AT_CONFIRM:
handleBobWaitingForAtConfirm(repository, tradeBotData);
break;
case BOB_WAITING_FOR_MESSAGE:
TradeBot.getInstance().updatePresence(repository, tradeBotData, tradeData);
handleBobWaitingForMessage(repository, tradeBotData, atData, tradeData);
break;
case ALICE_WAITING_FOR_AT_LOCK:
TradeBot.getInstance().updatePresence(repository, tradeBotData, tradeData);
handleAliceWaitingForAtLock(repository, tradeBotData, atData, tradeData);
break;
case BOB_WAITING_FOR_AT_REDEEM:
TradeBot.getInstance().updatePresence(repository, tradeBotData, tradeData);
handleBobWaitingForAtRedeem(repository, tradeBotData, atData, tradeData);
break;
case ALICE_DONE:
case BOB_DONE:
break;
case ALICE_REFUNDING_A:
TradeBot.getInstance().updatePresence(repository, tradeBotData, tradeData);
handleAliceRefundingP2shA(repository, tradeBotData, atData, tradeData);
break;
case ALICE_REFUNDED:
case BOB_REFUNDED:
break;
}
}
/**
* Trade-bot is waiting for Bob's AT to deploy.
* <p>
* If AT is deployed, then trade-bot's next step is to wait for MESSAGE from Alice.
*/
private void handleBobWaitingForAtConfirm(Repository repository, TradeBotData tradeBotData) throws DataException {
if (!repository.getATRepository().exists(tradeBotData.getAtAddress())) {
if (NTP.getTime() - tradeBotData.getTimestamp() <= MAX_AT_CONFIRMATION_PERIOD)
return;
// We've waited ages for AT to be confirmed into a block but something has gone awry.
// After this long we assume transaction loss so give up with trade-bot entry too.
tradeBotData.setState(State.BOB_REFUNDED.name());
tradeBotData.setStateValue(State.BOB_REFUNDED.value);
tradeBotData.setTimestamp(NTP.getTime());
// We delete trade-bot entry here instead of saving, hence not using updateTradeBotState()
repository.getCrossChainRepository().delete(tradeBotData.getTradePrivateKey());
repository.saveChanges();
LOGGER.info(() -> String.format("AT %s never confirmed. Giving up on trade", tradeBotData.getAtAddress()));
TradeBot.notifyStateChange(tradeBotData);
return;
}
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_WAITING_FOR_MESSAGE,
() -> String.format("AT %s confirmed ready. Waiting for trade message", tradeBotData.getAtAddress()));
}
/**
* Trade-bot is waiting for MESSAGE from Alice's trade-bot, containing Alice's trade info.
* <p>
* It's possible Bob has cancelled his trade offer, receiving an automatic QORT refund,
* in which case trade-bot is done with this specific trade and finalizes in refunded state.
* <p>
* Assuming trade is still on offer, trade-bot checks the contents of MESSAGE from Alice's trade-bot.
* <p>
* Details from Alice are used to derive P2SH-A address and this is checked for funding balance.
* <p>
* Assuming P2SH-A has at least expected Dogecoin balance,
* Bob's trade-bot constructs a zero-fee, PoW MESSAGE to send to Bob's AT with more trade details.
* <p>
* On processing this MESSAGE, Bob's AT should switch into 'TRADE' mode and only trade with Alice.
* <p>
* Trade-bot's next step is to wait for Alice to redeem the AT, which will allow Bob to
* extract secret-A needed to redeem Alice's P2SH.
* @throws ForeignBlockchainException
*/
private void handleBobWaitingForMessage(Repository repository, TradeBotData tradeBotData,
ATData atData, CrossChainTradeData crossChainTradeData) throws DataException, ForeignBlockchainException {
// If AT has finished then Bob likely cancelled his trade offer
if (atData.getIsFinished()) {
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_REFUNDED,
() -> String.format("AT %s cancelled - trading aborted", tradeBotData.getAtAddress()));
return;
}
Dogecoin dogecoin = Dogecoin.getInstance();
String address = tradeBotData.getTradeNativeAddress();
List<MessageTransactionData> messageTransactionsData = repository.getMessageRepository().getMessagesByParticipants(null, address, null, null, null);
for (MessageTransactionData messageTransactionData : messageTransactionsData) {
if (messageTransactionData.isText())
continue;
// We're expecting: HASH160(secret-A), Alice's Dogecoin pubkeyhash and lockTime-A
byte[] messageData = messageTransactionData.getData();
DogecoinACCTv1.OfferMessageData offerMessageData = DogecoinACCTv1.extractOfferMessageData(messageData);
if (offerMessageData == null)
continue;
byte[] aliceForeignPublicKeyHash = offerMessageData.partnerDogecoinPKH;
byte[] hashOfSecretA = offerMessageData.hashOfSecretA;
int lockTimeA = (int) offerMessageData.lockTimeA;
long messageTimestamp = messageTransactionData.getTimestamp();
int refundTimeout = DogecoinACCTv1.calcRefundTimeout(messageTimestamp, lockTimeA);
// Determine P2SH-A address and confirm funded
byte[] redeemScriptA = BitcoinyHTLC.buildScript(aliceForeignPublicKeyHash, lockTimeA, tradeBotData.getTradeForeignPublicKeyHash(), hashOfSecretA);
String p2shAddressA = dogecoin.deriveP2shAddress(redeemScriptA);
long feeTimestamp = calcFeeTimestamp(lockTimeA, crossChainTradeData.tradeTimeout);
long p2shFee = Dogecoin.getInstance().getP2shFee(feeTimestamp);
final long minimumAmountA = tradeBotData.getForeignAmount() + p2shFee;
BitcoinyHTLC.Status htlcStatusA = BitcoinyHTLC.determineHtlcStatus(dogecoin.getBlockchainProvider(), p2shAddressA, minimumAmountA);
switch (htlcStatusA) {
case UNFUNDED:
case FUNDING_IN_PROGRESS:
// There might be another MESSAGE from someone else with an actually funded P2SH-A...
continue;
case REDEEM_IN_PROGRESS:
case REDEEMED:
// We've already redeemed this?
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_DONE,
() -> String.format("P2SH-A %s already spent? Assuming trade complete", p2shAddressA));
return;
case REFUND_IN_PROGRESS:
case REFUNDED:
// This P2SH-A is burnt, but there might be another MESSAGE from someone else with an actually funded P2SH-A...
continue;
case FUNDED:
// Fall-through out of switch...
break;
}
// Good to go - send MESSAGE to AT
String aliceNativeAddress = Crypto.toAddress(messageTransactionData.getCreatorPublicKey());
// Build outgoing message, padding each part to 32 bytes to make it easier for AT to consume
byte[] outgoingMessageData = DogecoinACCTv1.buildTradeMessage(aliceNativeAddress, aliceForeignPublicKeyHash, hashOfSecretA, lockTimeA, refundTimeout);
String messageRecipient = tradeBotData.getAtAddress();
boolean isMessageAlreadySent = repository.getMessageRepository().exists(tradeBotData.getTradeNativePublicKey(), messageRecipient, outgoingMessageData);
if (!isMessageAlreadySent) {
PrivateKeyAccount sender = new PrivateKeyAccount(repository, tradeBotData.getTradePrivateKey());
MessageTransaction outgoingMessageTransaction = MessageTransaction.build(repository, sender, Group.NO_GROUP, messageRecipient, outgoingMessageData, false, false);
outgoingMessageTransaction.computeNonce();
outgoingMessageTransaction.sign(sender);
// reset repository state to prevent deadlock
repository.discardChanges();
ValidationResult result = outgoingMessageTransaction.importAsUnconfirmed();
if (result != ValidationResult.OK) {
LOGGER.warn(() -> String.format("Unable to send MESSAGE to AT %s: %s", messageRecipient, result.name()));
return;
}
}
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_WAITING_FOR_AT_REDEEM,
() -> String.format("Locked AT %s to %s. Waiting for AT redeem", tradeBotData.getAtAddress(), aliceNativeAddress));
return;
}
}
/**
* Trade-bot is waiting for Bob's AT to switch to TRADE mode and lock trade to Alice only.
* <p>
* It's possible that Bob has cancelled his trade offer in the meantime, or that somehow
* this process has taken so long that we've reached P2SH-A's locktime, or that someone else
* has managed to trade with Bob. In any of these cases, trade-bot switches to begin the refunding process.
* <p>
* Assuming Bob's AT is locked to Alice, trade-bot checks AT's state data to make sure it is correct.
* <p>
* If all is well, trade-bot then redeems AT using Alice's secret-A, releasing Bob's QORT to Alice.
* <p>
* In revealing a valid secret-A, Bob can then redeem the DOGE funds from P2SH-A.
* <p>
* @throws ForeignBlockchainException
*/
private void handleAliceWaitingForAtLock(Repository repository, TradeBotData tradeBotData,
ATData atData, CrossChainTradeData crossChainTradeData) throws DataException, ForeignBlockchainException {
if (aliceUnexpectedState(repository, tradeBotData, atData, crossChainTradeData))
return;
Dogecoin dogecoin = Dogecoin.getInstance();
int lockTimeA = tradeBotData.getLockTimeA();
// Refund P2SH-A if we've passed lockTime-A
if (NTP.getTime() >= lockTimeA * 1000L) {
byte[] redeemScriptA = BitcoinyHTLC.buildScript(tradeBotData.getTradeForeignPublicKeyHash(), lockTimeA, crossChainTradeData.creatorForeignPKH, tradeBotData.getHashOfSecret());
String p2shAddressA = dogecoin.deriveP2shAddress(redeemScriptA);
long feeTimestamp = calcFeeTimestamp(lockTimeA, crossChainTradeData.tradeTimeout);
long p2shFee = Dogecoin.getInstance().getP2shFee(feeTimestamp);
long minimumAmountA = crossChainTradeData.expectedForeignAmount + p2shFee;
BitcoinyHTLC.Status htlcStatusA = BitcoinyHTLC.determineHtlcStatus(dogecoin.getBlockchainProvider(), p2shAddressA, minimumAmountA);
switch (htlcStatusA) {
case UNFUNDED:
case FUNDING_IN_PROGRESS:
case FUNDED:
break;
case REDEEM_IN_PROGRESS:
case REDEEMED:
// Already redeemed?
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_DONE,
() -> String.format("P2SH-A %s already spent? Assuming trade completed", p2shAddressA));
return;
case REFUND_IN_PROGRESS:
case REFUNDED:
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_REFUNDED,
() -> String.format("P2SH-A %s already refunded. Trade aborted", p2shAddressA));
return;
}
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_REFUNDING_A,
() -> atData.getIsFinished()
? String.format("AT %s cancelled. Refunding P2SH-A %s - aborting trade", tradeBotData.getAtAddress(), p2shAddressA)
: String.format("LockTime-A reached, refunding P2SH-A %s - aborting trade", p2shAddressA));
return;
}
// We're waiting for AT to be in TRADE mode
if (crossChainTradeData.mode != AcctMode.TRADING)
return;
// AT is in TRADE mode and locked to us as checked by aliceUnexpectedState() above
// Find our MESSAGE to AT from previous state
List<MessageTransactionData> messageTransactionsData = repository.getMessageRepository().getMessagesByParticipants(tradeBotData.getTradeNativePublicKey(),
crossChainTradeData.qortalCreatorTradeAddress, null, null, null);
if (messageTransactionsData == null || messageTransactionsData.isEmpty()) {
LOGGER.warn(() -> String.format("Unable to find our message to trade creator %s?", crossChainTradeData.qortalCreatorTradeAddress));
return;
}
long recipientMessageTimestamp = messageTransactionsData.get(0).getTimestamp();
int refundTimeout = DogecoinACCTv1.calcRefundTimeout(recipientMessageTimestamp, lockTimeA);
// Our calculated refundTimeout should match AT's refundTimeout
if (refundTimeout != crossChainTradeData.refundTimeout) {
LOGGER.debug(() -> String.format("Trade AT refundTimeout '%d' doesn't match our refundTimeout '%d'", crossChainTradeData.refundTimeout, refundTimeout));
// We'll eventually refund
return;
}
// We're good to redeem AT
// Send 'redeem' MESSAGE to AT using secret-A
byte[] secretA = tradeBotData.getSecret();
String qortalReceivingAddress = Base58.encode(tradeBotData.getReceivingAccountInfo()); // Actually contains whole address, not just PKH
byte[] messageData = DogecoinACCTv1.buildRedeemMessage(secretA, qortalReceivingAddress);
String messageRecipient = tradeBotData.getAtAddress();
boolean isMessageAlreadySent = repository.getMessageRepository().exists(tradeBotData.getTradeNativePublicKey(), messageRecipient, messageData);
if (!isMessageAlreadySent) {
PrivateKeyAccount sender = new PrivateKeyAccount(repository, tradeBotData.getTradePrivateKey());
MessageTransaction messageTransaction = MessageTransaction.build(repository, sender, Group.NO_GROUP, messageRecipient, messageData, false, false);
messageTransaction.computeNonce();
messageTransaction.sign(sender);
// Reset repository state to prevent deadlock
repository.discardChanges();
ValidationResult result = messageTransaction.importAsUnconfirmed();
if (result != ValidationResult.OK) {
LOGGER.warn(() -> String.format("Unable to send MESSAGE to AT %s: %s", messageRecipient, result.name()));
return;
}
}
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_DONE,
() -> String.format("Redeeming AT %s. Funds should arrive at %s",
tradeBotData.getAtAddress(), qortalReceivingAddress));
}
/**
* Trade-bot is waiting for Alice to redeem Bob's AT, thus revealing secret-A which is required to spend the DOGE funds from P2SH-A.
* <p>
* It's possible that Bob's AT has reached its trading timeout and automatically refunded QORT back to Bob, in which case
* trade-bot is done with this specific trade and finalizes in the refunded state.
* <p>
* Assuming trade-bot can extract a valid secret-A from Alice's MESSAGE, it uses that to redeem the DOGE funds from P2SH-A
* to Bob's 'foreign'/Dogecoin trade legacy-format address, as derived from the trade private key.
* <p>
* (This could potentially be 'improved' to send DOGE to any address of Bob's choosing by changing the transaction output).
* <p>
* If trade-bot successfully broadcasts the transaction, then this specific trade is done.
* @throws ForeignBlockchainException
*/
private void handleBobWaitingForAtRedeem(Repository repository, TradeBotData tradeBotData,
ATData atData, CrossChainTradeData crossChainTradeData) throws DataException, ForeignBlockchainException {
// AT should be 'finished' once Alice has redeemed QORT funds
if (!atData.getIsFinished())
// Not finished yet
return;
// If AT is REFUNDED or CANCELLED then something has gone wrong
if (crossChainTradeData.mode == AcctMode.REFUNDED || crossChainTradeData.mode == AcctMode.CANCELLED) {
// Alice hasn't redeemed the QORT, so there is no point in trying to redeem the DOGE
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_REFUNDED,
() -> String.format("AT %s has auto-refunded - trade aborted", tradeBotData.getAtAddress()));
return;
}
byte[] secretA = DogecoinACCTv1.getInstance().findSecretA(repository, crossChainTradeData);
if (secretA == null) {
LOGGER.debug(() -> String.format("Unable to find secret-A from redeem message to AT %s?", tradeBotData.getAtAddress()));
return;
}
// Use secret-A to redeem P2SH-A
Dogecoin dogecoin = Dogecoin.getInstance();
byte[] receivingAccountInfo = tradeBotData.getReceivingAccountInfo();
int lockTimeA = crossChainTradeData.lockTimeA;
byte[] redeemScriptA = BitcoinyHTLC.buildScript(crossChainTradeData.partnerForeignPKH, lockTimeA, crossChainTradeData.creatorForeignPKH, crossChainTradeData.hashOfSecretA);
String p2shAddressA = dogecoin.deriveP2shAddress(redeemScriptA);
// Fee for redeem/refund is subtracted from P2SH-A balance.
long feeTimestamp = calcFeeTimestamp(lockTimeA, crossChainTradeData.tradeTimeout);
long p2shFee = Dogecoin.getInstance().getP2shFee(feeTimestamp);
long minimumAmountA = crossChainTradeData.expectedForeignAmount + p2shFee;
BitcoinyHTLC.Status htlcStatusA = BitcoinyHTLC.determineHtlcStatus(dogecoin.getBlockchainProvider(), p2shAddressA, minimumAmountA);
switch (htlcStatusA) {
case UNFUNDED:
case FUNDING_IN_PROGRESS:
// P2SH-A suddenly not funded? Our best bet at this point is to hope for AT auto-refund
return;
case REDEEM_IN_PROGRESS:
case REDEEMED:
// Double-check that we have redeemed P2SH-A...
break;
case REFUND_IN_PROGRESS:
case REFUNDED:
// Wait for AT to auto-refund
return;
case FUNDED: {
Coin redeemAmount = Coin.valueOf(crossChainTradeData.expectedForeignAmount);
ECKey redeemKey = ECKey.fromPrivate(tradeBotData.getTradePrivateKey());
List<TransactionOutput> fundingOutputs = dogecoin.getUnspentOutputs(p2shAddressA);
Transaction p2shRedeemTransaction = BitcoinyHTLC.buildRedeemTransaction(dogecoin.getNetworkParameters(), redeemAmount, redeemKey,
fundingOutputs, redeemScriptA, secretA, receivingAccountInfo);
dogecoin.broadcastTransaction(p2shRedeemTransaction);
break;
}
}
String receivingAddress = dogecoin.pkhToAddress(receivingAccountInfo);
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_DONE,
() -> String.format("P2SH-A %s redeemed. Funds should arrive at %s", tradeBotData.getAtAddress(), receivingAddress));
}
/**
* Trade-bot is attempting to refund P2SH-A.
* @throws ForeignBlockchainException
*/
private void handleAliceRefundingP2shA(Repository repository, TradeBotData tradeBotData,
ATData atData, CrossChainTradeData crossChainTradeData) throws DataException, ForeignBlockchainException {
int lockTimeA = tradeBotData.getLockTimeA();
// We can't refund P2SH-A until lockTime-A has passed
if (NTP.getTime() <= lockTimeA * 1000L)
return;
Dogecoin dogecoin = Dogecoin.getInstance();
// We can't refund P2SH-A until median block time has passed lockTime-A (see BIP113)
int medianBlockTime = dogecoin.getMedianBlockTime();
if (medianBlockTime <= lockTimeA)
return;
byte[] redeemScriptA = BitcoinyHTLC.buildScript(tradeBotData.getTradeForeignPublicKeyHash(), lockTimeA, crossChainTradeData.creatorForeignPKH, tradeBotData.getHashOfSecret());
String p2shAddressA = dogecoin.deriveP2shAddress(redeemScriptA);
// Fee for redeem/refund is subtracted from P2SH-A balance.
long feeTimestamp = calcFeeTimestamp(lockTimeA, crossChainTradeData.tradeTimeout);
long p2shFee = Dogecoin.getInstance().getP2shFee(feeTimestamp);
long minimumAmountA = crossChainTradeData.expectedForeignAmount + p2shFee;
BitcoinyHTLC.Status htlcStatusA = BitcoinyHTLC.determineHtlcStatus(dogecoin.getBlockchainProvider(), p2shAddressA, minimumAmountA);
switch (htlcStatusA) {
case UNFUNDED:
case FUNDING_IN_PROGRESS:
// Still waiting for P2SH-A to be funded...
return;
case REDEEM_IN_PROGRESS:
case REDEEMED:
// Too late!
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_DONE,
() -> String.format("P2SH-A %s already spent!", p2shAddressA));
return;
case REFUND_IN_PROGRESS:
case REFUNDED:
break;
case FUNDED:{
Coin refundAmount = Coin.valueOf(crossChainTradeData.expectedForeignAmount);
ECKey refundKey = ECKey.fromPrivate(tradeBotData.getTradePrivateKey());
List<TransactionOutput> fundingOutputs = dogecoin.getUnspentOutputs(p2shAddressA);
// Determine receive address for refund
String receiveAddress = dogecoin.getUnusedReceiveAddress(tradeBotData.getForeignKey());
Address receiving = Address.fromString(dogecoin.getNetworkParameters(), receiveAddress);
Transaction p2shRefundTransaction = BitcoinyHTLC.buildRefundTransaction(dogecoin.getNetworkParameters(), refundAmount, refundKey,
fundingOutputs, redeemScriptA, lockTimeA, receiving.getHash());
dogecoin.broadcastTransaction(p2shRefundTransaction);
break;
}
}
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_REFUNDED,
() -> String.format("LockTime-A reached. Refunded P2SH-A %s. Trade aborted", p2shAddressA));
}
/**
* Returns true if Alice finds AT unexpectedly cancelled, refunded, redeemed or locked to someone else.
* <p>
* Will automatically update trade-bot state to <tt>ALICE_REFUNDING_A</tt> or <tt>ALICE_DONE</tt> as necessary.
*
* @throws DataException
* @throws ForeignBlockchainException
*/
private boolean aliceUnexpectedState(Repository repository, TradeBotData tradeBotData,
ATData atData, CrossChainTradeData crossChainTradeData) throws DataException, ForeignBlockchainException {
// This is OK
if (!atData.getIsFinished() && crossChainTradeData.mode == AcctMode.OFFERING)
return false;
boolean isAtLockedToUs = tradeBotData.getTradeNativeAddress().equals(crossChainTradeData.qortalPartnerAddress);
if (!atData.getIsFinished() && crossChainTradeData.mode == AcctMode.TRADING)
if (isAtLockedToUs) {
// AT is trading with us - OK
return false;
} else {
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_REFUNDING_A,
() -> String.format("AT %s trading with someone else: %s. Refunding & aborting trade", tradeBotData.getAtAddress(), crossChainTradeData.qortalPartnerAddress));
return true;
}
if (atData.getIsFinished() && crossChainTradeData.mode == AcctMode.REDEEMED && isAtLockedToUs) {
// We've redeemed already?
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_DONE,
() -> String.format("AT %s already redeemed by us. Trade completed", tradeBotData.getAtAddress()));
} else {
// Any other state is not good, so start defensive refund
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_REFUNDING_A,
() -> String.format("AT %s cancelled/refunded/redeemed by someone else/invalid state. Refunding & aborting trade", tradeBotData.getAtAddress()));
}
return true;
}
private long calcFeeTimestamp(int lockTimeA, int tradeTimeout) {
return (lockTimeA - tradeTimeout * 60) * 1000L;
}
}
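A note on the calcFeeTimestamp() helper above: lockTimeA is a UNIX timestamp in seconds, tradeTimeout is in minutes, and getP2shFee() expects milliseconds, so the helper rewinds lockTime-A by the trade timeout to approximate when the trade offer was made. A minimal self-contained sketch of the same arithmetic, using invented values:
class FeeTimestampExample {
// Same arithmetic as calcFeeTimestamp(): lockTimeA in seconds, tradeTimeout in minutes, result in milliseconds.
static long calcFeeTimestamp(int lockTimeA, int tradeTimeout) {
return (lockTimeA - tradeTimeout * 60) * 1000L;
}
public static void main(String[] args) {
int lockTimeA = 1_630_000_000; // hypothetical lockTime-A, seconds since epoch
int tradeTimeout = 1440; // hypothetical 24-hour trade timeout, in minutes
// 1_630_000_000 - 86_400 = 1_629_913_600 seconds -> 1_629_913_600_000 ms,
// i.e. roughly when the offer was made - the value passed to getP2shFee().
System.out.println(calcFeeTimestamp(lockTimeA, tradeTimeout));
}
}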

View File

@@ -387,15 +387,6 @@ public class LitecoinACCTv1TradeBot implements AcctTradeBot {
atData = repository.getATRepository().fromATAddress(tradeBotData.getAtAddress());
if (atData == null) {
LOGGER.debug(() -> String.format("Unable to fetch trade AT %s from repository", tradeBotData.getAtAddress()));
// If it has been over 24 hours since we last updated this trade-bot entry then assume AT is never coming back
// and so wipe the trade-bot entry
if (tradeBotData.getTimestamp() + MAX_AT_CONFIRMATION_PERIOD < NTP.getTime()) {
LOGGER.info(() -> String.format("AT %s has been gone for too long - deleting trade-bot entry", tradeBotData.getAtAddress()));
repository.getCrossChainRepository().delete(tradeBotData.getTradePrivateKey());
repository.saveChanges();
}
return;
}
@@ -725,16 +716,16 @@ public class LitecoinACCTv1TradeBot implements AcctTradeBot {
// Not finished yet
return;
// If AT is not REDEEMED then something has gone wrong
if (crossChainTradeData.mode != AcctMode.REDEEMED) {
// Not redeemed so must be refunded/cancelled
// If AT is REFUNDED or CANCELLED then something has gone wrong
if (crossChainTradeData.mode == AcctMode.REFUNDED || crossChainTradeData.mode == AcctMode.CANCELLED) {
// Alice hasn't redeemed the QORT, so there is no point in trying to redeem the LTC
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_REFUNDED,
() -> String.format("AT %s has auto-refunded - trade aborted", tradeBotData.getAtAddress()));
return;
}
byte[] secretA = LitecoinACCTv1.findSecretA(repository, crossChainTradeData);
byte[] secretA = LitecoinACCTv1.getInstance().findSecretA(repository, crossChainTradeData);
if (secretA == null) {
LOGGER.debug(() -> String.format("Unable to find secret-A from redeem message to AT %s?", tradeBotData.getAtAddress()));
return;

View File

@@ -17,11 +17,7 @@ import org.qortal.account.PrivateKeyAccount;
import org.qortal.api.model.crosschain.TradeBotCreateRequest;
import org.qortal.controller.Controller;
import org.qortal.controller.tradebot.AcctTradeBot.ResponseResult;
import org.qortal.crosschain.ACCT;
import org.qortal.crosschain.BitcoinACCTv1;
import org.qortal.crosschain.ForeignBlockchainException;
import org.qortal.crosschain.LitecoinACCTv1;
import org.qortal.crosschain.SupportedBlockchain;
import org.qortal.crosschain.*;
import org.qortal.data.at.ATData;
import org.qortal.data.crosschain.CrossChainTradeData;
import org.qortal.data.crosschain.TradeBotData;
@@ -80,6 +76,7 @@ public class TradeBot implements Listener {
static {
acctTradeBotSuppliers.put(BitcoinACCTv1.class, BitcoinACCTv1TradeBot::getInstance);
acctTradeBotSuppliers.put(LitecoinACCTv1.class, LitecoinACCTv1TradeBot::getInstance);
acctTradeBotSuppliers.put(DogecoinACCTv1.class, DogecoinACCTv1TradeBot::getInstance);
}
private static TradeBot instance;
@@ -272,15 +269,9 @@ public class TradeBot implements Listener {
// Attempt to back up the trade bot data. This is an optional step and doesn't impact trading, so don't throw an exception on failure
try {
LOGGER.info("About to backup trade bot data...");
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
blockchainLock.lockInterruptibly();
try {
repository.exportNodeLocalData(true);
} finally {
blockchainLock.unlock();
}
} catch (InterruptedException | DataException e) {
LOGGER.info(String.format("Failed to obtain blockchain lock when exporting trade bot data: %s", e.getMessage()));
repository.exportNodeLocalData();
} catch (DataException e) {
LOGGER.info(String.format("Repository issue when exporting trade bot data: %s", e.getMessage()));
}
}

View File

@@ -20,4 +20,6 @@ public interface ACCT {
public byte[] buildCancelMessage(String creatorQortalAddress);
public byte[] findSecretA(Repository repository, CrossChainTradeData crossChainTradeData) throws DataException;
}

View File

@@ -42,35 +42,36 @@ public class Bitcoin extends Bitcoiny {
public Collection<ElectrumX.Server> getServers() {
return Arrays.asList(
// Servers chosen on NO BASIS WHATSOEVER from various sources!
new Server("enode.duckdns.org", Server.ConnectionType.SSL, 50002),
new Server("electrumx.ml", Server.ConnectionType.SSL, 50002),
new Server("electrum.bitkoins.nl", Server.ConnectionType.SSL, 50512),
new Server("btc.electroncash.dk", Server.ConnectionType.SSL, 60002),
new Server("electrumx.electricnewyear.net", Server.ConnectionType.SSL, 50002),
new Server("dxm.no-ip.biz", Server.ConnectionType.TCP, 50001),
new Server("kirsche.emzy.de", Server.ConnectionType.TCP, 50001),
new Server("2AZZARITA.hopto.org", Server.ConnectionType.TCP, 50001),
new Server("xtrum.com", Server.ConnectionType.TCP, 50001),
new Server("electrum.srvmin.network", Server.ConnectionType.TCP, 50001),
new Server("electrumx.alexridevski.net", Server.ConnectionType.TCP, 50001),
new Server("bitcoin.lukechilds.co", Server.ConnectionType.TCP, 50001),
new Server("electrum.poiuty.com", Server.ConnectionType.TCP, 50001),
new Server("horsey.cryptocowboys.net", Server.ConnectionType.TCP, 50001),
new Server("128.0.190.26", Server.ConnectionType.SSL, 50002),
new Server("hodlers.beer", Server.ConnectionType.SSL, 50002),
new Server("electrumx.erbium.eu", Server.ConnectionType.TCP, 50001),
new Server("electrumx.erbium.eu", Server.ConnectionType.SSL, 50002),
new Server("btc.lastingcoin.net", Server.ConnectionType.SSL, 50002),
new Server("electrum.bitaroo.net", Server.ConnectionType.SSL, 50002),
new Server("bitcoin.grey.pw", Server.ConnectionType.SSL, 50002),
new Server("2electrumx.hopto.me", Server.ConnectionType.SSL, 56022),
new Server("185.64.116.15", Server.ConnectionType.SSL, 50002),
new Server("kirsche.emzy.de", Server.ConnectionType.SSL, 50002),
new Server("alviss.coinjoined.com", Server.ConnectionType.SSL, 50002),
new Server("electrum.emzy.de", Server.ConnectionType.SSL, 50002),
new Server("electrum.emzy.de", Server.ConnectionType.TCP, 50001),
new Server("electrum-server.ninja", Server.ConnectionType.TCP, 50081),
new Server("bitcoin.electrumx.multicoin.co", Server.ConnectionType.TCP, 50001),
new Server("esx.geekhosters.com", Server.ConnectionType.TCP, 50001),
new Server("bitcoin.grey.pw", Server.ConnectionType.TCP, 50003),
new Server("exs.ignorelist.com", Server.ConnectionType.TCP, 50001),
new Server("electrum.coinext.com.br", Server.ConnectionType.TCP, 50001),
new Server("bitcoin.aranguren.org", Server.ConnectionType.TCP, 50001),
new Server("skbxmit.coinjoined.com", Server.ConnectionType.TCP, 50001),
new Server("alviss.coinjoined.com", Server.ConnectionType.TCP, 50001),
new Server("electrum2.privateservers.network", Server.ConnectionType.TCP, 50001),
new Server("electrumx.schulzemic.net", Server.ConnectionType.TCP, 50001),
new Server("bitcoins.sk", Server.ConnectionType.TCP, 56001),
new Server("node.mendonca.xyz", Server.ConnectionType.TCP, 50001),
new Server("bitcoin.aranguren.org", Server.ConnectionType.TCP, 50001));
new Server("vmd71287.contaboserver.net", Server.ConnectionType.SSL, 50002),
new Server("btc.litepay.ch", Server.ConnectionType.SSL, 50002),
new Server("electrum.stippy.com", Server.ConnectionType.SSL, 50002),
new Server("xtrum.com", Server.ConnectionType.SSL, 50002),
new Server("electrum.acinq.co", Server.ConnectionType.SSL, 50002),
new Server("electrum2.taborsky.cz", Server.ConnectionType.SSL, 50002),
new Server("vmd63185.contaboserver.net", Server.ConnectionType.SSL, 50002),
new Server("electrum2.privateservers.network", Server.ConnectionType.SSL, 50002),
new Server("electrumx.alexridevski.net", Server.ConnectionType.SSL, 50002),
new Server("192.166.219.200", Server.ConnectionType.SSL, 50002),
new Server("2ex.digitaleveryware.com", Server.ConnectionType.SSL, 50002),
new Server("dxm.no-ip.biz", Server.ConnectionType.SSL, 50002),
new Server("caleb.vegas", Server.ConnectionType.SSL, 50002),
new Server("ecdsa.net", Server.ConnectionType.SSL, 110),
new Server("electrum.hsmiths.com", Server.ConnectionType.SSL, 995),
new Server("elec.luggs.co", Server.ConnectionType.SSL, 443),
new Server("btc.smsys.me", Server.ConnectionType.SSL, 995));
}
@Override
@@ -96,10 +97,8 @@ public class Bitcoin extends Bitcoiny {
@Override
public Collection<ElectrumX.Server> getServers() {
return Arrays.asList(
new Server("electrum.blockstream.info", Server.ConnectionType.TCP, 60001),
new Server("electrum.blockstream.info", Server.ConnectionType.SSL, 60002),
new Server("tn.not.fyi", Server.ConnectionType.SSL, 55002),
new Server("electrumx-test.1209k.com", Server.ConnectionType.SSL, 50002),
new Server("testnet.qtornado.com", Server.ConnectionType.TCP, 51001),
new Server("testnet.qtornado.com", Server.ConnectionType.SSL, 51002),
new Server("testnet.aranguren.org", Server.ConnectionType.TCP, 51001),
new Server("testnet.aranguren.org", Server.ConnectionType.SSL, 51002),

View File

@@ -872,7 +872,8 @@ public class BitcoinACCTv1 implements ACCT {
return (int) ((lockTimeA + (offerMessageTimestamp / 1000L)) / 2L);
}
public static byte[] findSecretA(Repository repository, CrossChainTradeData crossChainTradeData) throws DataException {
@Override
public byte[] findSecretA(Repository repository, CrossChainTradeData crossChainTradeData) throws DataException {
String atAddress = crossChainTradeData.qortalAtAddress;
String redeemerAddress = crossChainTradeData.qortalPartnerAddress;

View File

@@ -91,7 +91,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
return this.params;
}
// Interface obligations
// Interface obligations
@Override
public boolean isValidAddress(String address) {
@@ -169,9 +169,14 @@ public abstract class Bitcoiny implements ForeignBlockchain {
return this.bitcoinjContext.getFeePerKb();
}
/** Returns minimum order size in sats. To be overridden for coins that need to restrict order size. */
public long getMinimumOrderAmount() {
return 0L;
}
/**
* Returns fixed P2SH spending fee, in sats per 1000bytes, optionally for historic timestamp.
*
*
* @param timestamp optional milliseconds since epoch, or null for 'now'
* @return sats per 1000bytes
* @throws ForeignBlockchainException if something went wrong
@@ -271,7 +276,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
/**
* Returns bitcoinj transaction sending <tt>amount</tt> to <tt>recipient</tt>.
*
*
* @param xprv58 BIP32 private key
* @param recipient P2PKH address
* @param amount unscaled amount
@@ -303,7 +308,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
/**
* Returns bitcoinj transaction sending <tt>amount</tt> to <tt>recipient</tt> using default fees.
*
*
* @param xprv58 BIP32 private key
* @param recipient P2PKH address
* @param amount unscaled amount
@@ -332,7 +337,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
return balance.value;
}
public List<BitcoinyTransaction> getWalletTransactions(String key58) throws ForeignBlockchainException {
public List<SimpleTransaction> getWalletTransactions(String key58) throws ForeignBlockchainException {
Context.propagate(bitcoinjContext);
Wallet wallet = walletFromDeterministicKey58(key58);
@@ -344,7 +349,12 @@ public abstract class Bitcoiny implements ForeignBlockchain {
List<DeterministicKey> keys = new ArrayList<>(keyChain.getLeafKeys());
Set<BitcoinyTransaction> walletTransactions = new HashSet<>();
Set<String> keySet = new HashSet<>();
// Set the number of consecutive empty batches required before giving up
final int numberOfAdditionalBatchesToSearch = 5;
int unusedCounter = 0;
int ki = 0;
do {
boolean areAllKeysUnused = true;
@@ -354,6 +364,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
// Check for transactions
Address address = Address.fromKey(this.params, dKey, ScriptType.P2PKH);
keySet.add(address.toString());
byte[] script = ScriptBuilder.createOutputScript(address).getProgram();
// Ask for transaction history - if it's empty then key has never been used
@@ -367,9 +378,19 @@ public abstract class Bitcoiny implements ForeignBlockchain {
}
}
if (areAllKeysUnused)
// No transactions for this batch of keys so assume we're done searching.
break;
if (areAllKeysUnused) {
// No transactions
if (unusedCounter >= numberOfAdditionalBatchesToSearch) {
// ... and we've hit our search limit
break;
}
// We haven't hit our search limit yet so increment the counter and keep looking
unusedCounter++;
}
else {
// Some keys in this batch were used, so reset the counter
unusedCounter = 0;
}
// Generate some more keys
keys.addAll(generateMoreKeys(keyChain));
@@ -377,9 +398,56 @@ public abstract class Bitcoiny implements ForeignBlockchain {
// Process new keys
} while (true);
Comparator<BitcoinyTransaction> newestTimestampFirstComparator = Comparator.comparingInt((BitcoinyTransaction txn) -> txn.timestamp).reversed();
Comparator<SimpleTransaction> newestTimestampFirstComparator = Comparator.comparingInt(SimpleTransaction::getTimestamp).reversed();
return walletTransactions.stream().sorted(newestTimestampFirstComparator).collect(Collectors.toList());
return walletTransactions.stream().map(t -> convertToSimpleTransaction(t, keySet)).sorted(newestTimestampFirstComparator).collect(Collectors.toList());
}
protected SimpleTransaction convertToSimpleTransaction(BitcoinyTransaction t, Set<String> keySet) {
long amount = 0;
long total = 0L;
long totalInputAmount = 0L;
long totalOutputAmount = 0L;
List<SimpleTransaction.Input> inputs = new ArrayList<>();
List<SimpleTransaction.Output> outputs = new ArrayList<>();
for (BitcoinyTransaction.Input input : t.inputs) {
try {
BitcoinyTransaction t2 = getTransaction(input.outputTxHash);
List<String> senders = t2.outputs.get(input.outputVout).addresses;
long inputAmount = t2.outputs.get(input.outputVout).value;
totalInputAmount += inputAmount;
for (String sender : senders) {
boolean addressInWallet = false;
if (keySet.contains(sender)) {
total += inputAmount;
addressInWallet = true;
}
inputs.add(new SimpleTransaction.Input(sender, inputAmount, addressInWallet));
}
} catch (ForeignBlockchainException e) {
LOGGER.trace("Failed to retrieve transaction information {}", input.outputTxHash);
}
}
if (t.outputs != null && !t.outputs.isEmpty()) {
for (BitcoinyTransaction.Output output : t.outputs) {
for (String address : output.addresses) {
boolean addressInWallet = false;
if (keySet.contains(address)) {
if (total > 0L) {
amount -= (total - output.value);
} else {
amount += output.value;
}
addressInWallet = true;
}
outputs.add(new SimpleTransaction.Output(address, output.value, addressInWallet));
}
totalOutputAmount += output.value;
}
}
long fee = totalInputAmount - totalOutputAmount;
return new SimpleTransaction(t.txHash, t.timestamp, amount, fee, inputs, outputs);
}
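// Worked example (hypothetical figures, any Bitcoiny coin) of the net amount/fee logic above:
// - the wallet owns a single 500_000_000-sat input, so total = totalInputAmount = 500_000_000
// - output 0 pays 100_000_000 sats to an external address: not in keySet, 'amount' is unchanged
// - output 1 returns 390_000_000 sats of change to a wallet address: total > 0,
//   so amount -= (500_000_000 - 390_000_000) = -110_000_000
// - totalOutputAmount = 490_000_000, so fee = 500_000_000 - 490_000_000 = 10_000_000 sats
// The resulting SimpleTransaction therefore reports amount = -110_000_000 (the payment plus
// the fee leaving the wallet) and fee = 10_000_000.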
/**
@@ -421,7 +489,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
* If there are no unspent outputs then either:
* a) all the outputs have been spent
* b) address has never been used
*
*
* For case (a) we want to remember not to check this address (key) again.
*/
@@ -501,7 +569,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
* If there are no unspent outputs then either:
* a) all the outputs have been spent
* b) address has never been used
*
*
* For case (a) we want to remember not to check this address (key) again.
*/

View File

@@ -10,7 +10,6 @@ import java.util.Map;
import java.util.function.Function;
import org.bitcoinj.core.Address;
import org.bitcoinj.core.Base58;
import org.bitcoinj.core.Coin;
import org.bitcoinj.core.ECKey;
import org.bitcoinj.core.LegacyAddress;
@@ -25,6 +24,7 @@ import org.bitcoinj.script.ScriptBuilder;
import org.bitcoinj.script.ScriptChunk;
import org.bitcoinj.script.ScriptOpCodes;
import org.qortal.crypto.Crypto;
import org.qortal.utils.Base58;
import org.qortal.utils.BitTwiddling;
import com.google.common.hash.HashCode;

View File

@@ -0,0 +1,171 @@
package org.qortal.crosschain;
import org.bitcoinj.core.Coin;
import org.bitcoinj.core.Context;
import org.bitcoinj.core.NetworkParameters;
import org.libdohj.params.DogecoinMainNetParams;
//import org.libdohj.params.DogecoinRegTestParams;
import org.libdohj.params.DogecoinTestNet3Params;
import org.qortal.crosschain.ElectrumX.Server;
import org.qortal.crosschain.ElectrumX.Server.ConnectionType;
import org.qortal.settings.Settings;
import java.util.Arrays;
import java.util.Collection;
import java.util.EnumMap;
import java.util.Map;
public class Dogecoin extends Bitcoiny {
public static final String CURRENCY_CODE = "DOGE";
private static final Coin DEFAULT_FEE_PER_KB = Coin.valueOf(500000000); // 5 DOGE per 1000 bytes
private static final long MINIMUM_ORDER_AMOUNT = 300000000L; // 3 DOGE minimum order. The RPC dust threshold is around 2 DOGE
// Temporary values until a dynamic fee system is written.
private static final long MAINNET_FEE = 110000000L;
private static final long NON_MAINNET_FEE = 10000L; // TODO: calibrate this
private static final Map<ConnectionType, Integer> DEFAULT_ELECTRUMX_PORTS = new EnumMap<>(ConnectionType.class);
static {
DEFAULT_ELECTRUMX_PORTS.put(ConnectionType.TCP, 50001);
DEFAULT_ELECTRUMX_PORTS.put(ConnectionType.SSL, 50002);
}
public enum DogecoinNet {
MAIN {
@Override
public NetworkParameters getParams() {
return DogecoinMainNetParams.get();
}
@Override
public Collection<Server> getServers() {
return Arrays.asList(
// Servers chosen on NO BASIS WHATSOEVER from various sources!
new Server("electrum1.cipig.net", ConnectionType.TCP, 10060),
new Server("electrum2.cipig.net", ConnectionType.TCP, 10060),
new Server("electrum3.cipig.net", ConnectionType.TCP, 10060));
// TODO: add more mainnet servers. It's too centralized.
}
@Override
public String getGenesisHash() {
return "1a91e3dace36e2be3bf030a65679fe821aa1d6ef92e7c9902eb318182c355691";
}
@Override
public long getP2shFee(Long timestamp) {
// TODO: This will need to be replaced with something better in the near future!
return MAINNET_FEE;
}
},
TEST3 {
@Override
public NetworkParameters getParams() {
return DogecoinTestNet3Params.get();
}
@Override
public Collection<Server> getServers() {
return Arrays.asList(); // TODO: find testnet servers
}
@Override
public String getGenesisHash() {
return "4966625a4b2851d9fdee139e56211a0d88575f59ed816ff5e6a63deb4e3e29a0";
}
@Override
public long getP2shFee(Long timestamp) {
return NON_MAINNET_FEE;
}
},
REGTEST {
@Override
public NetworkParameters getParams() {
return null; // TODO: DogecoinRegTestParams.get();
}
@Override
public Collection<Server> getServers() {
return Arrays.asList(
new Server("localhost", ConnectionType.TCP, 50001),
new Server("localhost", ConnectionType.SSL, 50002));
}
@Override
public String getGenesisHash() {
// This is unique to each regtest instance
return null;
}
@Override
public long getP2shFee(Long timestamp) {
return NON_MAINNET_FEE;
}
};
public abstract NetworkParameters getParams();
public abstract Collection<Server> getServers();
public abstract String getGenesisHash();
public abstract long getP2shFee(Long timestamp) throws ForeignBlockchainException;
}
private static Dogecoin instance;
private final DogecoinNet dogecoinNet;
// Constructors and instance
private Dogecoin(DogecoinNet dogecoinNet, BitcoinyBlockchainProvider blockchain, Context bitcoinjContext, String currencyCode) {
super(blockchain, bitcoinjContext, currencyCode);
this.dogecoinNet = dogecoinNet;
LOGGER.info(() -> String.format("Starting Dogecoin support using %s", this.dogecoinNet.name()));
}
public static synchronized Dogecoin getInstance() {
if (instance == null) {
DogecoinNet dogecoinNet = Settings.getInstance().getDogecoinNet();
BitcoinyBlockchainProvider electrumX = new ElectrumX("Dogecoin-" + dogecoinNet.name(), dogecoinNet.getGenesisHash(), dogecoinNet.getServers(), DEFAULT_ELECTRUMX_PORTS);
Context bitcoinjContext = new Context(dogecoinNet.getParams());
instance = new Dogecoin(dogecoinNet, electrumX, bitcoinjContext, CURRENCY_CODE);
}
return instance;
}
// Getters & setters
public static synchronized void resetForTesting() {
instance = null;
}
// Actual useful methods for use by other classes
@Override
public Coin getFeePerKb() {
return DEFAULT_FEE_PER_KB;
}
@Override
public long getMinimumOrderAmount() {
return MINIMUM_ORDER_AMOUNT;
}
/**
* Returns estimated DOGE fee, in sats per 1000bytes, optionally for historic timestamp.
*
* @param timestamp optional milliseconds since epoch, or null for 'now'
* @return sats per 1000bytes, or throws ForeignBlockchainException if something went wrong
*/
@Override
public long getP2shFee(Long timestamp) throws ForeignBlockchainException {
return this.dogecoinNet.getP2shFee(timestamp);
}
}

View File

@@ -0,0 +1,855 @@
package org.qortal.crosschain;
import com.google.common.hash.HashCode;
import com.google.common.primitives.Bytes;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.ciyam.at.*;
import org.qortal.account.Account;
import org.qortal.asset.Asset;
import org.qortal.at.QortalFunctionCode;
import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATData;
import org.qortal.data.at.ATStateData;
import org.qortal.data.crosschain.CrossChainTradeData;
import org.qortal.data.transaction.MessageTransactionData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.utils.Base58;
import org.qortal.utils.BitTwiddling;
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;
import static org.ciyam.at.OpCode.calcOffset;
/**
* Cross-chain trade AT
*
* <p>
* <ul>
* <li>Bob generates Dogecoin & Qortal 'trade' keys
* <ul>
* <li>private key required to sign P2SH redeem tx</li>
* <li>private key could be used to create 'secret' (e.g. double-SHA256)</li>
* <li>encrypted private key could be stored in Qortal AT for access by Bob from any node</li>
* </ul>
* </li>
* <li>Bob deploys Qortal AT
* <ul>
* </ul>
* </li>
* <li>Alice finds Qortal AT and wants to trade
* <ul>
* <li>Alice generates Dogecoin & Qortal 'trade' keys</li>
* <li>Alice funds Dogecoin P2SH-A</li>
* <li>Alice sends 'offer' MESSAGE to Bob from her Qortal trade address, containing:
* <ul>
* <li>hash-of-secret-A</li>
* <li>her 'trade' Dogecoin PKH</li>
* </ul>
* </li>
* </ul>
* </li>
* <li>Bob receives "offer" MESSAGE
* <ul>
* <li>Checks Alice's P2SH-A</li>
* <li>Sends 'trade' MESSAGE to Qortal AT from his trade address, containing:
* <ul>
* <li>Alice's trade Qortal address</li>
* <li>Alice's trade Dogecoin PKH</li>
* <li>hash-of-secret-A</li>
* </ul>
* </li>
* </ul>
* </li>
* <li>Alice checks Qortal AT to confirm it's locked to her
* <ul>
* <li>Alice sends 'redeem' MESSAGE to Qortal AT from her trade address, containing:
* <ul>
* <li>secret-A</li>
* <li>Qortal receiving address of her choosing</li>
* </ul>
* </li>
* <li>AT's QORT funds are sent to Qortal receiving address</li>
* </ul>
* </li>
* <li>Bob checks AT, extracts secret-A
* <ul>
* <li>Bob redeems P2SH-A using his Dogecoin trade key and secret-A</li>
* <li>P2SH-A DOGE funds end up at Dogecoin address determined by redeem transaction output(s)</li>
* </ul>
* </li>
* </ul>
*/
public class DogecoinACCTv1 implements ACCT {
private static final Logger LOGGER = LogManager.getLogger(DogecoinACCTv1.class);
public static final String NAME = DogecoinACCTv1.class.getSimpleName();
public static final byte[] CODE_BYTES_HASH = HashCode.fromString("0eb49b0313ff3855a29d860c2a8203faa2ef62e28ea30459321f176079cfa3a5").asBytes(); // SHA256 of AT code bytes
public static final int SECRET_LENGTH = 32;
/** <b>Value</b> offset into AT segment where 'mode' variable (long) is stored. (Multiply by MachineState.VALUE_SIZE for byte offset). */
private static final int MODE_VALUE_OFFSET = 61;
/** <b>Byte</b> offset into AT state data where 'mode' variable (long) is stored. */
public static final int MODE_BYTE_OFFSET = MachineState.HEADER_LENGTH + (MODE_VALUE_OFFSET * MachineState.VALUE_SIZE);
public static class OfferMessageData {
public byte[] partnerDogecoinPKH;
public byte[] hashOfSecretA;
public long lockTimeA;
}
public static final int OFFER_MESSAGE_LENGTH = 20 /*partnerDogecoinPKH*/ + 20 /*hashOfSecretA*/ + 8 /*lockTimeA*/;
public static final int TRADE_MESSAGE_LENGTH = 32 /*partner's Qortal trade address (padded from 25 to 32)*/
+ 24 /*partner's Dogecoin PKH (padded from 20 to 24)*/
+ 8 /*AT trade timeout (minutes)*/
+ 24 /*hash of secret-A (padded from 20 to 24)*/
+ 8 /*lockTimeA*/;
public static final int REDEEM_MESSAGE_LENGTH = 32 /*secret-A*/ + 32 /*partner's Qortal receiving address padded from 25 to 32*/;
public static final int CANCEL_MESSAGE_LENGTH = 32 /*AT creator's Qortal address*/;
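// Illustrative sketch only - not this class's actual message-builder (which is outside this excerpt).
// The constants above imply a fixed 96-byte 'trade' MESSAGE layout, with the partner's Dogecoin PKH
// starting at byte offset 32 and hash-of-secret-A at byte offset 64, matching the
// addrTradeMessagePartnerDogecoinPKHOffset / addrTradeMessageHashOfSecretAOffset values used further below.
// The byte order of the two long fields is an assumption for illustration only.
private static byte[] sketchTradeMessageLayout(byte[] partnerQortalAddress25, byte[] partnerDogecoinPKH20,
byte[] hashOfSecretA20, long refundTimeoutMinutes, long lockTimeA) {
ByteBuffer buf = ByteBuffer.allocate(TRADE_MESSAGE_LENGTH); // 32 + 24 + 8 + 24 + 8 = 96 bytes
buf.put(Bytes.ensureCapacity(partnerQortalAddress25, 32, 0)); // partner's Qortal trade address, padded from 25 to 32
buf.put(Bytes.ensureCapacity(partnerDogecoinPKH20, 24, 0)); // partner's Dogecoin PKH, padded from 20 to 24
buf.putLong(refundTimeoutMinutes); // AT trade timeout (minutes)
buf.put(Bytes.ensureCapacity(hashOfSecretA20, 24, 0)); // hash of secret-A, padded from 20 to 24
buf.putLong(lockTimeA); // lockTime-A
return buf.array();
}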
private static DogecoinACCTv1 instance;
private DogecoinACCTv1() {
}
public static synchronized DogecoinACCTv1 getInstance() {
if (instance == null)
instance = new DogecoinACCTv1();
return instance;
}
@Override
public byte[] getCodeBytesHash() {
return CODE_BYTES_HASH;
}
@Override
public int getModeByteOffset() {
return MODE_BYTE_OFFSET;
}
@Override
public ForeignBlockchain getBlockchain() {
return Dogecoin.getInstance();
}
/**
* Returns Qortal AT creation bytes for cross-chain trading AT.
* <p>
* <tt>tradeTimeout</tt> (minutes) is the time window for the trade partner to send the
* 32-byte secret to the AT, before the AT automatically refunds the AT's creator.
*
* @param creatorTradeAddress AT creator's trade Qortal address
* @param dogecoinPublicKeyHash 20-byte HASH160 of creator's trade Dogecoin public key
* @param qortAmount how much QORT to pay the trade partner if they send the correct 32-byte secret to the AT
* @param dogecoinAmount how much DOGE the AT creator is expecting to trade
* @param tradeTimeout suggested timeout for entire trade
*/
public static byte[] buildQortalAT(String creatorTradeAddress, byte[] dogecoinPublicKeyHash, long qortAmount, long dogecoinAmount, int tradeTimeout) {
if (dogecoinPublicKeyHash.length != 20)
throw new IllegalArgumentException("Dogecoin public key hash should be 20 bytes");
// Labels for data segment addresses
int addrCounter = 0;
// Constants (with corresponding dataByteBuffer.put*() calls below)
final int addrCreatorTradeAddress1 = addrCounter++;
final int addrCreatorTradeAddress2 = addrCounter++;
final int addrCreatorTradeAddress3 = addrCounter++;
final int addrCreatorTradeAddress4 = addrCounter++;
final int addrDogecoinPublicKeyHash = addrCounter;
addrCounter += 4;
final int addrQortAmount = addrCounter++;
final int addrDogecoinAmount = addrCounter++;
final int addrTradeTimeout = addrCounter++;
final int addrMessageTxnType = addrCounter++;
final int addrExpectedTradeMessageLength = addrCounter++;
final int addrExpectedRedeemMessageLength = addrCounter++;
final int addrCreatorAddressPointer = addrCounter++;
final int addrQortalPartnerAddressPointer = addrCounter++;
final int addrMessageSenderPointer = addrCounter++;
final int addrTradeMessagePartnerDogecoinPKHOffset = addrCounter++;
final int addrPartnerDogecoinPKHPointer = addrCounter++;
final int addrTradeMessageHashOfSecretAOffset = addrCounter++;
final int addrHashOfSecretAPointer = addrCounter++;
final int addrRedeemMessageReceivingAddressOffset = addrCounter++;
final int addrMessageDataPointer = addrCounter++;
final int addrMessageDataLength = addrCounter++;
final int addrPartnerReceivingAddressPointer = addrCounter++;
final int addrEndOfConstants = addrCounter;
// Variables
final int addrCreatorAddress1 = addrCounter++;
final int addrCreatorAddress2 = addrCounter++;
final int addrCreatorAddress3 = addrCounter++;
final int addrCreatorAddress4 = addrCounter++;
final int addrQortalPartnerAddress1 = addrCounter++;
final int addrQortalPartnerAddress2 = addrCounter++;
final int addrQortalPartnerAddress3 = addrCounter++;
final int addrQortalPartnerAddress4 = addrCounter++;
final int addrLockTimeA = addrCounter++;
final int addrRefundTimeout = addrCounter++;
final int addrRefundTimestamp = addrCounter++;
final int addrLastTxnTimestamp = addrCounter++;
final int addrBlockTimestamp = addrCounter++;
final int addrTxnType = addrCounter++;
final int addrResult = addrCounter++;
final int addrMessageSender1 = addrCounter++;
final int addrMessageSender2 = addrCounter++;
final int addrMessageSender3 = addrCounter++;
final int addrMessageSender4 = addrCounter++;
final int addrMessageLength = addrCounter++;
final int addrMessageData = addrCounter;
addrCounter += 4;
final int addrHashOfSecretA = addrCounter;
addrCounter += 4;
final int addrPartnerDogecoinPKH = addrCounter;
addrCounter += 4;
final int addrPartnerReceivingAddress = addrCounter;
addrCounter += 4;
final int addrMode = addrCounter++;
assert addrMode == MODE_VALUE_OFFSET : String.format("addrMode %d does not match MODE_VALUE_OFFSET %d", addrMode, MODE_VALUE_OFFSET);
// Data segment
ByteBuffer dataByteBuffer = ByteBuffer.allocate(addrCounter * MachineState.VALUE_SIZE);
// AT creator's trade Qortal address, decoded from Base58
assert dataByteBuffer.position() == addrCreatorTradeAddress1 * MachineState.VALUE_SIZE : "addrCreatorTradeAddress1 incorrect";
byte[] creatorTradeAddressBytes = Base58.decode(creatorTradeAddress);
dataByteBuffer.put(Bytes.ensureCapacity(creatorTradeAddressBytes, 32, 0));
// Dogecoin public key hash
assert dataByteBuffer.position() == addrDogecoinPublicKeyHash * MachineState.VALUE_SIZE : "addrDogecoinPublicKeyHash incorrect";
dataByteBuffer.put(Bytes.ensureCapacity(dogecoinPublicKeyHash, 32, 0));
// Redeem Qort amount
assert dataByteBuffer.position() == addrQortAmount * MachineState.VALUE_SIZE : "addrQortAmount incorrect";
dataByteBuffer.putLong(qortAmount);
// Expected Dogecoin amount
assert dataByteBuffer.position() == addrDogecoinAmount * MachineState.VALUE_SIZE : "addrDogecoinAmount incorrect";
dataByteBuffer.putLong(dogecoinAmount);
// Suggested trade timeout (minutes)
assert dataByteBuffer.position() == addrTradeTimeout * MachineState.VALUE_SIZE : "addrTradeTimeout incorrect";
dataByteBuffer.putLong(tradeTimeout);
// We're only interested in MESSAGE transactions
assert dataByteBuffer.position() == addrMessageTxnType * MachineState.VALUE_SIZE : "addrMessageTxnType incorrect";
dataByteBuffer.putLong(API.ATTransactionType.MESSAGE.value);
// Expected length of 'trade' MESSAGE data from AT creator
assert dataByteBuffer.position() == addrExpectedTradeMessageLength * MachineState.VALUE_SIZE : "addrExpectedTradeMessageLength incorrect";
dataByteBuffer.putLong(TRADE_MESSAGE_LENGTH);
// Expected length of 'redeem' MESSAGE data from trade partner
assert dataByteBuffer.position() == addrExpectedRedeemMessageLength * MachineState.VALUE_SIZE : "addrExpectedRedeemMessageLength incorrect";
dataByteBuffer.putLong(REDEEM_MESSAGE_LENGTH);
// Index into data segment of AT creator's address, used by GET_B_IND
assert dataByteBuffer.position() == addrCreatorAddressPointer * MachineState.VALUE_SIZE : "addrCreatorAddressPointer incorrect";
dataByteBuffer.putLong(addrCreatorAddress1);
// Index into data segment of partner's Qortal address, used by SET_B_IND
assert dataByteBuffer.position() == addrQortalPartnerAddressPointer * MachineState.VALUE_SIZE : "addrQortalPartnerAddressPointer incorrect";
dataByteBuffer.putLong(addrQortalPartnerAddress1);
// Index into data segment of (temporary) transaction's sender's address, used by GET_B_IND
assert dataByteBuffer.position() == addrMessageSenderPointer * MachineState.VALUE_SIZE : "addrMessageSenderPointer incorrect";
dataByteBuffer.putLong(addrMessageSender1);
// Offset into 'trade' MESSAGE data payload for extracting partner's Dogecoin PKH
assert dataByteBuffer.position() == addrTradeMessagePartnerDogecoinPKHOffset * MachineState.VALUE_SIZE : "addrTradeMessagePartnerDogecoinPKHOffset incorrect";
dataByteBuffer.putLong(32L);
// Index into data segment of partner's Dogecoin PKH, used by GET_B_IND
assert dataByteBuffer.position() == addrPartnerDogecoinPKHPointer * MachineState.VALUE_SIZE : "addrPartnerDogecoinPKHPointer incorrect";
dataByteBuffer.putLong(addrPartnerDogecoinPKH);
// Offset into 'trade' MESSAGE data payload for extracting hash-of-secret-A
assert dataByteBuffer.position() == addrTradeMessageHashOfSecretAOffset * MachineState.VALUE_SIZE : "addrTradeMessageHashOfSecretAOffset incorrect";
dataByteBuffer.putLong(64L);
// Index into data segment to hash of secret A, used by GET_B_IND
assert dataByteBuffer.position() == addrHashOfSecretAPointer * MachineState.VALUE_SIZE : "addrHashOfSecretAPointer incorrect";
dataByteBuffer.putLong(addrHashOfSecretA);
// Offset into 'redeem' MESSAGE data payload for extracting Qortal receiving address
assert dataByteBuffer.position() == addrRedeemMessageReceivingAddressOffset * MachineState.VALUE_SIZE : "addrRedeemMessageReceivingAddressOffset incorrect";
dataByteBuffer.putLong(32L);
// Source location and length for hashing any passed secret
assert dataByteBuffer.position() == addrMessageDataPointer * MachineState.VALUE_SIZE : "addrMessageDataPointer incorrect";
dataByteBuffer.putLong(addrMessageData);
assert dataByteBuffer.position() == addrMessageDataLength * MachineState.VALUE_SIZE : "addrMessageDataLength incorrect";
dataByteBuffer.putLong(32L);
// Pointer into data segment of where to save partner's receiving Qortal address, used by GET_B_IND
assert dataByteBuffer.position() == addrPartnerReceivingAddressPointer * MachineState.VALUE_SIZE : "addrPartnerReceivingAddressPointer incorrect";
dataByteBuffer.putLong(addrPartnerReceivingAddress);
assert dataByteBuffer.position() == addrEndOfConstants * MachineState.VALUE_SIZE : "dataByteBuffer position not at end of constants";
// Code labels
Integer labelRefund = null;
Integer labelTradeTxnLoop = null;
Integer labelCheckTradeTxn = null;
Integer labelCheckCancelTxn = null;
Integer labelNotTradeNorCancelTxn = null;
Integer labelCheckNonRefundTradeTxn = null;
Integer labelTradeTxnExtract = null;
Integer labelRedeemTxnLoop = null;
Integer labelCheckRedeemTxn = null;
Integer labelCheckRedeemTxnSender = null;
Integer labelPayout = null;
ByteBuffer codeByteBuffer = ByteBuffer.allocate(768);
// Two-pass version
for (int pass = 0; pass < 2; ++pass) {
codeByteBuffer.clear();
try {
/* Initialization */
/* NOP - to ensure DOGECOIN ACCT is unique */
codeByteBuffer.put(OpCode.NOP.compile());
// Use AT creation 'timestamp' as starting point for finding transactions sent to AT
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_CREATION_TIMESTAMP, addrLastTxnTimestamp));
// Load B register with AT creator's address so we can save it into addrCreatorAddress1-4
codeByteBuffer.put(OpCode.EXT_FUN.compile(FunctionCode.PUT_CREATOR_INTO_B));
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrCreatorAddressPointer));
// Set restart position to after this opcode
codeByteBuffer.put(OpCode.SET_PCS.compile());
/* Loop, waiting for message from AT creator's trade address containing trade partner details, or AT owner's address to cancel offer */
/* Transaction processing loop */
labelTradeTxnLoop = codeByteBuffer.position();
// Find next transaction (if any) to this AT since the last one (referenced by addrLastTxnTimestamp)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.PUT_TX_AFTER_TIMESTAMP_INTO_A, addrLastTxnTimestamp));
// If no transaction found, A will be zero. If A is zero, set addrResult to 1, otherwise 0.
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.CHECK_A_IS_ZERO, addrResult));
// If addrResult is zero (i.e. A is non-zero, transaction was found) then go check transaction
codeByteBuffer.put(OpCode.BZR_DAT.compile(addrResult, calcOffset(codeByteBuffer, labelCheckTradeTxn)));
// Stop and wait for next block
codeByteBuffer.put(OpCode.STP_IMD.compile());
/* Check transaction */
labelCheckTradeTxn = codeByteBuffer.position();
// Update our 'last found transaction's timestamp' using 'timestamp' from transaction
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_TIMESTAMP_FROM_TX_IN_A, addrLastTxnTimestamp));
// Extract transaction type (message/payment) from transaction and save type in addrTxnType
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_TYPE_FROM_TX_IN_A, addrTxnType));
// If transaction type is not MESSAGE type then go look for another transaction
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrTxnType, addrMessageTxnType, calcOffset(codeByteBuffer, labelTradeTxnLoop)));
/* Check transaction's sender. We're expecting AT creator's trade address for 'trade' message, or AT creator's own address for 'cancel' message. */
// Extract sender address from transaction into B register
codeByteBuffer.put(OpCode.EXT_FUN.compile(FunctionCode.PUT_ADDRESS_FROM_TX_IN_A_INTO_B));
// Save B register into data segment starting at addrMessageSender1 (as pointed to by addrMessageSenderPointer)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrMessageSenderPointer));
// Compare each part of message sender's address with AT creator's trade address. If they don't match, check for cancel situation.
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender1, addrCreatorTradeAddress1, calcOffset(codeByteBuffer, labelCheckCancelTxn)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender2, addrCreatorTradeAddress2, calcOffset(codeByteBuffer, labelCheckCancelTxn)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender3, addrCreatorTradeAddress3, calcOffset(codeByteBuffer, labelCheckCancelTxn)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender4, addrCreatorTradeAddress4, calcOffset(codeByteBuffer, labelCheckCancelTxn)));
// Message sender's address matches AT creator's trade address so go process 'trade' message
codeByteBuffer.put(OpCode.JMP_ADR.compile(labelCheckNonRefundTradeTxn == null ? 0 : labelCheckNonRefundTradeTxn));
/* Checking message sender for possible cancel message */
labelCheckCancelTxn = codeByteBuffer.position();
// Compare each part of message sender's address with AT creator's address. If they don't match, look for another transaction.
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender1, addrCreatorAddress1, calcOffset(codeByteBuffer, labelNotTradeNorCancelTxn)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender2, addrCreatorAddress2, calcOffset(codeByteBuffer, labelNotTradeNorCancelTxn)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender3, addrCreatorAddress3, calcOffset(codeByteBuffer, labelNotTradeNorCancelTxn)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender4, addrCreatorAddress4, calcOffset(codeByteBuffer, labelNotTradeNorCancelTxn)));
// Partner address is AT creator's address, so cancel offer and finish.
codeByteBuffer.put(OpCode.SET_VAL.compile(addrMode, AcctMode.CANCELLED.value));
// We're finished forever (finishing auto-refunds remaining balance to AT creator)
codeByteBuffer.put(OpCode.FIN_IMD.compile());
/* Not trade nor cancel message */
labelNotTradeNorCancelTxn = codeByteBuffer.position();
// Loop to find another transaction
codeByteBuffer.put(OpCode.JMP_ADR.compile(labelTradeTxnLoop == null ? 0 : labelTradeTxnLoop));
/* Possible switch-to-trade-mode message */
labelCheckNonRefundTradeTxn = codeByteBuffer.position();
// Check 'trade' message we received has expected number of message bytes
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(QortalFunctionCode.GET_MESSAGE_LENGTH_FROM_TX_IN_A.value, addrMessageLength));
// If message length matches, branch to info extraction code
codeByteBuffer.put(OpCode.BEQ_DAT.compile(addrMessageLength, addrExpectedTradeMessageLength, calcOffset(codeByteBuffer, labelTradeTxnExtract)));
// Message length didn't match - go back to finding another 'trade' MESSAGE transaction
codeByteBuffer.put(OpCode.JMP_ADR.compile(labelTradeTxnLoop == null ? 0 : labelTradeTxnLoop));
/* Extracting info from 'trade' MESSAGE transaction */
labelTradeTxnExtract = codeByteBuffer.position();
// Extract message from transaction into B register
codeByteBuffer.put(OpCode.EXT_FUN.compile(FunctionCode.PUT_MESSAGE_FROM_TX_IN_A_INTO_B));
// Save B register into data segment starting at addrQortalPartnerAddress1 (as pointed to by addrQortalPartnerAddressPointer)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrQortalPartnerAddressPointer));
// Extract trade partner's Dogecoin public key hash (PKH) from message into B
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(QortalFunctionCode.PUT_PARTIAL_MESSAGE_FROM_TX_IN_A_INTO_B.value, addrTradeMessagePartnerDogecoinPKHOffset));
// Store partner's Dogecoin PKH (we only really use values from B1-B3)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrPartnerDogecoinPKHPointer));
// Extract AT trade timeout (minutes) (from B4)
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_B4, addrRefundTimeout));
// Grab next 32 bytes
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(QortalFunctionCode.PUT_PARTIAL_MESSAGE_FROM_TX_IN_A_INTO_B.value, addrTradeMessageHashOfSecretAOffset));
// Extract hash-of-secret-A (we only really use values from B1-B3)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrHashOfSecretAPointer));
// Extract lockTime-A (from B4)
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_B4, addrLockTimeA));
// Calculate trade timeout refund 'timestamp' by adding addrRefundTimeout minutes to this transaction's 'timestamp', then save into addrRefundTimestamp
codeByteBuffer.put(OpCode.EXT_FUN_RET_DAT_2.compile(FunctionCode.ADD_MINUTES_TO_TIMESTAMP, addrRefundTimestamp, addrLastTxnTimestamp, addrRefundTimeout));
/* We are in 'trade mode' */
codeByteBuffer.put(OpCode.SET_VAL.compile(addrMode, AcctMode.TRADING.value));
// Set restart position to after this opcode
codeByteBuffer.put(OpCode.SET_PCS.compile());
/* Loop, waiting for trade timeout or 'redeem' MESSAGE from Qortal trade partner */
// Fetch current block 'timestamp'
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_BLOCK_TIMESTAMP, addrBlockTimestamp));
// If we're not past refund 'timestamp' then look for next transaction
codeByteBuffer.put(OpCode.BLT_DAT.compile(addrBlockTimestamp, addrRefundTimestamp, calcOffset(codeByteBuffer, labelRedeemTxnLoop)));
// We're past refund 'timestamp' so go refund everything back to AT creator
codeByteBuffer.put(OpCode.JMP_ADR.compile(labelRefund == null ? 0 : labelRefund));
/* Transaction processing loop */
labelRedeemTxnLoop = codeByteBuffer.position();
// Find next transaction to this AT since the last one (if any)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.PUT_TX_AFTER_TIMESTAMP_INTO_A, addrLastTxnTimestamp));
// If no transaction found, A will be zero. If A is zero, set addrResult to 1, otherwise 0.
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.CHECK_A_IS_ZERO, addrResult));
// If addrResult is zero (i.e. A is non-zero, transaction was found) then go check transaction
codeByteBuffer.put(OpCode.BZR_DAT.compile(addrResult, calcOffset(codeByteBuffer, labelCheckRedeemTxn)));
// Stop and wait for next block
codeByteBuffer.put(OpCode.STP_IMD.compile());
/* Check transaction */
labelCheckRedeemTxn = codeByteBuffer.position();
// Update our 'last found transaction's timestamp' using 'timestamp' from transaction
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_TIMESTAMP_FROM_TX_IN_A, addrLastTxnTimestamp));
// Extract transaction type (message/payment) from transaction and save type in addrTxnType
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_TYPE_FROM_TX_IN_A, addrTxnType));
// If transaction type is not MESSAGE type then go look for another transaction
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrTxnType, addrMessageTxnType, calcOffset(codeByteBuffer, labelRedeemTxnLoop)));
/* Check message payload length */
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(QortalFunctionCode.GET_MESSAGE_LENGTH_FROM_TX_IN_A.value, addrMessageLength));
// If message length matches, branch to sender checking code
codeByteBuffer.put(OpCode.BEQ_DAT.compile(addrMessageLength, addrExpectedRedeemMessageLength, calcOffset(codeByteBuffer, labelCheckRedeemTxnSender)));
// Message length didn't match - go back to finding another 'redeem' MESSAGE transaction
codeByteBuffer.put(OpCode.JMP_ADR.compile(labelRedeemTxnLoop == null ? 0 : labelRedeemTxnLoop));
/* Check transaction's sender */
labelCheckRedeemTxnSender = codeByteBuffer.position();
// Extract sender address from transaction into B register
codeByteBuffer.put(OpCode.EXT_FUN.compile(FunctionCode.PUT_ADDRESS_FROM_TX_IN_A_INTO_B));
// Save B register into data segment starting at addrMessageSender1 (as pointed to by addrMessageSenderPointer)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrMessageSenderPointer));
// Compare each part of transaction's sender's address with expected address. If they don't match, look for another transaction.
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender1, addrQortalPartnerAddress1, calcOffset(codeByteBuffer, labelRedeemTxnLoop)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender2, addrQortalPartnerAddress2, calcOffset(codeByteBuffer, labelRedeemTxnLoop)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender3, addrQortalPartnerAddress3, calcOffset(codeByteBuffer, labelRedeemTxnLoop)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender4, addrQortalPartnerAddress4, calcOffset(codeByteBuffer, labelRedeemTxnLoop)));
/* Check 'secret-A' in transaction's message */
// Extract secret-A from first 32 bytes of message from transaction into B register
codeByteBuffer.put(OpCode.EXT_FUN.compile(FunctionCode.PUT_MESSAGE_FROM_TX_IN_A_INTO_B));
// Save B register into data segment starting at addrMessageData (as pointed to by addrMessageDataPointer)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrMessageDataPointer));
// Load B register with expected hash result (as pointed to by addrHashOfSecretAPointer)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.SET_B_IND, addrHashOfSecretAPointer));
// Perform HASH160 using source data at addrMessageData. (Location and length specified via addrMessageDataPointer and addrMessageDataLength).
// Save the equality result (1 if they match, 0 otherwise) into addrResult.
codeByteBuffer.put(OpCode.EXT_FUN_RET_DAT_2.compile(FunctionCode.CHECK_HASH160_WITH_B, addrResult, addrMessageDataPointer, addrMessageDataLength));
// If the hashes match, addrResult will be non-zero so branch to payout; otherwise fall through and loop to find another transaction
codeByteBuffer.put(OpCode.BNZ_DAT.compile(addrResult, calcOffset(codeByteBuffer, labelPayout)));
codeByteBuffer.put(OpCode.JMP_ADR.compile(labelRedeemTxnLoop == null ? 0 : labelRedeemTxnLoop));
/* Success! Pay arranged amount to receiving address */
labelPayout = codeByteBuffer.position();
// Extract Qortal receiving address from next 32 bytes of message from transaction into B register
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(QortalFunctionCode.PUT_PARTIAL_MESSAGE_FROM_TX_IN_A_INTO_B.value, addrRedeemMessageReceivingAddressOffset));
// Save B register into data segment starting at addrPartnerReceivingAddress (as pointed to by addrPartnerReceivingAddressPointer)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrPartnerReceivingAddressPointer));
// Pay AT's balance to receiving address
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.PAY_TO_ADDRESS_IN_B, addrQortAmount));
// Set redeemed mode
codeByteBuffer.put(OpCode.SET_VAL.compile(addrMode, AcctMode.REDEEMED.value));
// We're finished forever (finishing auto-refunds remaining balance to AT creator)
codeByteBuffer.put(OpCode.FIN_IMD.compile());
// Fall-through to refunding any remaining balance back to AT creator
/* Refund balance back to AT creator */
labelRefund = codeByteBuffer.position();
// Set refunded mode
codeByteBuffer.put(OpCode.SET_VAL.compile(addrMode, AcctMode.REFUNDED.value));
// We're finished forever (finishing auto-refunds remaining balance to AT creator)
codeByteBuffer.put(OpCode.FIN_IMD.compile());
} catch (CompilationException e) {
throw new IllegalStateException("Unable to compile DOGE-QORT ACCT?", e);
}
}
codeByteBuffer.flip();
byte[] codeBytes = new byte[codeByteBuffer.limit()];
codeByteBuffer.get(codeBytes);
assert Arrays.equals(Crypto.digest(codeBytes), DogecoinACCTv1.CODE_BYTES_HASH)
: String.format("BTCACCT.CODE_BYTES_HASH mismatch: expected %s, actual %s", HashCode.fromBytes(CODE_BYTES_HASH), HashCode.fromBytes(Crypto.digest(codeBytes)));
final short ciyamAtVersion = 2;
final short numCallStackPages = 0;
final short numUserStackPages = 0;
final long minActivationAmount = 0L;
return MachineState.toCreationBytes(ciyamAtVersion, codeBytes, dataByteBuffer.array(), numCallStackPages, numUserStackPages, minActivationAmount);
}
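// Hypothetical usage sketch of buildQortalAT() above. The amounts and timeout are invented;
// the creator address and Dogecoin PKH are taken as parameters rather than real values.
// In Qortal, the returned creation bytes become the payload of a DEPLOY_AT transaction.
private static byte[] deployPayloadSketch(String creatorTradeAddress, byte[] creatorDogecoinPKH) {
long qortAmount = 80_00000000L; // 80 QORT to pay the partner on successful redeem (8 decimal places)
long dogecoinAmount = 10_00000000L; // 10 DOGE expected from the partner, in sats
int tradeTimeout = 1440; // suggested 24-hour trade window, in minutes
return buildQortalAT(creatorTradeAddress, creatorDogecoinPKH, qortAmount, dogecoinAmount, tradeTimeout);
}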
/**
* Returns CrossChainTradeData with useful info extracted from AT.
*/
@Override
public CrossChainTradeData populateTradeData(Repository repository, ATData atData) throws DataException {
ATStateData atStateData = repository.getATRepository().getLatestATState(atData.getATAddress());
return populateTradeData(repository, atData.getCreatorPublicKey(), atData.getCreation(), atStateData);
}
/**
* Returns CrossChainTradeData with useful info extracted from AT.
*/
@Override
public CrossChainTradeData populateTradeData(Repository repository, ATStateData atStateData) throws DataException {
ATData atData = repository.getATRepository().fromATAddress(atStateData.getATAddress());
return populateTradeData(repository, atData.getCreatorPublicKey(), atData.getCreation(), atStateData);
}
/**
* Returns CrossChainTradeData with useful info extracted from AT.
*/
public CrossChainTradeData populateTradeData(Repository repository, byte[] creatorPublicKey, long creationTimestamp, ATStateData atStateData) throws DataException {
byte[] addressBytes = new byte[25]; // for general use
String atAddress = atStateData.getATAddress();
CrossChainTradeData tradeData = new CrossChainTradeData();
tradeData.foreignBlockchain = SupportedBlockchain.DOGECOIN.name();
tradeData.acctName = NAME;
tradeData.qortalAtAddress = atAddress;
tradeData.qortalCreator = Crypto.toAddress(creatorPublicKey);
tradeData.creationTimestamp = creationTimestamp;
Account atAccount = new Account(repository, atAddress);
tradeData.qortBalance = atAccount.getConfirmedBalance(Asset.QORT);
byte[] stateData = atStateData.getStateData();
ByteBuffer dataByteBuffer = ByteBuffer.wrap(stateData);
dataByteBuffer.position(MachineState.HEADER_LENGTH);
/* Constants */
// Skip creator's trade address
dataByteBuffer.get(addressBytes);
tradeData.qortalCreatorTradeAddress = Base58.encode(addressBytes);
dataByteBuffer.position(dataByteBuffer.position() + 32 - addressBytes.length);
// Creator's Dogecoin/foreign public key hash
tradeData.creatorForeignPKH = new byte[20];
dataByteBuffer.get(tradeData.creatorForeignPKH);
dataByteBuffer.position(dataByteBuffer.position() + 32 - tradeData.creatorForeignPKH.length); // skip to 32 bytes
// We don't use secret-B
tradeData.hashOfSecretB = null;
// Redeem payout
tradeData.qortAmount = dataByteBuffer.getLong();
// Expected DOGE amount
tradeData.expectedForeignAmount = dataByteBuffer.getLong();
// Trade timeout
tradeData.tradeTimeout = (int) dataByteBuffer.getLong();
// Skip MESSAGE transaction type
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip expected 'trade' message length
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip expected 'redeem' message length
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to creator's address
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to partner's Qortal trade address
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to message sender
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip 'trade' message data offset for partner's Dogecoin PKH
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to partner's Dogecoin PKH
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip 'trade' message data offset for hash-of-secret-A
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to hash-of-secret-A
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip 'redeem' message data offset for partner's Qortal receiving address
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to message data
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip message data length
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to partner's receiving address
dataByteBuffer.position(dataByteBuffer.position() + 8);
/* End of constants / begin variables */
// Skip AT creator's address
dataByteBuffer.position(dataByteBuffer.position() + 8 * 4);
// Partner's trade address (if present)
dataByteBuffer.get(addressBytes);
String qortalRecipient = Base58.encode(addressBytes);
dataByteBuffer.position(dataByteBuffer.position() + 32 - addressBytes.length);
// Potential lockTimeA (if in trade mode)
int lockTimeA = (int) dataByteBuffer.getLong();
// AT refund timeout (probably only useful for debugging)
int refundTimeout = (int) dataByteBuffer.getLong();
// Trade-mode refund timestamp (AT 'timestamp' converted to Qortal block height)
long tradeRefundTimestamp = dataByteBuffer.getLong();
// Skip last transaction timestamp
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip block timestamp
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip transaction type
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip temporary result
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip temporary message sender
dataByteBuffer.position(dataByteBuffer.position() + 8 * 4);
// Skip message length
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip temporary message data
dataByteBuffer.position(dataByteBuffer.position() + 8 * 4);
// Potential hash160 of secret A
byte[] hashOfSecretA = new byte[20];
dataByteBuffer.get(hashOfSecretA);
dataByteBuffer.position(dataByteBuffer.position() + 32 - hashOfSecretA.length); // skip to 32 bytes
// Potential partner's Dogecoin PKH
byte[] partnerDogecoinPKH = new byte[20];
dataByteBuffer.get(partnerDogecoinPKH);
dataByteBuffer.position(dataByteBuffer.position() + 32 - partnerDogecoinPKH.length); // skip to 32 bytes
// Partner's receiving address (if present)
byte[] partnerReceivingAddress = new byte[25];
dataByteBuffer.get(partnerReceivingAddress);
dataByteBuffer.position(dataByteBuffer.position() + 32 - partnerReceivingAddress.length); // skip to 32 bytes
// Trade AT's 'mode'
long modeValue = dataByteBuffer.getLong();
AcctMode mode = AcctMode.valueOf((int) (modeValue & 0xffL));
/* End of variables */
if (mode != null && mode != AcctMode.OFFERING) {
tradeData.mode = mode;
tradeData.refundTimeout = refundTimeout;
tradeData.tradeRefundHeight = new Timestamp(tradeRefundTimestamp).blockHeight;
tradeData.qortalPartnerAddress = qortalRecipient;
tradeData.hashOfSecretA = hashOfSecretA;
tradeData.partnerForeignPKH = partnerDogecoinPKH;
tradeData.lockTimeA = lockTimeA;
if (mode == AcctMode.REDEEMED)
tradeData.qortalPartnerReceivingAddress = Base58.encode(partnerReceivingAddress);
} else {
tradeData.mode = AcctMode.OFFERING;
}
tradeData.duplicateDeprecated();
return tradeData;
}
/** Returns 'offer' MESSAGE payload for trade partner to send to AT creator's trade address. */
public static byte[] buildOfferMessage(byte[] partnerBitcoinPKH, byte[] hashOfSecretA, int lockTimeA) {
byte[] lockTimeABytes = BitTwiddling.toBEByteArray((long) lockTimeA);
return Bytes.concat(partnerBitcoinPKH, hashOfSecretA, lockTimeABytes);
}
/** Returns info extracted from 'offer' MESSAGE payload sent by trade partner to AT creator's trade address, or null if not valid. */
public static OfferMessageData extractOfferMessageData(byte[] messageData) {
if (messageData == null || messageData.length != OFFER_MESSAGE_LENGTH)
return null;
OfferMessageData offerMessageData = new OfferMessageData();
offerMessageData.partnerDogecoinPKH = Arrays.copyOfRange(messageData, 0, 20);
offerMessageData.hashOfSecretA = Arrays.copyOfRange(messageData, 20, 40);
offerMessageData.lockTimeA = BitTwiddling.longFromBEBytes(messageData, 40);
return offerMessageData;
}
/** Returns 'trade' MESSAGE payload for AT creator to send to AT. */
public static byte[] buildTradeMessage(String partnerQortalTradeAddress, byte[] partnerBitcoinPKH, byte[] hashOfSecretA, int lockTimeA, int refundTimeout) {
byte[] data = new byte[TRADE_MESSAGE_LENGTH];
byte[] partnerQortalAddressBytes = Base58.decode(partnerQortalTradeAddress);
byte[] lockTimeABytes = BitTwiddling.toBEByteArray((long) lockTimeA);
byte[] refundTimeoutBytes = BitTwiddling.toBEByteArray((long) refundTimeout);
System.arraycopy(partnerQortalAddressBytes, 0, data, 0, partnerQortalAddressBytes.length);
System.arraycopy(partnerBitcoinPKH, 0, data, 32, partnerBitcoinPKH.length);
System.arraycopy(refundTimeoutBytes, 0, data, 56, refundTimeoutBytes.length);
System.arraycopy(hashOfSecretA, 0, data, 64, hashOfSecretA.length);
System.arraycopy(lockTimeABytes, 0, data, 88, lockTimeABytes.length);
return data;
}
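For reference only (not part of the commit): the 'trade' MESSAGE byte layout implied by the arraycopy offsets above, assuming TRADE_MESSAGE_LENGTH is 96 bytes; unused bytes between fields stay zero because the array is freshly allocated.

// offset  length  field
//    0      25    partner's Qortal trade address (Base58-decoded)
//   32      20    partner's Dogecoin PKH
//   56       8    refund timeout (big-endian long)
//   64      20    hash160 of secret-A
//   88       8    lockTimeA (big-endian long)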
/** Returns 'cancel' MESSAGE payload for AT creator to cancel trade AT. */
@Override
public byte[] buildCancelMessage(String creatorQortalAddress) {
byte[] data = new byte[CANCEL_MESSAGE_LENGTH];
byte[] creatorQortalAddressBytes = Base58.decode(creatorQortalAddress);
System.arraycopy(creatorQortalAddressBytes, 0, data, 0, creatorQortalAddressBytes.length);
return data;
}
/** Returns 'redeem' MESSAGE payload for trade partner to send to AT. */
public static byte[] buildRedeemMessage(byte[] secretA, String qortalReceivingAddress) {
byte[] data = new byte[REDEEM_MESSAGE_LENGTH];
byte[] qortalReceivingAddressBytes = Base58.decode(qortalReceivingAddress);
System.arraycopy(secretA, 0, data, 0, secretA.length);
System.arraycopy(qortalReceivingAddressBytes, 0, data, 32, qortalReceivingAddressBytes.length);
return data;
}
/** Returns refund timeout (minutes) based on trade partner's 'offer' MESSAGE timestamp and P2SH-A locktime. */
public static int calcRefundTimeout(long offerMessageTimestamp, int lockTimeA) {
// refund should be triggered halfway between offerMessageTimestamp and lockTimeA
return (int) ((lockTimeA - (offerMessageTimestamp / 1000L)) / 2L / 60L);
}
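A worked example of the formula above (illustrative only, not from the diff); the timestamps are made up, with the offer in milliseconds and the P2SH-A locktime two hours later in seconds.

// Hypothetical values demonstrating the halfway calculation in calcRefundTimeout().
long offerMessageTimestamp = 1_620_000_000_000L;                      // offer MESSAGE timestamp, ms
int lockTimeA = (int) (offerMessageTimestamp / 1000L) + 2 * 60 * 60;  // locktime = offer time + 2 hours, seconds
int refundTimeout = (int) ((lockTimeA - (offerMessageTimestamp / 1000L)) / 2L / 60L);
// refundTimeout == 60, i.e. the AT becomes refundable one hour after the offer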
@Override
public byte[] findSecretA(Repository repository, CrossChainTradeData crossChainTradeData) throws DataException {
String atAddress = crossChainTradeData.qortalAtAddress;
String redeemerAddress = crossChainTradeData.qortalPartnerAddress;
// We don't have partner's public key so we check every message to AT
List<MessageTransactionData> messageTransactionsData = repository.getMessageRepository().getMessagesByParticipants(null, atAddress, null, null, null);
if (messageTransactionsData == null)
return null;
// Find 'redeem' message
for (MessageTransactionData messageTransactionData : messageTransactionsData) {
// Check message payload type/encryption
if (messageTransactionData.isText() || messageTransactionData.isEncrypted())
continue;
// Check message payload size
byte[] messageData = messageTransactionData.getData();
if (messageData.length != REDEEM_MESSAGE_LENGTH)
// Wrong payload length
continue;
// Check sender
if (!Crypto.toAddress(messageTransactionData.getSenderPublicKey()).equals(redeemerAddress))
// Wrong sender;
continue;
// Extract secretA
byte[] secretA = new byte[32];
System.arraycopy(messageData, 0, secretA, 0, secretA.length);
byte[] hashOfSecretA = Crypto.hash160(secretA);
if (!Arrays.equals(hashOfSecretA, crossChainTradeData.hashOfSecretA))
continue;
return secretA;
}
return null;
}
}


@@ -33,6 +33,7 @@ import org.qortal.crypto.TrustlessSSLSocketFactory;
import com.google.common.hash.HashCode;
import com.google.common.primitives.Bytes;
import org.qortal.utils.BitTwiddling;
/** ElectrumX network support for querying Bitcoiny-related info like block headers, transaction outputs, etc. */
public class ElectrumX extends BitcoinyBlockchainProvider {
@@ -171,13 +172,41 @@ public class ElectrumX extends BitcoinyBlockchainProvider {
Long returnedCount = (Long) countObj;
String hex = (String) hexObj;
byte[] raw = HashCode.fromString(hex).asBytes();
if (raw.length != returnedCount * BLOCK_HEADER_LENGTH)
throw new ForeignBlockchainException.NetworkException("Unexpected raw header length in JSON from ElectrumX blockchain.block.headers RPC");
List<byte[]> rawBlockHeaders = new ArrayList<>(returnedCount.intValue());
for (int i = 0; i < returnedCount; ++i)
rawBlockHeaders.add(Arrays.copyOfRange(raw, i * BLOCK_HEADER_LENGTH, (i + 1) * BLOCK_HEADER_LENGTH));
byte[] raw = HashCode.fromString(hex).asBytes();
// Most chains use a fixed length 80 byte header, so block headers can be split up by dividing the hex into
// 80-byte segments. However, some chains such as DOGE use variable length headers due to AuxPoW or other
// reasons. In these cases we can identify the start of each block header by the location of the block version
// numbers. Each block starts with a version number, and for DOGE this is easily identifiable (6422788) at the
// time of writing (Jul 2021). If we encounter a chain that is using more generic version numbers (e.g. 1)
// and can't be used to accurately identify block indexes, then there are sufficient checks to ensure an
// exception is thrown.
if (raw.length == returnedCount * BLOCK_HEADER_LENGTH) {
// Fixed-length header (BTC, LTC, etc)
for (int i = 0; i < returnedCount; ++i) {
rawBlockHeaders.add(Arrays.copyOfRange(raw, i * BLOCK_HEADER_LENGTH, (i + 1) * BLOCK_HEADER_LENGTH));
}
}
else if (raw.length > returnedCount * BLOCK_HEADER_LENGTH) {
// Assume AuxPoW variable length header (DOGE)
int referenceVersion = BitTwiddling.intFromLEBytes(raw, 0); // DOGE uses 6422788 at time of commit (Jul 2021)
for (int i = 0; i < raw.length - 4; ++i) {
// Locate the start of each block by its version number
if (BitTwiddling.intFromLEBytes(raw, i) == referenceVersion) {
rawBlockHeaders.add(Arrays.copyOfRange(raw, i, i + BLOCK_HEADER_LENGTH));
}
}
// Ensure that we found the correct number of block headers
if (rawBlockHeaders.size() != count) {
throw new ForeignBlockchainException.NetworkException("Unexpected raw header contents in JSON from ElectrumX blockchain.block.headers RPC.");
}
}
else if (raw.length != returnedCount * BLOCK_HEADER_LENGTH) {
throw new ForeignBlockchainException.NetworkException("Unexpected raw header length in JSON from ElectrumX blockchain.block.headers RPC");
}
return rawBlockHeaders;
}
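A minimal standalone sketch (not part of the commit) of the version-number scan described in the comment above; leInt is a hypothetical little-endian reader standing in for BitTwiddling.intFromLEBytes, and headerLength corresponds to BLOCK_HEADER_LENGTH.

// Splits a concatenated run of variable-length headers by locating repeats of the first header's version.
static java.util.List<byte[]> splitHeadersByVersion(byte[] raw, int headerLength) {
    int referenceVersion = leInt(raw, 0);                 // every header is assumed to start with the same version
    java.util.List<byte[]> headers = new java.util.ArrayList<>();
    for (int i = 0; i <= raw.length - headerLength; ++i)
        if (leInt(raw, i) == referenceVersion)
            headers.add(java.util.Arrays.copyOfRange(raw, i, i + headerLength));
    return headers;
}

static int leInt(byte[] b, int off) {                     // little-endian 32-bit read
    return (b[off] & 0xff) | ((b[off + 1] & 0xff) << 8) | ((b[off + 2] & 0xff) << 16) | ((b[off + 3] & 0xff) << 24);
}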
@@ -518,6 +547,7 @@ public class ElectrumX extends BitcoinyBlockchainProvider {
}
// Failed to perform RPC - maybe lack of servers?
LOGGER.info("Error: No connected Electrum servers when trying to make RPC call");
throw new ForeignBlockchainException.NetworkException(String.format("Failed to perform ElectrumX RPC %s", method));
}
}
@@ -623,18 +653,27 @@ public class ElectrumX extends BitcoinyBlockchainProvider {
Object errorObj = responseJson.get("error");
if (errorObj != null) {
if (errorObj instanceof String)
throw new ForeignBlockchainException.NetworkException(String.format("Unexpected error message from ElectrumX RPC %s: %s", method, (String) errorObj), this.currentServer);
if (errorObj instanceof String) {
LOGGER.debug(String.format("Unexpected error message from ElectrumX server %s for RPC method %s: %s", this.currentServer, method, (String) errorObj));
// Try another server
return null;
}
if (!(errorObj instanceof JSONObject))
throw new ForeignBlockchainException.NetworkException(String.format("Unexpected error response from ElectrumX RPC %s", method), this.currentServer);
if (!(errorObj instanceof JSONObject)) {
LOGGER.debug(String.format("Unexpected error response from ElectrumX server %s for RPC method %s", this.currentServer, method));
// Try another server
return null;
}
JSONObject errorJson = (JSONObject) errorObj;
Object messageObj = errorJson.get("message");
if (!(messageObj instanceof String))
throw new ForeignBlockchainException.NetworkException(String.format("Missing/invalid message in error response from ElectrumX RPC %s", method), this.currentServer);
if (!(messageObj instanceof String)) {
LOGGER.debug(String.format("Missing/invalid message in error response from ElectrumX server %s for RPC method %s", this.currentServer, method));
// Try another server
return null;
}
String message = (String) messageObj;


@@ -6,4 +6,6 @@ public interface ForeignBlockchain {
public boolean isValidWalletKey(String walletKey);
public long getMinimumOrderAmount();
}


@@ -51,7 +51,10 @@ public class Litecoin extends Bitcoiny {
new Server("ltc.rentonisk.com", Server.ConnectionType.TCP, 50001),
new Server("ltc.rentonisk.com", Server.ConnectionType.SSL, 50002),
new Server("electrum-ltc.petrkr.net", Server.ConnectionType.SSL, 60002),
new Server("ltc.litepay.ch", Server.ConnectionType.SSL, 50022));
new Server("ltc.litepay.ch", Server.ConnectionType.SSL, 50022),
new Server("electrum-ltc-bysh.me", Server.ConnectionType.TCP, 50002),
new Server("electrum.jochen-hoenicke.de", Server.ConnectionType.TCP, 50005),
new Server("node.ispol.sk", Server.ConnectionType.TCP, 50004));
}
@Override


@@ -810,7 +810,8 @@ public class LitecoinACCTv1 implements ACCT {
return (int) ((lockTimeA - (offerMessageTimestamp / 1000L)) / 2L / 60L);
}
public static byte[] findSecretA(Repository repository, CrossChainTradeData crossChainTradeData) throws DataException {
@Override
public byte[] findSecretA(Repository repository, CrossChainTradeData crossChainTradeData) throws DataException {
String atAddress = crossChainTradeData.qortalAtAddress;
String redeemerAddress = crossChainTradeData.qortalPartnerAddress;


@@ -0,0 +1,109 @@
package org.qortal.crosschain;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.util.List;
@XmlAccessorType(XmlAccessType.FIELD)
public class SimpleTransaction {
private String txHash;
private Integer timestamp;
private long totalAmount;
private long feeAmount;
private List<Input> inputs;
private List<Output> outputs;
@XmlAccessorType(XmlAccessType.FIELD)
public static class Input {
private String address;
private long amount;
private boolean addressInWallet;
public Input() {
}
public Input(String address, long amount, boolean addressInWallet) {
this.address = address;
this.amount = amount;
this.addressInWallet = addressInWallet;
}
public String getAddress() {
return address;
}
public long getAmount() {
return amount;
}
public boolean getAddressInWallet() {
return addressInWallet;
}
}
@XmlAccessorType(XmlAccessType.FIELD)
public static class Output {
private String address;
private long amount;
private boolean addressInWallet;
public Output() {
}
public Output(String address, long amount, boolean addressInWallet) {
this.address = address;
this.amount = amount;
this.addressInWallet = addressInWallet;
}
public String getAddress() {
return address;
}
public long getAmount() {
return amount;
}
public boolean getAddressInWallet() {
return addressInWallet;
}
}
public SimpleTransaction() {
}
public SimpleTransaction(String txHash, Integer timestamp, long totalAmount, long feeAmount, List<Input> inputs, List<Output> outputs) {
this.txHash = txHash;
this.timestamp = timestamp;
this.totalAmount = totalAmount;
this.feeAmount = feeAmount;
this.inputs = inputs;
this.outputs = outputs;
}
public String getTxHash() {
return txHash;
}
public Integer getTimestamp() {
return timestamp;
}
public long getTotalAmount() {
return totalAmount;
}
public long getFeeAmount() {
return feeAmount;
}
public List<Input> getInputs() {
return this.inputs;
}
public List<Output> getOutputs() {
return this.outputs;
}
}


@@ -39,6 +39,20 @@ public enum SupportedBlockchain {
public ACCT getLatestAcct() {
return LitecoinACCTv1.getInstance();
}
},
DOGECOIN(Arrays.asList(
Triple.valueOf(DogecoinACCTv1.NAME, DogecoinACCTv1.CODE_BYTES_HASH, DogecoinACCTv1::getInstance)
)) {
@Override
public ForeignBlockchain getInstance() {
return Dogecoin.getInstance();
}
@Override
public ACCT getLatestAcct() {
return DogecoinACCTv1.getInstance();
}
};
private static final Map<ByteArray, Supplier<ACCT>> supportedAcctsByCodeHash = Arrays.stream(SupportedBlockchain.values())
@@ -110,4 +124,4 @@ public enum SupportedBlockchain {
return acctInstanceSupplier.get();
}
}
}


@@ -23,6 +23,7 @@ public class ATData {
private boolean isFrozen;
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
private Long frozenBalance;
private Long sleepUntilMessageTimestamp;
// Constructors
@@ -31,7 +32,8 @@ public class ATData {
}
public ATData(String ATAddress, byte[] creatorPublicKey, long creation, int version, long assetId, byte[] codeBytes, byte[] codeHash,
boolean isSleeping, Integer sleepUntilHeight, boolean isFinished, boolean hadFatalError, boolean isFrozen, Long frozenBalance) {
boolean isSleeping, Integer sleepUntilHeight, boolean isFinished, boolean hadFatalError, boolean isFrozen, Long frozenBalance,
Long sleepUntilMessageTimestamp) {
this.ATAddress = ATAddress;
this.creatorPublicKey = creatorPublicKey;
this.creation = creation;
@@ -45,6 +47,7 @@ public class ATData {
this.hadFatalError = hadFatalError;
this.isFrozen = isFrozen;
this.frozenBalance = frozenBalance;
this.sleepUntilMessageTimestamp = sleepUntilMessageTimestamp;
}
/** For constructing skeleton ATData with bare minimum info. */
@@ -133,4 +136,12 @@ public class ATData {
this.frozenBalance = frozenBalance;
}
public Long getSleepUntilMessageTimestamp() {
return this.sleepUntilMessageTimestamp;
}
public void setSleepUntilMessageTimestamp(Long sleepUntilMessageTimestamp) {
this.sleepUntilMessageTimestamp = sleepUntilMessageTimestamp;
}
}


@@ -10,35 +10,32 @@ public class ATStateData {
private Long fees;
private boolean isInitial;
// Qortal-AT-specific
private Long sleepUntilMessageTimestamp;
// Constructors
/** Create new ATStateData */
public ATStateData(String ATAddress, Integer height, byte[] stateData, byte[] stateHash, Long fees, boolean isInitial) {
public ATStateData(String ATAddress, Integer height, byte[] stateData, byte[] stateHash, Long fees,
boolean isInitial, Long sleepUntilMessageTimestamp) {
this.ATAddress = ATAddress;
this.height = height;
this.stateData = stateData;
this.stateHash = stateHash;
this.fees = fees;
this.isInitial = isInitial;
this.sleepUntilMessageTimestamp = sleepUntilMessageTimestamp;
}
/** For recreating per-block ATStateData from repository where not all info is needed */
public ATStateData(String ATAddress, int height, byte[] stateHash, Long fees, boolean isInitial) {
this(ATAddress, height, null, stateHash, fees, isInitial);
}
/** For creating ATStateData from serialized bytes when we don't have all the info */
public ATStateData(String ATAddress, byte[] stateHash) {
// This won't ever be initial AT state from deployment as that's never serialized over the network,
// but generated when the DeployAtTransaction is processed locally.
this(ATAddress, null, null, stateHash, null, false);
this(ATAddress, height, null, stateHash, fees, isInitial, null);
}
/** For creating ATStateData from serialized bytes when we don't have all the info */
public ATStateData(String ATAddress, byte[] stateHash, Long fees) {
// This won't ever be initial AT state from deployment as that's never serialized over the network,
// but generated when the DeployAtTransaction is processed locally.
this(ATAddress, null, null, stateHash, fees, false);
// This won't ever be initial AT state from deployment, as that's never serialized over the network.
this(ATAddress, null, null, stateHash, fees, false, null);
}
// Getters / setters
@@ -72,4 +69,12 @@ public class ATStateData {
return this.isInitial;
}
public Long getSleepUntilMessageTimestamp() {
return this.sleepUntilMessageTimestamp;
}
public void setSleepUntilMessageTimestamp(Long sleepUntilMessageTimestamp) {
this.sleepUntilMessageTimestamp = sleepUntilMessageTimestamp;
}
}


@@ -6,6 +6,9 @@ import javax.xml.bind.annotation.XmlTransient;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
import io.swagger.v3.oas.annotations.media.Schema;
import org.json.JSONObject;
import org.qortal.utils.Base58;
// All properties to be converted to JSON via JAXB
@XmlAccessorType(XmlAccessType.FIELD)
@@ -205,6 +208,58 @@ public class TradeBotData {
return this.receivingAccountInfo;
}
public JSONObject toJson() {
JSONObject jsonObject = new JSONObject();
jsonObject.put("tradePrivateKey", Base58.encode(this.getTradePrivateKey()));
jsonObject.put("acctName", this.getAcctName());
jsonObject.put("tradeState", this.getState());
jsonObject.put("tradeStateValue", this.getStateValue());
jsonObject.put("creatorAddress", this.getCreatorAddress());
jsonObject.put("atAddress", this.getAtAddress());
jsonObject.put("timestamp", this.getTimestamp());
jsonObject.put("qortAmount", this.getQortAmount());
if (this.getTradeNativePublicKey() != null) jsonObject.put("tradeNativePublicKey", Base58.encode(this.getTradeNativePublicKey()));
if (this.getTradeNativePublicKeyHash() != null) jsonObject.put("tradeNativePublicKeyHash", Base58.encode(this.getTradeNativePublicKeyHash()));
jsonObject.put("tradeNativeAddress", this.getTradeNativeAddress());
if (this.getSecret() != null) jsonObject.put("secret", Base58.encode(this.getSecret()));
if (this.getHashOfSecret() != null) jsonObject.put("hashOfSecret", Base58.encode(this.getHashOfSecret()));
jsonObject.put("foreignBlockchain", this.getForeignBlockchain());
if (this.getTradeForeignPublicKey() != null) jsonObject.put("tradeForeignPublicKey", Base58.encode(this.getTradeForeignPublicKey()));
if (this.getTradeForeignPublicKeyHash() != null) jsonObject.put("tradeForeignPublicKeyHash", Base58.encode(this.getTradeForeignPublicKeyHash()));
jsonObject.put("foreignKey", this.getForeignKey());
jsonObject.put("foreignAmount", this.getForeignAmount());
if (this.getLastTransactionSignature() != null) jsonObject.put("lastTransactionSignature", Base58.encode(this.getLastTransactionSignature()));
jsonObject.put("lockTimeA", this.getLockTimeA());
if (this.getReceivingAccountInfo() != null) jsonObject.put("receivingAccountInfo", Base58.encode(this.getReceivingAccountInfo()));
return jsonObject;
}
public static TradeBotData fromJson(JSONObject json) {
return new TradeBotData(
json.isNull("tradePrivateKey") ? null : Base58.decode(json.getString("tradePrivateKey")),
json.isNull("acctName") ? null : json.getString("acctName"),
json.isNull("tradeState") ? null : json.getString("tradeState"),
json.isNull("tradeStateValue") ? null : json.getInt("tradeStateValue"),
json.isNull("creatorAddress") ? null : json.getString("creatorAddress"),
json.isNull("atAddress") ? null : json.getString("atAddress"),
json.isNull("timestamp") ? null : json.getLong("timestamp"),
json.isNull("qortAmount") ? null : json.getLong("qortAmount"),
json.isNull("tradeNativePublicKey") ? null : Base58.decode(json.getString("tradeNativePublicKey")),
json.isNull("tradeNativePublicKeyHash") ? null : Base58.decode(json.getString("tradeNativePublicKeyHash")),
json.isNull("tradeNativeAddress") ? null : json.getString("tradeNativeAddress"),
json.isNull("secret") ? null : Base58.decode(json.getString("secret")),
json.isNull("hashOfSecret") ? null : Base58.decode(json.getString("hashOfSecret")),
json.isNull("foreignBlockchain") ? null : json.getString("foreignBlockchain"),
json.isNull("tradeForeignPublicKey") ? null : Base58.decode(json.getString("tradeForeignPublicKey")),
json.isNull("tradeForeignPublicKeyHash") ? null : Base58.decode(json.getString("tradeForeignPublicKeyHash")),
json.isNull("foreignAmount") ? null : json.getLong("foreignAmount"),
json.isNull("foreignKey") ? null : json.getString("foreignKey"),
json.isNull("lastTransactionSignature") ? null : Base58.decode(json.getString("lastTransactionSignature")),
json.isNull("lockTimeA") ? null : json.getInt("lockTimeA"),
json.isNull("receivingAccountInfo") ? null : Base58.decode(json.getString("receivingAccountInfo"))
);
}
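An illustrative round trip (not part of the diff) using only the two methods above; tradeBotData is assumed to have been loaded from the trade-bot repository.

// Export to JSON (key material Base58-encoded), then rebuild the record from that JSON.
JSONObject exported = tradeBotData.toJson();
TradeBotData imported = TradeBotData.fromJson(exported);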
// Mostly for debugging
public String toString() {
return String.format("%s: %s (%d)", this.atAddress, this.tradeState, this.tradeStateValue);


@@ -23,17 +23,21 @@ public class Gui {
private SysTray sysTray = null;
private Gui() {
this.isHeadless = GraphicsEnvironment.isHeadless();
try {
this.isHeadless = GraphicsEnvironment.isHeadless();
if (!this.isHeadless) {
try {
UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
} catch (ClassNotFoundException | InstantiationException | IllegalAccessException
| UnsupportedLookAndFeelException e) {
// Use whatever look-and-feel comes by default then
if (!this.isHeadless) {
try {
UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
} catch (ClassNotFoundException | InstantiationException | IllegalAccessException
| UnsupportedLookAndFeelException e) {
// Use whatever look-and-feel comes by default then
}
showSplash();
}
showSplash();
} catch (Exception e) {
LOGGER.info("Unable to initialize GUI: {}", e.getMessage());
}
}


@@ -1,15 +1,11 @@
package org.qortal.gui;
import java.awt.BorderLayout;
import java.awt.Image;
import java.awt.*;
import java.util.ArrayList;
import java.util.List;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.JDialog;
import javax.swing.JPanel;
import javax.swing.*;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -19,46 +15,53 @@ public class SplashFrame {
protected static final Logger LOGGER = LogManager.getLogger(SplashFrame.class);
private static SplashFrame instance;
private JDialog splashDialog;
private JFrame splashDialog;
@SuppressWarnings("serial")
public static class SplashPanel extends JPanel {
private BufferedImage image;
private String defaultSplash = "Qlogo_512.png";
public SplashPanel() {
image = Gui.loadImage("splash.png");
this.setPreferredSize(new Dimension(image.getWidth(), image.getHeight()));
this.setLayout(new BorderLayout());
image = Gui.loadImage(defaultSplash);
setOpaque(false);
setLayout(new GridBagLayout());
}
@Override
protected void paintComponent(Graphics g) {
super.paintComponent(g);
g.drawImage(image, 0, 0, null);
g.drawImage(image, 0, 0, getWidth(), getHeight(), this);
}
@Override
public Dimension getPreferredSize() {
return new Dimension(500, 500);
}
}
private SplashFrame() {
this.splashDialog = new JDialog();
this.splashDialog = new JFrame();
List<Image> icons = new ArrayList<>();
icons.add(Gui.loadImage("icons/icon16.png"));
icons.add(Gui.loadImage("icons/icon32.png"));
icons.add(Gui.loadImage("icons/qortal_ui_tray_synced.png"));
icons.add(Gui.loadImage("icons/qortal_ui_tray_syncing_time-alt.png"));
icons.add(Gui.loadImage("icons/qortal_ui_tray_minting.png"));
icons.add(Gui.loadImage("icons/qortal_ui_tray_syncing.png"));
icons.add(Gui.loadImage("icons/icon64.png"));
icons.add(Gui.loadImage("icons/icon128.png"));
icons.add(Gui.loadImage("icons/Qlogo_128.png"));
this.splashDialog.setIconImages(icons);
this.splashDialog.setDefaultCloseOperation(JDialog.DO_NOTHING_ON_CLOSE);
this.splashDialog.setTitle("qortal");
this.splashDialog.setContentPane(new SplashPanel());
this.splashDialog.getContentPane().add(new SplashPanel());
this.splashDialog.setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE);
this.splashDialog.setUndecorated(true);
this.splashDialog.setModal(false);
this.splashDialog.pack();
this.splashDialog.setLocationRelativeTo(null);
this.splashDialog.toFront();
this.splashDialog.setBackground(new Color(0,0,0,0));
this.splashDialog.setVisible(true);
this.splashDialog.repaint();
}
public static SplashFrame getInstance() {


@@ -61,7 +61,7 @@ public class SysTray {
this.popupMenu = createJPopupMenu();
// Build TrayIcon without AWT PopupMenu (which doesn't support Unicode)...
this.trayIcon = new TrayIcon(Gui.loadImage("icons/icon32.png"), "qortal", null);
this.trayIcon = new TrayIcon(Gui.loadImage("icons/qortal_ui_tray_synced.png"), "qortal", null);
// ...and attach mouse listener instead so we can use JPopupMenu (which does support Unicode)
this.trayIcon.addMouseListener(new MouseAdapter() {
@Override
@@ -289,6 +289,29 @@ public class SysTray {
this.trayIcon.setToolTip(text);
}
public void setTrayIcon(int iconid) {
if (trayIcon != null) {
try {
switch (iconid) {
case 1:
this.trayIcon.setImage(Gui.loadImage("icons/qortal_ui_tray_syncing_time-alt.png"));
break;
case 2:
this.trayIcon.setImage(Gui.loadImage("icons/qortal_ui_tray_minting.png"));
break;
case 3:
this.trayIcon.setImage(Gui.loadImage("icons/qortal_ui_tray_syncing.png"));
break;
case 4:
this.trayIcon.setImage(Gui.loadImage("icons/qortal_ui_tray_synced.png"));
break;
}
} catch (NullPointerException e) {
LOGGER.info("Unable to set tray icon");
}
}
}
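A hedged call-site sketch (not from the commit), assuming SysTray exposes a singleton getInstance() accessor like SplashFrame does.

// Hypothetical: switch the tray icon to the "minting" image (case 2 in the switch above).
SysTray.getInstance().setTrayIcon(2);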
public void dispose() {
if (trayIcon != null)
SystemTray.getSystemTray().remove(this.trayIcon);


@@ -0,0 +1,157 @@
package org.qortal.list;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.json.JSONArray;
import org.qortal.settings.Settings;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
public class ResourceList {
private static final Logger LOGGER = LogManager.getLogger(ResourceList.class);
private String category;
private String resourceName;
private List<String> list;
/**
* ResourceList
* Creates or updates a list for the purpose of tracking resources on the Qortal network
* This can be used for local blocking, or even for curating and sharing content lists
* Lists are backed off to JSON files (in the lists folder) to ease sharing between nodes and users
*
* @param category - for instance "blacklist", "whitelist", or "userlist"
* @param resourceName - for instance "address", "poll", or "group"
* @throws IOException
*/
public ResourceList(String category, String resourceName) throws IOException {
this.category = category;
this.resourceName = resourceName;
this.list = new ArrayList<>();
this.load();
}
/* Filesystem */
private Path getFilePath() {
String pathString = String.format("%s%s%s_%s.json", Settings.getInstance().getListsPath(),
File.separator, this.resourceName, this.category);
return Paths.get(pathString);
}
public void save() throws IOException {
if (this.resourceName == null) {
throw new IllegalStateException("Can't save list with missing resource name");
}
if (this.category == null) {
throw new IllegalStateException("Can't save list with missing category");
}
String jsonString = ResourceList.listToJSONString(this.list);
Path filePath = this.getFilePath();
// Create parent directory if needed
try {
Files.createDirectories(filePath.getParent());
} catch (IOException e) {
throw new IllegalStateException("Unable to create lists directory");
}
BufferedWriter writer = new BufferedWriter(new FileWriter(filePath.toString()));
writer.write(jsonString);
writer.close();
}
private boolean load() throws IOException {
Path path = this.getFilePath();
File resourceListFile = new File(path.toString());
if (!resourceListFile.exists()) {
return false;
}
try {
String jsonString = new String(Files.readAllBytes(path));
this.list = ResourceList.listFromJSONString(jsonString);
} catch (IOException e) {
throw new IOException(String.format("Couldn't read contents from file %s", path.toString()));
}
return true;
}
public boolean revert() {
try {
return this.load();
} catch (IOException e) {
LOGGER.info("Unable to revert {} {}", this.resourceName, this.category);
}
return false;
}
/* List management */
public void add(String resource) {
if (resource == null || this.list == null) {
return;
}
if (!this.contains(resource)) {
this.list.add(resource);
}
}
public void remove(String resource) {
if (resource == null || this.list == null) {
return;
}
this.list.remove(resource);
}
public boolean contains(String resource) {
if (resource == null || this.list == null) {
return false;
}
return this.list.contains(resource);
}
/* Utils */
public static String listToJSONString(List<String> list) {
if (list == null) {
return null;
}
JSONArray items = new JSONArray();
for (String item : list) {
items.put(item);
}
return items.toString(4);
}
private static List<String> listFromJSONString(String jsonString) {
if (jsonString == null) {
return null;
}
JSONArray jsonList = new JSONArray(jsonString);
List<String> resourceList = new ArrayList<>();
for (int i=0; i<jsonList.length(); i++) {
String item = (String)jsonList.get(i);
resourceList.add(item);
}
return resourceList;
}
public String getJSONString() {
return ResourceList.listToJSONString(this.list);
}
}
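A usage sketch of the class above (illustrative, not from the commit), assuming Settings provides a writable lists path; the address literal is made up and IOException handling is omitted.

ResourceList blacklist = new ResourceList("blacklist", "address"); // loads <listsPath>/address_blacklist.json if it exists
blacklist.add("QExampleAddressToBlock");                           // duplicates are ignored
blacklist.save();                                                  // writes the JSON array back to disk
String json = blacklist.getJSONString();                           // e.g. ["QExampleAddressToBlock"]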


@@ -0,0 +1,95 @@
package org.qortal.list;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import java.io.IOException;
import java.util.List;
public class ResourceListManager {
private static final Logger LOGGER = LogManager.getLogger(ResourceListManager.class);
private static ResourceListManager instance;
private ResourceList addressBlacklist;
public ResourceListManager() {
try {
this.addressBlacklist = new ResourceList("blacklist", "address");
} catch (IOException e) {
LOGGER.info("Error while loading address blacklist. Blocking is currently unavailable.");
}
}
public static synchronized ResourceListManager getInstance() {
if (instance == null) {
instance = new ResourceListManager();
}
return instance;
}
public boolean addAddressToBlacklist(String address, boolean save) {
try {
this.addressBlacklist.add(address);
if (save) {
this.addressBlacklist.save();
}
return true;
} catch (IllegalStateException | IOException e) {
LOGGER.info("Unable to add address to blacklist", e);
return false;
}
}
public boolean removeAddressFromBlacklist(String address, boolean save) {
try {
this.addressBlacklist.remove(address);
if (save) {
this.addressBlacklist.save();
}
return true;
} catch (IllegalStateException | IOException e) {
LOGGER.info("Unable to remove address from blacklist", e);
return false;
}
}
public boolean isAddressInBlacklist(String address) {
if (this.addressBlacklist == null) {
return false;
}
return this.addressBlacklist.contains(address);
}
public void saveBlacklist() {
if (this.addressBlacklist == null) {
return;
}
try {
this.addressBlacklist.save();
} catch (IOException e) {
LOGGER.info("Unable to save blacklist - reverting back to last saved state");
this.addressBlacklist.revert();
}
}
public void revertBlacklist() {
if (this.addressBlacklist == null) {
return;
}
this.addressBlacklist.revert();
}
public String getBlacklistJSONString() {
if (this.addressBlacklist == null) {
return null;
}
return this.addressBlacklist.getJSONString();
}
}
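A corresponding hedged sketch of how callers might use the manager (method names taken from the class above; the address is hypothetical).

ResourceListManager lists = ResourceListManager.getInstance();
lists.addAddressToBlacklist("QExampleAddressToBlock", true);            // add and persist in one call
boolean blocked = lists.isAddressInBlacklist("QExampleAddressToBlock"); // true once added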


@@ -4,7 +4,6 @@ import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -51,7 +50,7 @@ public enum Handshake {
String versionString = helloMessage.getVersionString();
Matcher matcher = VERSION_PATTERN.matcher(versionString);
Matcher matcher = peer.VERSION_PATTERN.matcher(versionString);
if (!matcher.lookingAt()) {
LOGGER.debug(() -> String.format("Peer %s sent invalid HELLO version string '%s'", peer, versionString));
return null;
@@ -72,6 +71,15 @@ public enum Handshake {
peer.setPeersConnectionTimestamp(peersConnectionTimestamp);
peer.setPeersVersion(versionString, version);
if (Settings.getInstance().getAllowConnectionsWithOlderPeerVersions() == false) {
// Ensure the peer is running at least the minimum version allowed for connections
final String minPeerVersion = Settings.getInstance().getMinPeerVersion();
if (peer.isAtLeastVersion(minPeerVersion) == false) {
LOGGER.debug(String.format("Ignoring peer %s because it is on an old version (%s)", peer, versionString));
return null;
}
}
return CHALLENGE;
}
@@ -244,8 +252,6 @@ public enum Handshake {
/** Maximum allowed difference between peer's reported timestamp and when they connected, in milliseconds. */
private static final long MAX_TIMESTAMP_DELTA = 30 * 1000L; // ms
private static final Pattern VERSION_PATTERN = Pattern.compile(Controller.VERSION_PREFIX + "(\\d{1,3})\\.(\\d{1,5})\\.(\\d{1,5})");
private static final long PEER_VERSION_131 = 0x0100030001L;
private static final int POW_BUFFER_SIZE_PRE_131 = 8 * 1024 * 1024; // bytes

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -6,6 +6,7 @@ import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.transform.Transformer;
@@ -25,7 +26,7 @@ public class GetOnlineAccountsMessage extends Message {
private GetOnlineAccountsMessage(int id, List<OnlineAccountData> onlineAccounts) {
super(id, MessageType.GET_ONLINE_ACCOUNTS);
this.onlineAccounts = onlineAccounts;
this.onlineAccounts = onlineAccounts.stream().limit(MAX_ACCOUNT_COUNT).collect(Collectors.toList());
}
public List<OnlineAccountData> getOnlineAccounts() {
@@ -35,12 +36,9 @@ public class GetOnlineAccountsMessage extends Message {
public static Message fromByteBuffer(int id, ByteBuffer bytes) throws UnsupportedEncodingException {
final int accountCount = bytes.getInt();
if (accountCount > MAX_ACCOUNT_COUNT)
return null;
List<OnlineAccountData> onlineAccounts = new ArrayList<>(accountCount);
for (int i = 0; i < accountCount; ++i) {
for (int i = 0; i < Math.min(MAX_ACCOUNT_COUNT, accountCount); ++i) {
long timestamp = bytes.getLong();
byte[] publicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];


@@ -6,6 +6,7 @@ import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.transform.Transformer;
@@ -25,7 +26,7 @@ public class OnlineAccountsMessage extends Message {
private OnlineAccountsMessage(int id, List<OnlineAccountData> onlineAccounts) {
super(id, MessageType.ONLINE_ACCOUNTS);
this.onlineAccounts = onlineAccounts;
this.onlineAccounts = onlineAccounts.stream().limit(MAX_ACCOUNT_COUNT).collect(Collectors.toList());
}
public List<OnlineAccountData> getOnlineAccounts() {
@@ -35,12 +36,9 @@ public class OnlineAccountsMessage extends Message {
public static Message fromByteBuffer(int id, ByteBuffer bytes) throws UnsupportedEncodingException {
final int accountCount = bytes.getInt();
if (accountCount > MAX_ACCOUNT_COUNT)
return null;
List<OnlineAccountData> onlineAccounts = new ArrayList<>(accountCount);
for (int i = 0; i < accountCount; ++i) {
for (int i = 0; i < Math.min(MAX_ACCOUNT_COUNT, accountCount); ++i) {
long timestamp = bytes.getLong();
byte[] signature = new byte[Transformer.SIGNATURE_LENGTH];


@@ -98,12 +98,12 @@ public interface ATRepository {
*/
public List<ATStateData> getMatchingFinalATStatesQuorum(byte[] codeHash, Boolean isFinished,
Integer dataByteOffset, Long expectedValue,
int minimumCount, long minimumPeriod) throws DataException;
int minimumCount, int maximumCount, long minimumPeriod) throws DataException;
/**
* Returns all ATStateData for a given block height.
* <p>
* Unlike <tt>getATState</tt>, only returns ATStateData saved at the given height.
* Unlike <tt>getATState</tt>, only returns <i>partial</i> ATStateData saved at the given height.
*
* @param height
* - block height
@@ -112,6 +112,11 @@ public interface ATRepository {
*/
public List<ATStateData> getBlockATStatesAtHeight(int height) throws DataException;
/** Rebuild the latest AT states cache, necessary for AT state trimming/pruning. */
public void rebuildLatestAtStates() throws DataException;
/** Returns height of first trimmable AT state. */
public int getAtTrimHeight() throws DataException;
@@ -121,12 +126,23 @@ public interface ATRepository {
*/
public void setAtTrimHeight(int trimHeight) throws DataException;
/** Hook to allow repository to prepare/cache info for AT state trimming. */
public void prepareForAtStateTrimming() throws DataException;
/** Trims full AT state data between passed heights. Returns number of trimmed rows. */
public int trimAtStates(int minHeight, int maxHeight, int limit) throws DataException;
/** Returns height of first prunable AT state. */
public int getAtPruneHeight() throws DataException;
/** Sets new base height for AT state pruning.
* <p>
* NOTE: performs implicit <tt>repository.saveChanges()</tt>.
*/
public void setAtPruneHeight(int pruneHeight) throws DataException;
/** Prunes full AT state data between passed heights. Returns number of pruned rows. */
public int pruneAtStates(int minHeight, int maxHeight) throws DataException;
/**
* Save ATStateData into repository.
* <p>


@@ -166,6 +166,20 @@ public interface BlockRepository {
*/
public BlockData getDetachedBlockSignature(int startHeight) throws DataException;
/** Returns height of first prunable block. */
public int getBlockPruneHeight() throws DataException;
/** Sets new base height for block pruning.
* <p>
* NOTE: performs implicit <tt>repository.saveChanges()</tt>.
*/
public void setBlockPruneHeight(int pruneHeight) throws DataException;
/** Prunes full block data between passed heights. Returns number of pruned rows. */
public int pruneBlocks(int minHeight, int maxHeight) throws DataException;
/**
* Saves block into repository.
*


@@ -49,7 +49,7 @@ public interface Repository extends AutoCloseable {
public void performPeriodicMaintenance() throws DataException;
public void exportNodeLocalData(boolean keepArchivedCopy) throws DataException;
public void exportNodeLocalData() throws DataException;
public void importDataFromFile(String filename) throws DataException;


@@ -1,8 +1,14 @@
package org.qortal.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.repository.hsqldb.HSQLDBDatabasePruning;
import org.qortal.settings.Settings;
import java.sql.SQLException;
public abstract class RepositoryManager {
private static final Logger LOGGER = LogManager.getLogger(RepositoryManager.class);
private static RepositoryFactory repositoryFactory = null;
@@ -51,6 +57,24 @@ public abstract class RepositoryManager {
}
}
public static void prune() {
// Bulk prune the database the first time we use pruning mode
if (Settings.getInstance().isPruningEnabled()) {
try {
boolean prunedATStates = HSQLDBDatabasePruning.pruneATStates();
boolean prunedBlocks = HSQLDBDatabasePruning.pruneBlocks();
// Perform repository maintenance to shrink the db size down
if (prunedATStates && prunedBlocks) {
HSQLDBDatabasePruning.performMaintenance();
}
} catch (SQLException | DataException e) {
LOGGER.info("Unable to bulk prune AT states. The database may have been left in an inconsistent state.");
}
}
}
public static void setRequestedCheckpoint(Boolean quick) {
quickCheckpointRequested = quick;
}


@@ -8,6 +8,7 @@ import java.util.Set;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.at.ATData;
import org.qortal.data.at.ATStateData;
import org.qortal.repository.ATRepository;
@@ -32,7 +33,7 @@ public class HSQLDBATRepository implements ATRepository {
public ATData fromATAddress(String atAddress) throws DataException {
String sql = "SELECT creator, created_when, version, asset_id, code_bytes, code_hash, "
+ "is_sleeping, sleep_until_height, is_finished, had_fatal_error, "
+ "is_frozen, frozen_balance "
+ "is_frozen, frozen_balance, sleep_until_message_timestamp "
+ "FROM ATs "
+ "WHERE AT_address = ? LIMIT 1";
@@ -60,8 +61,13 @@ public class HSQLDBATRepository implements ATRepository {
if (frozenBalance == 0 && resultSet.wasNull())
frozenBalance = null;
Long sleepUntilMessageTimestamp = resultSet.getLong(13);
if (sleepUntilMessageTimestamp == 0 && resultSet.wasNull())
sleepUntilMessageTimestamp = null;
return new ATData(atAddress, creatorPublicKey, created, version, assetId, codeBytes, codeHash,
isSleeping, sleepUntilHeight, isFinished, hadFatalError, isFrozen, frozenBalance);
isSleeping, sleepUntilHeight, isFinished, hadFatalError, isFrozen, frozenBalance,
sleepUntilMessageTimestamp);
} catch (SQLException e) {
throw new DataException("Unable to fetch AT from repository", e);
}
@@ -94,7 +100,7 @@ public class HSQLDBATRepository implements ATRepository {
public List<ATData> getAllExecutableATs() throws DataException {
String sql = "SELECT AT_address, creator, created_when, version, asset_id, code_bytes, code_hash, "
+ "is_sleeping, sleep_until_height, had_fatal_error, "
+ "is_frozen, frozen_balance "
+ "is_frozen, frozen_balance, sleep_until_message_timestamp "
+ "FROM ATs "
+ "WHERE is_finished = false "
+ "ORDER BY created_when ASC";
@@ -128,8 +134,13 @@ public class HSQLDBATRepository implements ATRepository {
if (frozenBalance == 0 && resultSet.wasNull())
frozenBalance = null;
Long sleepUntilMessageTimestamp = resultSet.getLong(13);
if (sleepUntilMessageTimestamp == 0 && resultSet.wasNull())
sleepUntilMessageTimestamp = null;
ATData atData = new ATData(atAddress, creatorPublicKey, created, version, assetId, codeBytes, codeHash,
isSleeping, sleepUntilHeight, isFinished, hadFatalError, isFrozen, frozenBalance);
isSleeping, sleepUntilHeight, isFinished, hadFatalError, isFrozen, frozenBalance,
sleepUntilMessageTimestamp);
executableATs.add(atData);
} while (resultSet.next());
@@ -147,7 +158,7 @@ public class HSQLDBATRepository implements ATRepository {
sql.append("SELECT AT_address, creator, created_when, version, asset_id, code_bytes, ")
.append("is_sleeping, sleep_until_height, is_finished, had_fatal_error, ")
.append("is_frozen, frozen_balance ")
.append("is_frozen, frozen_balance, sleep_until_message_timestamp ")
.append("FROM ATs ")
.append("WHERE code_hash = ? ");
bindParams.add(codeHash);
@@ -191,8 +202,13 @@ public class HSQLDBATRepository implements ATRepository {
if (frozenBalance == 0 && resultSet.wasNull())
frozenBalance = null;
Long sleepUntilMessageTimestamp = resultSet.getLong(13);
if (sleepUntilMessageTimestamp == 0 && resultSet.wasNull())
sleepUntilMessageTimestamp = null;
ATData atData = new ATData(atAddress, creatorPublicKey, created, version, assetId, codeBytes, codeHash,
isSleeping, sleepUntilHeight, isFinished, hadFatalError, isFrozen, frozenBalance);
isSleeping, sleepUntilHeight, isFinished, hadFatalError, isFrozen, frozenBalance,
sleepUntilMessageTimestamp);
matchingATs.add(atData);
} while (resultSet.next());
@@ -210,7 +226,7 @@ public class HSQLDBATRepository implements ATRepository {
sql.append("SELECT AT_address, creator, created_when, version, asset_id, code_bytes, ")
.append("is_sleeping, sleep_until_height, is_finished, had_fatal_error, ")
.append("is_frozen, frozen_balance, code_hash ")
.append("is_frozen, frozen_balance, code_hash, sleep_until_message_timestamp ")
.append("FROM ");
// (VALUES (?), (?), ...) AS ATCodeHashes (code_hash)
@@ -264,9 +280,10 @@ public class HSQLDBATRepository implements ATRepository {
frozenBalance = null;
byte[] codeHash = resultSet.getBytes(13);
Long sleepUntilMessageTimestamp = resultSet.getLong(14);
ATData atData = new ATData(atAddress, creatorPublicKey, created, version, assetId, codeBytes, codeHash,
isSleeping, sleepUntilHeight, isFinished, hadFatalError, isFrozen, frozenBalance);
isSleeping, sleepUntilHeight, isFinished, hadFatalError, isFrozen, frozenBalance, sleepUntilMessageTimestamp);
matchingATs.add(atData);
} while (resultSet.next());
@@ -305,7 +322,7 @@ public class HSQLDBATRepository implements ATRepository {
.bind("code_bytes", atData.getCodeBytes()).bind("code_hash", atData.getCodeHash())
.bind("is_sleeping", atData.getIsSleeping()).bind("sleep_until_height", atData.getSleepUntilHeight())
.bind("is_finished", atData.getIsFinished()).bind("had_fatal_error", atData.getHadFatalError()).bind("is_frozen", atData.getIsFrozen())
.bind("frozen_balance", atData.getFrozenBalance());
.bind("frozen_balance", atData.getFrozenBalance()).bind("sleep_until_message_timestamp", atData.getSleepUntilMessageTimestamp());
try {
saveHelper.execute(this.repository);
@@ -328,7 +345,7 @@ public class HSQLDBATRepository implements ATRepository {
@Override
public ATStateData getATStateAtHeight(String atAddress, int height) throws DataException {
String sql = "SELECT state_data, state_hash, fees, is_initial "
String sql = "SELECT state_data, state_hash, fees, is_initial, sleep_until_message_timestamp "
+ "FROM ATStates "
+ "LEFT OUTER JOIN ATStatesData USING (AT_address, height) "
+ "WHERE ATStates.AT_address = ? AND ATStates.height = ? "
@@ -343,7 +360,11 @@ public class HSQLDBATRepository implements ATRepository {
long fees = resultSet.getLong(3);
boolean isInitial = resultSet.getBoolean(4);
return new ATStateData(atAddress, height, stateData, stateHash, fees, isInitial);
Long sleepUntilMessageTimestamp = resultSet.getLong(5);
if (sleepUntilMessageTimestamp == 0 && resultSet.wasNull())
sleepUntilMessageTimestamp = null;
return new ATStateData(atAddress, height, stateData, stateHash, fees, isInitial, sleepUntilMessageTimestamp);
} catch (SQLException e) {
throw new DataException("Unable to fetch AT state from repository", e);
}
@@ -351,7 +372,7 @@ public class HSQLDBATRepository implements ATRepository {
@Override
public ATStateData getLatestATState(String atAddress) throws DataException {
String sql = "SELECT height, state_data, state_hash, fees, is_initial "
String sql = "SELECT height, state_data, state_hash, fees, is_initial, sleep_until_message_timestamp "
+ "FROM ATStates "
+ "JOIN ATStatesData USING (AT_address, height) "
+ "WHERE ATStates.AT_address = ? "
@@ -370,7 +391,11 @@ public class HSQLDBATRepository implements ATRepository {
long fees = resultSet.getLong(4);
boolean isInitial = resultSet.getBoolean(5);
return new ATStateData(atAddress, height, stateData, stateHash, fees, isInitial);
Long sleepUntilMessageTimestamp = resultSet.getLong(6);
if (sleepUntilMessageTimestamp == 0 && resultSet.wasNull())
sleepUntilMessageTimestamp = null;
return new ATStateData(atAddress, height, stateData, stateHash, fees, isInitial, sleepUntilMessageTimestamp);
} catch (SQLException e) {
throw new DataException("Unable to fetch latest AT state from repository", e);
}
@@ -383,10 +408,10 @@ public class HSQLDBATRepository implements ATRepository {
StringBuilder sql = new StringBuilder(1024);
List<Object> bindParams = new ArrayList<>();
sql.append("SELECT AT_address, height, state_data, state_hash, fees, is_initial "
sql.append("SELECT AT_address, height, state_data, state_hash, fees, is_initial, FinalATStates.sleep_until_message_timestamp "
+ "FROM ATs "
+ "CROSS JOIN LATERAL("
+ "SELECT height, state_data, state_hash, fees, is_initial "
+ "SELECT height, state_data, state_hash, fees, is_initial, sleep_until_message_timestamp "
+ "FROM ATStates "
+ "JOIN ATStatesData USING (AT_address, height) "
+ "WHERE ATStates.AT_address = ATs.AT_address ");
@@ -440,7 +465,11 @@ public class HSQLDBATRepository implements ATRepository {
long fees = resultSet.getLong(5);
boolean isInitial = resultSet.getBoolean(6);
ATStateData atStateData = new ATStateData(atAddress, height, stateData, stateHash, fees, isInitial);
Long sleepUntilMessageTimestamp = resultSet.getLong(7);
if (sleepUntilMessageTimestamp == 0 && resultSet.wasNull())
sleepUntilMessageTimestamp = null;
ATStateData atStateData = new ATStateData(atAddress, height, stateData, stateHash, fees, isInitial, sleepUntilMessageTimestamp);
atStates.add(atStateData);
} while (resultSet.next());
@@ -454,7 +483,7 @@ public class HSQLDBATRepository implements ATRepository {
@Override
public List<ATStateData> getMatchingFinalATStatesQuorum(byte[] codeHash, Boolean isFinished,
Integer dataByteOffset, Long expectedValue,
int minimumCount, long minimumPeriod) throws DataException {
int minimumCount, int maximumCount, long minimumPeriod) throws DataException {
// We need most recent entry first so we can use its timestamp to slice further results
List<ATStateData> mostRecentStates = this.getMatchingFinalATStates(codeHash, isFinished,
dataByteOffset, expectedValue, null,
@@ -471,7 +500,7 @@ public class HSQLDBATRepository implements ATRepository {
StringBuilder sql = new StringBuilder(1024);
List<Object> bindParams = new ArrayList<>();
sql.append("SELECT AT_address, height, state_data, state_hash, fees, is_initial "
sql.append("SELECT AT_address, height, state_data, state_hash, fees, is_initial, sleep_until_message_timestamp "
+ "FROM ATs "
+ "CROSS JOIN LATERAL("
+ "SELECT height, state_data, state_hash, fees, is_initial "
@@ -510,7 +539,8 @@ public class HSQLDBATRepository implements ATRepository {
bindParams.add(minimumHeight);
bindParams.add(minimumCount);
sql.append("ORDER BY FinalATStates.height DESC");
sql.append("ORDER BY FinalATStates.height DESC LIMIT ?");
bindParams.add(maximumCount);
List<ATStateData> atStates = new ArrayList<>();
@@ -525,8 +555,10 @@ public class HSQLDBATRepository implements ATRepository {
byte[] stateHash = resultSet.getBytes(4);
long fees = resultSet.getLong(5);
boolean isInitial = resultSet.getBoolean(6);
Long sleepUntilMessageTimestamp = resultSet.getLong(7);
ATStateData atStateData = new ATStateData(atAddress, height, stateData, stateHash, fees, isInitial);
ATStateData atStateData = new ATStateData(atAddress, height, stateData, stateHash, fees, isInitial,
sleepUntilMessageTimestamp);
atStates.add(atStateData);
} while (resultSet.next());
@@ -541,9 +573,9 @@ public class HSQLDBATRepository implements ATRepository {
public List<ATStateData> getBlockATStatesAtHeight(int height) throws DataException {
String sql = "SELECT AT_address, state_hash, fees, is_initial "
+ "FROM ATs "
+ "LEFT OUTER JOIN ATStates "
+ "ON ATStates.AT_address = ATs.AT_address AND height = ? "
+ "WHERE ATStates.AT_address IS NOT NULL "
+ "JOIN ATStates "
+ "ON ATStates.AT_address = ATs.AT_address "
+ "WHERE height = ? "
+ "ORDER BY created_when ASC";
List<ATStateData> atStates = new ArrayList<>();
@@ -569,6 +601,35 @@ public class HSQLDBATRepository implements ATRepository {
return atStates;
}
@Override
public void rebuildLatestAtStates() throws DataException {
// Rebuild cache of latest AT states that we can't trim
String deleteSql = "DELETE FROM LatestATStates";
try {
this.repository.executeCheckedUpdate(deleteSql);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to delete temporary latest AT states cache from repository", e);
}
String insertSql = "INSERT INTO LatestATStates ("
+ "SELECT AT_address, height FROM ATs "
+ "CROSS JOIN LATERAL("
+ "SELECT height FROM ATStates "
+ "WHERE ATStates.AT_address = ATs.AT_address "
+ "ORDER BY AT_address DESC, height DESC LIMIT 1"
+ ") "
+ ")";
try {
this.repository.executeCheckedUpdate(insertSql);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to populate temporary latest AT states cache in repository", e);
}
}
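The added rebuildLatestAtStates() carries the same body as the prepareForAtStateTrimming() removed further down, so callers that previously prepared for trimming would presumably call it before any trim or prune pass. A minimal usage sketch (hypothetical class name; the real trimmer and pruner classes are not shown in this diff):

import java.sql.SQLException;

import org.qortal.repository.DataException;
import org.qortal.repository.RepositoryManager;
import org.qortal.repository.hsqldb.HSQLDBRepository;

public class RebuildLatestAtStatesSketch {

    /** Refresh the LatestATStates cache so trimming/pruning can tell which AT states must be kept. */
    public static void refresh() throws DataException, SQLException {
        try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
            repository.getATRepository().rebuildLatestAtStates();
            repository.saveChanges();
        }
    }

}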
@Override
public int getAtTrimHeight() throws DataException {
String sql = "SELECT AT_trim_height FROM DatabaseInfo";
@@ -600,33 +661,6 @@ public class HSQLDBATRepository implements ATRepository {
}
}
@Override
public void prepareForAtStateTrimming() throws DataException {
// Rebuild cache of latest AT states that we can't trim
String deleteSql = "DELETE FROM LatestATStates";
try {
this.repository.executeCheckedUpdate(deleteSql);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to delete temporary latest AT states cache from repository", e);
}
String insertSql = "INSERT INTO LatestATStates ("
+ "SELECT AT_address, height FROM ATs "
+ "CROSS JOIN LATERAL("
+ "SELECT height FROM ATStates "
+ "WHERE ATStates.AT_address = ATs.AT_address "
+ "ORDER BY AT_address DESC, height DESC LIMIT 1"
+ ") "
+ ")";
try {
this.repository.executeCheckedUpdate(insertSql);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to populate temporary latest AT states cache in repository", e);
}
}
@Override
public int trimAtStates(int minHeight, int maxHeight, int limit) throws DataException {
if (minHeight >= maxHeight)
@@ -651,6 +685,94 @@ public class HSQLDBATRepository implements ATRepository {
}
}
@Override
public int getAtPruneHeight() throws DataException {
String sql = "SELECT AT_prune_height FROM DatabaseInfo";
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
if (resultSet == null)
return 0;
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Unable to fetch AT state prune height from repository", e);
}
}
@Override
public void setAtPruneHeight(int pruneHeight) throws DataException {
// trimHeightsLock is to prevent concurrent update on DatabaseInfo
// that could result in "transaction rollback: serialization failure"
synchronized (this.repository.trimHeightsLock) {
String updateSql = "UPDATE DatabaseInfo SET AT_prune_height = ?";
try {
this.repository.executeCheckedUpdate(updateSql, pruneHeight);
this.repository.saveChanges();
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to set AT state prune height in repository", e);
}
}
}
@Override
public int pruneAtStates(int minHeight, int maxHeight) throws DataException {
int deletedCount = 0;
for (int height=minHeight; height<maxHeight; height++) {
// Give up if we're stopping
if (Controller.isStopping()) {
return deletedCount;
}
// Get latest AT states for this height
List<String> atAddresses = new ArrayList<>();
String updateSql = "SELECT AT_address FROM LatestATStates WHERE height = ?";
try (ResultSet resultSet = this.repository.checkedExecute(updateSql, height)) {
if (resultSet != null) {
do {
String atAddress = resultSet.getString(1);
atAddresses.add(atAddress);
} while (resultSet.next());
}
} catch (SQLException e) {
throw new DataException("Unable to fetch flagged accounts from repository", e);
}
List<ATStateData> atStates = this.getBlockATStatesAtHeight(height);
for (ATStateData atState : atStates) {
//LOGGER.info("Found atState {} at height {}", atState.getATAddress(), atState.getHeight());
// Give up if we're stopping
if (Controller.isStopping()) {
return deletedCount;
}
if (atAddresses.contains(atState.getATAddress())) {
// We don't want to delete this AT state because it is still active
LOGGER.info("Skipping atState {} at height {}", atState.getATAddress(), atState.getHeight());
continue;
}
// Safe to delete everything else for this height
try {
this.repository.delete("ATStates", "AT_address = ? AND height = ?",
atState.getATAddress(), atState.getHeight());
deletedCount++;
} catch (SQLException e) {
throw new DataException("Unable to delete AT state data from repository", e);
}
}
}
return deletedCount;
}
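Not part of this file, but as a sketch of how the new prune methods fit together: a background pass would bound its work by the batch-size setting, refresh LatestATStates so active ATs are skipped, then bump the recorded prune height once a range is exhausted. The class name below is hypothetical; the actual background pruner is not shown in this diff.

import java.sql.SQLException;

import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.RepositoryManager;
import org.qortal.repository.hsqldb.HSQLDBRepository;
import org.qortal.settings.Settings;

public class AtStatesPrunePassSketch {

    /** Perform one bounded AT-state pruning pass, advancing the prune height when a range is exhausted. */
    public static void runOnePass() throws DataException, SQLException {
        try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
            BlockData chainTip = repository.getBlockRepository().getLastBlock();
            if (chainTip == null)
                return; // can't prune without knowing the chain height

            final int pruneStartHeight = repository.getATRepository().getAtPruneHeight();
            final int upperPrunableHeight = chainTip.getHeight() - Settings.getInstance().getPruneBlockLimit();
            final int upperBatchHeight = pruneStartHeight + Settings.getInstance().getAtStatesPruneBatchSize();
            final int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
            if (pruneStartHeight >= upperPruneHeight)
                return; // nothing prunable yet

            // Make sure the LatestATStates cache is up to date so still-active ATs are skipped
            repository.getATRepository().rebuildLatestAtStates();

            int numPruned = repository.getATRepository().pruneAtStates(pruneStartHeight, upperPruneHeight);
            repository.saveChanges();

            if (numPruned == 0 && upperPrunableHeight > upperBatchHeight) {
                // Range exhausted - bump the base prune height so the next pass starts higher
                repository.getATRepository().setAtPruneHeight(upperBatchHeight);
                repository.saveChanges();
            }
        }
    }

}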
@Override
public void save(ATStateData atStateData) throws DataException {
// We shouldn't ever save partial ATStateData
@@ -661,7 +783,8 @@ public class HSQLDBATRepository implements ATRepository {
atStatesSaver.bind("AT_address", atStateData.getATAddress()).bind("height", atStateData.getHeight())
.bind("state_hash", atStateData.getStateHash())
.bind("fees", atStateData.getFees()).bind("is_initial", atStateData.isInitial());
.bind("fees", atStateData.getFees()).bind("is_initial", atStateData.isInitial())
.bind("sleep_until_message_timestamp", atStateData.getSleepUntilMessageTimestamp());
try {
atStatesSaver.execute(this.repository);

View File

@@ -509,6 +509,53 @@ public class HSQLDBBlockRepository implements BlockRepository {
}
}
@Override
public int getBlockPruneHeight() throws DataException {
String sql = "SELECT block_prune_height FROM DatabaseInfo";
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
if (resultSet == null)
return 0;
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Unable to fetch block prune height from repository", e);
}
}
@Override
public void setBlockPruneHeight(int pruneHeight) throws DataException {
// trimHeightsLock is to prevent concurrent update on DatabaseInfo
// that could result in "transaction rollback: serialization failure"
synchronized (this.repository.trimHeightsLock) {
String updateSql = "UPDATE DatabaseInfo SET block_prune_height = ?";
try {
this.repository.executeCheckedUpdate(updateSql, pruneHeight);
this.repository.saveChanges();
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to set block prune height in repository", e);
}
}
}
@Override
public int pruneBlocks(int minHeight, int maxHeight) throws DataException {
// Don't prune the genesis block
if (minHeight <= 1) {
minHeight = 2;
}
try {
return this.repository.delete("Blocks", "height BETWEEN ? AND ?", minHeight, maxHeight);
} catch (SQLException e) {
throw new DataException("Unable to prune blocks from repository", e);
}
}
@Override
public BlockData getDetachedBlockSignature(int startHeight) throws DataException {
String sql = "SELECT " + BLOCK_DB_COLUMNS + " FROM Blocks "

View File

@@ -0,0 +1,282 @@
package org.qortal.repository.hsqldb;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import java.sql.ResultSet;
import java.sql.SQLException;
/**
*
* When switching from a full node to a pruning node, we need to delete most of the database contents.
* If we do this entirely as a background process, it is very slow and can interfere with syncing.
* However, if we take the approach of transferring only the necessary rows to a new table and then
* deleting the original table, this makes the process much faster. It was taking several days to
* delete the AT states in the background, but only a couple of minutes to copy them to a new table.
*
* The trade-off is that we have to go through a form of "reshape" when starting the app for the first
* time after enabling pruning mode. But given that this is an opt-in mode, I don't think it will be
* a problem.
*
* Once the pruning is complete, it automatically performs a CHECKPOINT DEFRAG in order to
* shrink the database file size down to a fraction of what it was before.
*
* From this point, the original background process will run, but can be dialled right down so as not
* to interfere with syncing.
*
*/
public class HSQLDBDatabasePruning {
private static final Logger LOGGER = LogManager.getLogger(HSQLDBDatabasePruning.class);
public static boolean pruneATStates() throws SQLException, DataException {
try (final HSQLDBRepository repository = (HSQLDBRepository)RepositoryManager.getRepository()) {
// Only bulk prune AT states if we have never done so before
int pruneHeight = repository.getATRepository().getAtPruneHeight();
if (pruneHeight > 0) {
// Already pruned AT states
return false;
}
LOGGER.info("Starting bulk prune of AT states - this process could take a while... (approx. 2 mins on high spec)");
// Create new AT-states table to hold smaller dataset
repository.executeCheckedUpdate("DROP TABLE IF EXISTS ATStatesNew");
repository.executeCheckedUpdate("CREATE TABLE ATStatesNew ("
+ "AT_address QortalAddress, height INTEGER NOT NULL, state_hash ATStateHash NOT NULL, "
+ "fees QortalAmount NOT NULL, is_initial BOOLEAN NOT NULL, sleep_until_message_timestamp BIGINT, "
+ "PRIMARY KEY (AT_address, height), "
+ "FOREIGN KEY (AT_address) REFERENCES ATs (AT_address) ON DELETE CASCADE)");
repository.executeCheckedUpdate("SET TABLE ATStatesNew NEW SPACE");
repository.executeCheckedUpdate("CHECKPOINT");
// Find our latest block
BlockData latestBlock = repository.getBlockRepository().getLastBlock();
if (latestBlock == null) {
LOGGER.info("Unable to determine blockchain height, necessary for bulk block pruning");
return false;
}
// Calculate some constants for later use
final int blockchainHeight = latestBlock.getHeight();
final int maximumBlockToTrim = blockchainHeight - Settings.getInstance().getPruneBlockLimit();
final int startHeight = maximumBlockToTrim;
final int endHeight = blockchainHeight;
final int blockStep = 10000;
// Loop through all the LatestATStates and copy them to the new table
LOGGER.info("Copying AT states...");
for (int height = 0; height < endHeight; height += blockStep) {
//LOGGER.info(String.format("Copying AT states between %d and %d...", height, height + blockStep - 1));
String sql = "SELECT height, AT_address FROM LatestATStates WHERE height BETWEEN ? AND ?";
try (ResultSet latestAtStatesResultSet = repository.checkedExecute(sql, height, height + blockStep - 1)) {
if (latestAtStatesResultSet != null) {
do {
int latestAtHeight = latestAtStatesResultSet.getInt(1);
String latestAtAddress = latestAtStatesResultSet.getString(2);
// Copy this latest ATState to the new table
//LOGGER.info(String.format("Copying AT %s at height %d...", latestAtAddress, latestAtHeight));
try {
String updateSql = "INSERT INTO ATStatesNew ("
+ "SELECT AT_address, height, state_hash, fees, is_initial, sleep_until_message_timestamp "
+ "FROM ATStates "
+ "WHERE height = ? AND AT_address = ?)";
repository.executeCheckedUpdate(updateSql, latestAtHeight, latestAtAddress);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to copy ATStates", e);
}
if (height >= startHeight) {
// Now copy this AT's states for each recent block it is present in
for (int i = startHeight; i < endHeight; i++) {
if (latestAtHeight < i) {
// This AT finished before this block so there is nothing to copy
continue;
}
//LOGGER.info(String.format("Copying recent AT %s at height %d...", latestAtAddress, i));
try {
// Copy each LatestATState to the new table
String updateSql = "INSERT IGNORE INTO ATStatesNew ("
+ "SELECT AT_address, height, state_hash, fees, is_initial, sleep_until_message_timestamp "
+ "FROM ATStates "
+ "WHERE height = ? AND AT_address = ?)";
repository.executeCheckedUpdate(updateSql, i, latestAtAddress);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to copy ATStates", e);
}
}
}
} while (latestAtStatesResultSet.next());
}
} catch (SQLException e) {
throw new DataException("Unable to copy AT states", e);
}
}
repository.saveChanges();
// Add a height index
LOGGER.info("Rebuilding AT states height index in repository");
repository.executeCheckedUpdate("CREATE INDEX IF NOT EXISTS ATStatesHeightIndex ON ATStatesNew (height)");
repository.executeCheckedUpdate("CHECKPOINT");
// Finally, drop the original table and rename
LOGGER.info("Deleting old AT states...");
repository.executeCheckedUpdate("DROP TABLE ATStates");
repository.executeCheckedUpdate("ALTER TABLE ATStatesNew RENAME TO ATStates");
repository.executeCheckedUpdate("CHECKPOINT");
// Update the prune height
repository.getATRepository().setAtPruneHeight(maximumBlockToTrim);
repository.saveChanges();
repository.executeCheckedUpdate("CHECKPOINT");
// Now prune/trim the ATStatesData, as this currently goes back over a month
return HSQLDBDatabasePruning.pruneATStateData();
}
}
/*
* Bulk prune ATStatesData to catch up with the now pruned ATStates table
* This uses the existing AT States trimming code but with a much higher end block
*/
private static boolean pruneATStateData() throws SQLException, DataException {
try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
BlockData latestBlock = repository.getBlockRepository().getLastBlock();
if (latestBlock == null) {
LOGGER.info("Unable to determine blockchain height, necessary for bulk ATStatesData pruning");
return false;
}
final int blockchainHeight = latestBlock.getHeight();
final int upperPrunableHeight = blockchainHeight - Settings.getInstance().getPruneBlockLimit();
// ATStateData is already trimmed - so carry on from where we left off in the past
int pruneStartHeight = repository.getATRepository().getAtTrimHeight();
LOGGER.info("Starting bulk prune of AT states data - this process could take a while... (approx. 3 mins on high spec)");
while (pruneStartHeight < upperPrunableHeight) {
// Prune all AT state data up until our latest minus pruneBlockLimit
if (Controller.isStopping()) {
return false;
}
// Override batch size in the settings because this is a one-off process
final int batchSize = 1000;
final int rowLimitPerBatch = 50000;
int upperBatchHeight = pruneStartHeight + batchSize;
int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
LOGGER.trace(String.format("Pruning AT states data between %d and %d...", pruneStartHeight, upperPruneHeight));
int numATStatesPruned = repository.getATRepository().trimAtStates(pruneStartHeight, upperPruneHeight, rowLimitPerBatch);
repository.saveChanges();
if (numATStatesPruned > 0) {
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.trace(() -> String.format("Pruned %d AT states data rows between blocks %d and %d",
numATStatesPruned, finalPruneStartHeight, upperPruneHeight));
} else {
// Can we move onto next batch?
if (upperPrunableHeight > upperBatchHeight) {
pruneStartHeight = upperBatchHeight;
repository.getATRepository().setAtTrimHeight(pruneStartHeight);
// No need to rebuild the latest AT states as we aren't currently synchronizing
repository.saveChanges();
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.debug(() -> String.format("Bumping AT states trim height to %d", finalPruneStartHeight));
}
else {
// We've finished pruning
break;
}
}
}
return true;
}
}
public static boolean pruneBlocks() throws SQLException, DataException {
try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
// Only bulk prune blocks if we have never done so before
int pruneHeight = repository.getBlockRepository().getBlockPruneHeight();
if (pruneHeight > 0) {
// Already pruned blocks
return false;
}
BlockData latestBlock = repository.getBlockRepository().getLastBlock();
if (latestBlock == null) {
LOGGER.info("Unable to determine blockchain height, necessary for bulk block pruning");
return false;
}
final int blockchainHeight = latestBlock.getHeight();
final int upperPrunableHeight = blockchainHeight - Settings.getInstance().getPruneBlockLimit();
int pruneStartHeight = 0;
LOGGER.info("Starting bulk prune of blocks - this process could take a while... (approx. 5 mins on high spec)");
while (pruneStartHeight < upperPrunableHeight) {
// Prune all blocks up until our latest minus pruneBlockLimit
int upperBatchHeight = pruneStartHeight + Settings.getInstance().getBlockPruneBatchSize();
int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
LOGGER.info(String.format("Pruning blocks between %d and %d...", pruneStartHeight, upperPruneHeight));
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(pruneStartHeight, upperPruneHeight);
repository.saveChanges();
if (numBlocksPruned > 0) {
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.info(() -> String.format("Pruned %d block%s between %d and %d",
numBlocksPruned, (numBlocksPruned != 1 ? "s" : ""),
finalPruneStartHeight, upperPruneHeight));
} else {
// Can we move onto next batch?
if (upperPrunableHeight > upperBatchHeight) {
pruneStartHeight = upperBatchHeight;
repository.getBlockRepository().setBlockPruneHeight(pruneStartHeight);
repository.saveChanges();
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.debug(() -> String.format("Bumping block base prune height to %d", finalPruneStartHeight));
}
else {
// We've finished pruning
break;
}
}
}
return true;
}
}
public static void performMaintenance() throws SQLException, DataException {
try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
repository.performPeriodicMaintenance();
}
}
}
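A minimal sketch, assuming a hypothetical startup hook, of how these one-off bulk prunes might be invoked when pruning mode is enabled; the actual call site is not included in this diff.

import java.sql.SQLException;

import org.qortal.repository.DataException;
import org.qortal.repository.hsqldb.HSQLDBDatabasePruning;
import org.qortal.settings.Settings;

public class BulkPruneOnStartupSketch {

    public static void bulkPruneIfNeeded() throws DataException, SQLException {
        if (!Settings.getInstance().isPruningEnabled())
            return; // full nodes keep everything

        // Both calls are no-ops once a prune height has been recorded in DatabaseInfo
        boolean prunedAtStates = HSQLDBDatabasePruning.pruneATStates();
        boolean prunedBlocks = HSQLDBDatabasePruning.pruneBlocks();

        if (prunedAtStates || prunedBlocks)
            // Reclaim the space freed by the bulk prune (see performMaintenance() above)
            HSQLDBDatabasePruning.performMaintenance();
    }

}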

View File

@@ -699,7 +699,7 @@ public class HSQLDBDatabaseUpdates {
stmt.execute("CHECKPOINT");
break;
case 30:
case 30: {
// Split AT state data off to new table for better performance/management.
if (!wasPristine && !"mem".equals(HSQLDBRepository.getDbPathname(connection.getMetaData().getURL()))) {
@@ -774,6 +774,7 @@ public class HSQLDBDatabaseUpdates {
stmt.execute("ALTER TABLE ATStatesNew RENAME TO ATStates");
stmt.execute("CHECKPOINT");
break;
}
case 31:
// Fix latest AT state cache which was previously created as TEMPORARY
@@ -822,6 +823,56 @@ public class HSQLDBDatabaseUpdates {
+ "timestamp_signature Signature NOT NULL, " + TRANSACTION_KEYS + ")");
break;
case 34: {
// AT sleep-until-message support
LOGGER.info("Altering AT table in repository - this might take a while... (approx. 20 seconds on high-spec)");
stmt.execute("ALTER TABLE ATs ADD sleep_until_message_timestamp BIGINT");
// Create new AT-states table with new column
stmt.execute("CREATE TABLE ATStatesNew ("
+ "AT_address QortalAddress, height INTEGER NOT NULL, state_hash ATStateHash NOT NULL, "
+ "fees QortalAmount NOT NULL, is_initial BOOLEAN NOT NULL, sleep_until_message_timestamp BIGINT, "
+ "PRIMARY KEY (AT_address, height), "
+ "FOREIGN KEY (AT_address) REFERENCES ATs (AT_address) ON DELETE CASCADE)");
stmt.execute("SET TABLE ATStatesNew NEW SPACE");
stmt.execute("CHECKPOINT");
// Add the height index
LOGGER.info("Adding index to AT states table...");
stmt.execute("CREATE INDEX ATStatesNewHeightIndex ON ATStatesNew (height)");
stmt.execute("CHECKPOINT");
ResultSet resultSet = stmt.executeQuery("SELECT height FROM Blocks ORDER BY height DESC LIMIT 1");
final int blockchainHeight = resultSet.next() ? resultSet.getInt(1) : 0;
final int heightStep = 100;
LOGGER.info("Altering AT states table in repository - this might take a while... (approx. 3 mins on high-spec)");
for (int minHeight = 1; minHeight < blockchainHeight; minHeight += heightStep) {
stmt.execute("INSERT INTO ATStatesNew ("
+ "SELECT AT_address, height, state_hash, fees, is_initial, NULL "
+ "FROM ATStates "
+ "WHERE height BETWEEN " + minHeight + " AND " + (minHeight + heightStep - 1)
+ ")");
stmt.execute("COMMIT");
int processed = Math.min(minHeight + heightStep - 1, blockchainHeight);
double percentage = (double)processed / (double)blockchainHeight * 100.0f;
LOGGER.info(String.format("Processed %d of %d blocks (%.1f%%)", processed, blockchainHeight, percentage));
}
stmt.execute("CHECKPOINT");
stmt.execute("DROP TABLE ATStates");
stmt.execute("ALTER TABLE ATStatesNew RENAME TO ATStates");
stmt.execute("ALTER INDEX ATStatesNewHeightIndex RENAME TO ATStatesHeightIndex");
stmt.execute("CHECKPOINT");
break;
}
case 35:
// Support for pruning
stmt.execute("ALTER TABLE DatabaseInfo ADD AT_prune_height INT NOT NULL DEFAULT 0");
stmt.execute("ALTER TABLE DatabaseInfo ADD block_prune_height INT NOT NULL DEFAULT 0");
break;
default:
// nothing to do
return false;

View File

@@ -2,6 +2,7 @@ package org.qortal.repository.hsqldb;
import java.awt.TrayIcon.MessageType;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.math.BigDecimal;
import java.nio.file.Files;
@@ -15,23 +16,19 @@ import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Savepoint;
import java.sql.Statement;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Comparator;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;
import org.json.JSONArray;
import org.json.JSONObject;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.crypto.Crypto;
import org.qortal.data.crosschain.TradeBotData;
import org.qortal.globalization.Translator;
import org.qortal.gui.SysTray;
import org.qortal.repository.ATRepository;
@@ -52,13 +49,13 @@ import org.qortal.repository.TransactionRepository;
import org.qortal.repository.VotingRepository;
import org.qortal.repository.hsqldb.transaction.HSQLDBTransactionRepository;
import org.qortal.settings.Settings;
import org.qortal.utils.NTP;
import org.qortal.utils.Base58;
public class HSQLDBRepository implements Repository {
private static final Logger LOGGER = LogManager.getLogger(HSQLDBRepository.class);
private static final Object CHECKPOINT_LOCK = new Object();
public static final Object CHECKPOINT_LOCK = new Object();
// "serialization failure"
private static final Integer DEADLOCK_ERROR_CODE = Integer.valueOf(-4861);
@@ -460,8 +457,7 @@ public class HSQLDBRepository implements Repository {
}
@Override
public void exportNodeLocalData(boolean keepArchivedCopy) throws DataException {
public void exportNodeLocalData() throws DataException {
// Create the qortal-backup folder if it doesn't exist
Path backupPath = Paths.get("qortal-backup");
try {
@@ -471,52 +467,59 @@ public class HSQLDBRepository implements Repository {
throw new DataException("Unable to create backup folder");
}
// We need to rename or delete an existing TradeBotStates backup before creating a new one
File tradeBotStatesBackupFile = new File("qortal-backup/TradeBotStates.script");
if (tradeBotStatesBackupFile.exists()) {
if (keepArchivedCopy) {
// Rename existing TradeBotStates backup, to make sure that we're not overwriting any keys
File archivedBackupFile = new File(String.format("qortal-backup/TradeBotStates-archive-%d.script", NTP.getTime()));
if (tradeBotStatesBackupFile.renameTo(archivedBackupFile))
LOGGER.info(String.format("Moved existing TradeBotStates backup file to %s", archivedBackupFile.getPath()));
else
throw new DataException("Unable to rename existing TradeBotStates backup");
} else {
// Delete existing copy
LOGGER.info("Deleting existing TradeBotStates backup because it is being replaced with a new one");
tradeBotStatesBackupFile.delete();
try {
// Load trade bot data
List<TradeBotData> allTradeBotData = this.getCrossChainRepository().getAllTradeBotData();
JSONArray allTradeBotDataJson = new JSONArray();
for (TradeBotData tradeBotData : allTradeBotData) {
JSONObject tradeBotDataJson = tradeBotData.toJson();
allTradeBotDataJson.put(tradeBotDataJson);
}
}
// There's currently no need to take an archived copy of the MintingAccounts data - just delete the old one if it exists
File mintingAccountsBackupFile = new File("qortal-backup/MintingAccounts.script");
if (mintingAccountsBackupFile.exists()) {
LOGGER.info("Deleting existing MintingAccounts backup because it is being replaced with a new one");
mintingAccountsBackupFile.delete();
}
// We need to combine existing TradeBotStates data before overwriting
String fileName = "qortal-backup/TradeBotStates.json";
File tradeBotStatesBackupFile = new File(fileName);
if (tradeBotStatesBackupFile.exists()) {
String jsonString = new String(Files.readAllBytes(Paths.get(fileName)));
JSONArray allExistingTradeBotData = new JSONArray(jsonString);
Iterator<Object> iterator = allExistingTradeBotData.iterator();
while(iterator.hasNext()) {
JSONObject existingTradeBotData = (JSONObject)iterator.next();
String existingTradePrivateKey = (String) existingTradeBotData.get("tradePrivateKey");
// Check if we already have an entry for this trade
boolean found = allTradeBotData.stream().anyMatch(tradeBotData -> Base58.encode(tradeBotData.getTradePrivateKey()).equals(existingTradePrivateKey));
if (found == false)
// We need to add this to our list
allTradeBotDataJson.put(existingTradeBotData);
}
}
try (Statement stmt = this.connection.createStatement()) {
stmt.execute("PERFORM EXPORT SCRIPT FOR TABLE MintingAccounts DATA TO 'qortal-backup/MintingAccounts.script'");
stmt.execute("PERFORM EXPORT SCRIPT FOR TABLE TradeBotStates DATA TO 'qortal-backup/TradeBotStates.script'");
LOGGER.info("Exported sensitive/node-local data: minting keys and trade bot states");
} catch (SQLException e) {
throw new DataException("Unable to export sensitive/node-local data from repository");
FileWriter writer = new FileWriter(fileName);
writer.write(allTradeBotDataJson.toString());
writer.close();
LOGGER.info("Exported sensitive/node-local data: trade bot states");
} catch (DataException | IOException e) {
throw new DataException("Unable to export trade bot states from repository");
}
}
@Override
public void importDataFromFile(String filename) throws DataException {
try (Statement stmt = this.connection.createStatement()) {
LOGGER.info(() -> String.format("Importing data into repository from %s", filename));
String escapedFilename = stmt.enquoteLiteral(filename);
stmt.execute("PERFORM IMPORT SCRIPT DATA FROM " + escapedFilename + " CONTINUE ON ERROR");
LOGGER.info(() -> String.format("Imported data into repository from %s", filename));
} catch (SQLException e) {
LOGGER.info(() -> String.format("Failed to import data into repository from %s: %s", filename, e.getMessage()));
throw new DataException("Unable to import sensitive/node-local data to repository: " + e.getMessage());
LOGGER.info(() -> String.format("Importing data into repository from %s", filename));
try {
String jsonString = new String(Files.readAllBytes(Paths.get(filename)));
JSONArray tradeBotDataToImport = new JSONArray(jsonString);
Iterator<Object> iterator = tradeBotDataToImport.iterator();
while(iterator.hasNext()) {
JSONObject tradeBotDataJson = (JSONObject)iterator.next();
TradeBotData tradeBotData = TradeBotData.fromJson(tradeBotDataJson);
this.getCrossChainRepository().save(tradeBotData);
}
} catch (IOException e) {
throw new DataException("Unable to import sensitive/node-local trade bot states to repository: " + e.getMessage());
}
LOGGER.info(() -> String.format("Imported trade bot states into repository from %s", filename));
}
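As an illustrative round trip (not part of this changeset): exportNodeLocalData() now writes qortal-backup/TradeBotStates.json, merging in any existing entries, and importDataFromFile() reads the same JSON back into the cross-chain repository. A minimal sketch with a hypothetical caller:

import org.qortal.repository.DataException;
import org.qortal.repository.RepositoryManager;
import org.qortal.repository.hsqldb.HSQLDBRepository;

public class TradeBotStatesBackupSketch {

    public static void backupAndRestore() throws DataException {
        try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
            // Writes qortal-backup/TradeBotStates.json, preserving entries from any previous backup
            repository.exportNodeLocalData();

            // Re-imports the trade bot states from that JSON file via the cross-chain repository
            repository.importDataFromFile("qortal-backup/TradeBotStates.json");
            repository.saveChanges();
        }
    }

}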
@Override
@@ -700,8 +703,11 @@ public class HSQLDBRepository implements Repository {
private ResultSet checkedExecuteResultSet(PreparedStatement preparedStatement, Object... objects) throws SQLException {
bindStatementParams(preparedStatement, objects);
if (!preparedStatement.execute())
throw new SQLException("Fetching from database produced no results");
// synchronize to block new executions if checkpointing in progress
synchronized (CHECKPOINT_LOCK) {
if (!preparedStatement.execute())
throw new SQLException("Fetching from database produced no results");
}
ResultSet resultSet = preparedStatement.getResultSet();
if (resultSet == null)
@@ -1053,4 +1059,4 @@ public class HSQLDBRepository implements Repository {
return DEADLOCK_ERROR_CODE.equals(e.getErrorCode());
}
}
}

View File

@@ -61,13 +61,15 @@ public class HSQLDBSaver {
public boolean execute(HSQLDBRepository repository) throws SQLException {
String sql = this.formatInsertWithPlaceholders();
try {
PreparedStatement preparedStatement = repository.prepareStatement(sql);
this.bindValues(preparedStatement);
synchronized (HSQLDBRepository.CHECKPOINT_LOCK) {
try {
PreparedStatement preparedStatement = repository.prepareStatement(sql);
this.bindValues(preparedStatement);
return preparedStatement.execute();
} catch (SQLException e) {
throw repository.examineException(e);
return preparedStatement.execute();
} catch (SQLException e) {
throw repository.examineException(e);
}
}
}
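Both checkedExecuteResultSet() above and HSQLDBSaver.execute() now serialize statement execution on the newly public HSQLDBRepository.CHECKPOINT_LOCK, presumably so a checkpoint can run without statements starting underneath it. A sketch of what the checkpointing side of that lock could look like (the actual checkpoint code is not shown in this diff):

import java.sql.SQLException;
import java.sql.Statement;

import org.qortal.repository.hsqldb.HSQLDBRepository;

public class CheckpointSketch {

    /** Hold CHECKPOINT_LOCK for the duration of the checkpoint so no new statements begin while it runs. */
    public static void checkpoint(Statement stmt) throws SQLException {
        synchronized (HSQLDBRepository.CHECKPOINT_LOCK) {
            stmt.execute("CHECKPOINT");
        }
    }

}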

View File

@@ -23,6 +23,7 @@ import org.eclipse.persistence.jaxb.UnmarshallerProperties;
import org.qortal.block.BlockChain;
import org.qortal.crosschain.Bitcoin.BitcoinNet;
import org.qortal.crosschain.Litecoin.LitecoinNet;
import org.qortal.crosschain.Dogecoin.DogecoinNet;
// All properties to be converted to JSON via JAXB
@XmlAccessorType(XmlAccessType.FIELD)
@@ -108,6 +109,26 @@ public class Settings {
* This has a significant effect on execution time. */
private int onlineSignaturesTrimBatchSize = 100; // blocks
/** Whether we should prune old data to reduce database size
* This prevents the node from being able to serve older blocks */
private boolean pruningEnabled = false;
/** The amount of recent blocks we should keep when pruning */
private int pruneBlockLimit = 1450;
/** How often to attempt AT state pruning (ms). */
private long atStatesPruneInterval = 3219L; // milliseconds
/** Block height range to scan for prunable AT states.<br>
* This has a significant effect on execution time. */
private int atStatesPruneBatchSize = 25; // blocks
/** How often to attempt block pruning (ms). */
private long blockPruneInterval = 3219L; // milliseconds
/** Block height range to scan for prunable blocks.<br>
* This has a significant effect on execution time. */
private int blockPruneBatchSize = 10000; // blocks
// Peer-to-peer related
private boolean isTestNet = false;
/** Port number for inbound peer-to-peer connections. */
@@ -122,11 +143,26 @@ public class Settings {
private int maxNetworkThreadPoolSize = 20;
/** Maximum number of threads for network proof-of-work compute, used during handshaking. */
private int networkPoWComputePoolSize = 2;
/** Maximum number of retry attempts if a peer fails to respond with the requested data */
private int maxRetries = 2;
/** Minimum peer version number required in order to sync with them */
private String minPeerVersion = "1.5.0";
/** Whether to allow connections with peers below minPeerVersion
* If true, we won't sync with them but they can still sync with us, and will show in the peers list
* If false, sync will be blocked both ways, and they will not appear in the peers list */
private boolean allowConnectionsWithOlderPeerVersions = true;
/** Minimum time (in seconds) that we should attempt to remain connected to a peer for */
private int minPeerConnectionTime = 2 * 60; // seconds
/** Maximum time (in seconds) that we should attempt to remain connected to a peer for */
private int maxPeerConnectionTime = 20 * 60; // seconds
// Which blockchains this node is running
private String blockchainConfig = null; // use default from resources
private BitcoinNet bitcoinNet = BitcoinNet.MAIN;
private LitecoinNet litecoinNet = LitecoinNet.MAIN;
private DogecoinNet dogecoinNet = DogecoinNet.MAIN;
// Also crosschain-related:
/** Whether to show SysTray pop-up notifications when trade-bot entries change state */
private boolean tradebotSystrayEnabled = false;
@@ -145,6 +181,9 @@ public class Settings {
"https://raw.githubusercontent.com@151.101.16.133/Qortal/qortal/%s/qortal.update"
};
// Lists
private String listsPath = "lists";
/** Array of NTP server hostnames. */
private String[] ntpServers = new String[] {
"pool.ntp.org",
@@ -408,6 +447,16 @@ public class Settings {
return this.networkPoWComputePoolSize;
}
public int getMaxRetries() { return this.maxRetries; }
public String getMinPeerVersion() { return this.minPeerVersion; }
public boolean getAllowConnectionsWithOlderPeerVersions() { return this.allowConnectionsWithOlderPeerVersions; }
public int getMinPeerConnectionTime() { return this.minPeerConnectionTime; }
public int getMaxPeerConnectionTime() { return this.maxPeerConnectionTime; }
public String getBlockchainConfig() {
return this.blockchainConfig;
}
@@ -420,6 +469,10 @@ public class Settings {
return this.litecoinNet;
}
public DogecoinNet getDogecoinNet() {
return this.dogecoinNet;
}
public boolean isTradebotSystrayEnabled() {
return this.tradebotSystrayEnabled;
}
@@ -444,6 +497,10 @@ public class Settings {
return this.autoUpdateRepos;
}
public String getListsPath() {
return this.listsPath;
}
public String[] getNtpServers() {
return this.ntpServers;
}
@@ -492,4 +549,29 @@ public class Settings {
return this.onlineSignaturesTrimBatchSize;
}
public boolean isPruningEnabled() {
return this.pruningEnabled;
}
public int getPruneBlockLimit() {
return this.pruneBlockLimit;
}
public long getAtStatesPruneInterval() {
return this.atStatesPruneInterval;
}
public int getAtStatesPruneBatchSize() {
return this.atStatesPruneBatchSize;
}
public long getBlockPruneInterval() {
return this.blockPruneInterval;
}
public int getBlockPruneBatchSize() {
return this.blockPruneBatchSize;
}
}
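For completeness, a small illustrative reader of the pruning settings added above; the defaults shown in the comments come from this diff, and the real consumers would be the pruning tasks themselves.

import org.qortal.settings.Settings;

public class PruneSettingsSketch {

    public static void logPruneSettings() {
        Settings settings = Settings.getInstance();
        if (!settings.isPruningEnabled())
            return; // pruning disabled; nothing to report

        // Defaults from this diff: keep the last 1450 blocks; prune passes every 3219 ms
        System.out.printf("Keeping last %d blocks%n", settings.getPruneBlockLimit());
        System.out.printf("AT-state prune: every %d ms, %d blocks per pass%n",
                settings.getAtStatesPruneInterval(), settings.getAtStatesPruneBatchSize());
        System.out.printf("Block prune: every %d ms, %d blocks per pass%n",
                settings.getBlockPruneInterval(), settings.getBlockPruneBatchSize());
    }

}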

View File

@@ -11,6 +11,7 @@ import org.qortal.crypto.MemoryPoW;
import org.qortal.data.transaction.ChatTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.group.Group;
import org.qortal.list.ResourceListManager;
import org.qortal.repository.DataException;
import org.qortal.repository.GroupRepository;
import org.qortal.repository.Repository;
@@ -138,6 +139,12 @@ public class ChatTransaction extends Transaction {
public ValidationResult isValid() throws DataException {
// Nonce checking is done via isSignatureValid() as that method is only called once per import
// Check for blacklisted author by address
ResourceListManager listManager = ResourceListManager.getInstance();
if (listManager.isAddressInBlacklist(this.chatTransactionData.getSender())) {
return ValidationResult.ADDRESS_IN_BLACKLIST;
}
// If we exist in the repository then we've been imported as unconfirmed,
// but we don't want to make it into a block, so return fake non-OK result.
if (this.repository.getTransactionRepository().exists(this.chatTransactionData.getSignature()))

View File

@@ -247,6 +247,7 @@ public abstract class Transaction {
INVALID_GROUP_BLOCK_DELAY(93),
INCORRECT_NONCE(94),
INVALID_TIMESTAMP_SIGNATURE(95),
ADDRESS_IN_BLACKLIST(96),
INVALID_BUT_OK(999),
NOT_YET_RELEASED(1000);

View File

@@ -1,86 +0,0 @@
package org.qortal.utils;
import java.util.ArrayList;
import java.util.List;
import org.qortal.globalization.BIP39WordList;
public class BIP39 {
private static final int BITS_PER_WORD = 11;
/** Convert BIP39 mnemonic to binary 'entropy' */
public static byte[] decode(String[] phraseWords, String lang) {
if (lang == null)
lang = "en";
List<String> wordList = BIP39WordList.INSTANCE.getByLang(lang);
if (wordList == null)
throw new IllegalStateException("BIP39 word list for lang '" + lang + "' unavailable");
byte[] entropy = new byte[(phraseWords.length * BITS_PER_WORD + 7) / 8];
int byteIndex = 0;
int bitShift = 3;
for (int i = 0; i < phraseWords.length; ++i) {
int wordListIndex = wordList.indexOf(phraseWords[i]);
if (wordListIndex == -1)
// Word not found
return null;
entropy[byteIndex++] |= (byte) (wordListIndex >> bitShift);
bitShift = 8 - bitShift;
if (bitShift >= 0) {
// Leftover fits inside one byte
entropy[byteIndex] |= (byte) ((wordListIndex << bitShift));
bitShift = BITS_PER_WORD - bitShift;
} else {
// Leftover spread over next two bytes
bitShift = - bitShift;
entropy[byteIndex++] |= (byte) (wordListIndex >> bitShift);
entropy[byteIndex] |= (byte) (wordListIndex << (8 - bitShift));
bitShift = bitShift + BITS_PER_WORD - 8;
}
}
return entropy;
}
/** Convert binary entropy to BIP39 mnemonic */
public static String encode(byte[] entropy, String lang) {
if (lang == null)
lang = "en";
List<String> wordList = BIP39WordList.INSTANCE.getByLang(lang);
if (wordList == null)
throw new IllegalStateException("BIP39 word list for lang '" + lang + "' unavailable");
List<String> phraseWords = new ArrayList<>();
int bitMask = 128; // MSB first
int byteIndex = 0;
while (true) {
int wordListIndex = 0;
for (int bitCount = 0; bitCount < BITS_PER_WORD; ++bitCount) {
wordListIndex <<= 1;
if ((entropy[byteIndex] & bitMask) != 0)
++wordListIndex;
bitMask >>= 1;
if (bitMask == 0) {
bitMask = 128;
++byteIndex;
if (byteIndex >= entropy.length)
return String.join(" ", phraseWords);
}
}
phraseWords.add(wordList.get(wordListIndex));
}
}
}

Binary file not shown.

File diff suppressed because it is too large

View File

@@ -49,7 +49,9 @@
},
"featureTriggers": {
"atFindNextTransactionFix": 275000,
"newBlockSigHeight": 320000
"newBlockSigHeight": 320000,
"shareBinFix": 399000,
"calcChainWeightTimestamp": 1620579600000
},
"genesisInfo": {
"version": 4,

View File

@@ -1,14 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "de",
### Common ###
JSON = JSON nachricht konnte nicht geparsed werden
INSUFFICIENT_BALANCE = insufficient balance
UNAUTHORIZED = API call unauthorized
REPOSITORY_ISSUE = repository error
NON_PRODUCTION = this API call is not permitted for production systems
BLOCKCHAIN_NEEDS_SYNC = blockchain needs to synchronize first
NO_TIME_SYNC = no clock synchronization yet
### Validation ###
INVALID_SIGNATURE = ungültige signatur
INVALID_ADDRESS = ungültige adresse
INVALID_ASSET_ID = ungültige asset ID
INVALID_PUBLIC_KEY = ungültiger public key
INVALID_DATA = ungültige daten
INVALID_PUBLIC_KEY = ungültiger public key
INVALID_NETWORK_ADDRESS = invalid network address
INVALID_SIGNATURE = ungültige signatur
ADDRESS_UNKNOWN = account address unknown
JSON = JSON nachricht konnte nicht geparsed werden
INVALID_CRITERIA = invalid search criteria
INVALID_REFERENCE = invalid reference
TRANSFORMATION_ERROR = could not transform JSON into transaction
INVALID_PRIVATE_KEY = invalid private key
INVALID_HEIGHT = invalid block height
CANNOT_MINT = account cannot mint
### Blocks ###
BLOCK_UNKNOWN = block unknown
### Transactions ###
TRANSACTION_UNKNOWN = transaction unknown
PUBLIC_KEY_NOT_FOUND = public key wurde nicht gefunden
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = transaction invalid: %s (%s)
### Naming ###
NAME_UNKNOWN = name unknown
### Asset ###
INVALID_ASSET_ID = ungültige asset ID
INVALID_ORDER_ID = invalid asset order ID
ORDER_UNKNOWN = unknown asset order ID
### Groups ###
GROUP_UNKNOWN = group unknown
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blockchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = insufficient balance on foreign blockchain
FOREIGN_BLOCKCHAIN_TOO_SOON = too soon to broadcast foreign blockchain transaction (LockTime/median block time)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = order amount too low
### Data ###
FILE_NOT_FOUND = file not found
NO_REPLY = peer did not reply with data

View File

@@ -1,66 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
ADDRESS_UNKNOWN = account address unknown
BLOCKCHAIN_NEEDS_SYNC = blockchain needs to synchronize first
# Blocks
BLOCK_UNKNOWN = block unknown
BTC_BALANCE_ISSUE = insufficient Bitcoin balance
BTC_NETWORK_ISSUE = Bitcoin/ElectrumX network issue
BTC_TOO_SOON = too soon to broadcast Bitcoin transaction (lockTime/median block time)
CANNOT_MINT = account cannot mint
GROUP_UNKNOWN = group unknown
INVALID_ADDRESS = invalid address
# Assets
INVALID_ASSET_ID = invalid asset ID
INVALID_CRITERIA = invalid search criteria
INVALID_DATA = invalid data
INVALID_HEIGHT = invalid block height
INVALID_NETWORK_ADDRESS = invalid network address
INVALID_ORDER_ID = invalid asset order ID
INVALID_PRIVATE_KEY = invalid private key
INVALID_PUBLIC_KEY = invalid public key
INVALID_REFERENCE = invalid reference
# Validation
INVALID_SIGNATURE = invalid signature
# "localeLang": "en",
### Common ###
JSON = failed to parse JSON message
NAME_UNKNOWN = name unknown
INSUFFICIENT_BALANCE = insufficient balance
NON_PRODUCTION = this API call is not permitted for production systems
NO_TIME_SYNC = no clock synchronization yet
ORDER_UNKNOWN = unknown asset order ID
PUBLIC_KEY_NOT_FOUND = public key not found
UNAUTHORIZED = API call unauthorized
REPOSITORY_ISSUE = repository error
# This one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = transaction invalid: %s (%s)
NON_PRODUCTION = this API call is not permitted for production systems
TRANSACTION_UNKNOWN = transaction unknown
BLOCKCHAIN_NEEDS_SYNC = blockchain needs to synchronize first
NO_TIME_SYNC = no clock synchronization yet
### Validation ###
INVALID_SIGNATURE = invalid signature
INVALID_ADDRESS = invalid address
INVALID_PUBLIC_KEY = invalid public key
INVALID_DATA = invalid data
INVALID_NETWORK_ADDRESS = invalid network address
ADDRESS_UNKNOWN = account address unknown
INVALID_CRITERIA = invalid search criteria
INVALID_REFERENCE = invalid reference
TRANSFORMATION_ERROR = could not transform JSON into transaction
UNAUTHORIZED = API call unauthorized
INVALID_PRIVATE_KEY = invalid private key
INVALID_HEIGHT = invalid block height
CANNOT_MINT = account cannot mint
### Blocks ###
BLOCK_UNKNOWN = block unknown
### Transactions ###
TRANSACTION_UNKNOWN = transaction unknown
PUBLIC_KEY_NOT_FOUND = public key not found
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = transaction invalid: %s (%s)
### Naming ###
NAME_UNKNOWN = name unknown
### Asset ###
INVALID_ASSET_ID = invalid asset ID
INVALID_ORDER_ID = invalid asset order ID
ORDER_UNKNOWN = unknown asset order ID
### Groups ###
GROUP_UNKNOWN = group unknown
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blockchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = insufficient balance on foreign blockchain
FOREIGN_BLOCKCHAIN_TOO_SOON = too soon to broadcast foreign blockchain transaction (LockTime/median block time)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = order amount too low
### Data ###
FILE_NOT_FOUND = file not found
NO_REPLY = peer did not reply with data

View File

@@ -1,71 +1,86 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
#
# Kielen muuttaminen suomeksi tapahtuu settings.json-tiedostossa
#
# "localeLang": "fi",
# muista pilkku lopussa jos komento ei ole viimeisellä rivillä
ADDRESS_UNKNOWN = tilin osoite on tuntematon
BLOCKCHAIN_NEEDS_SYNC = lohkoketjun tarvitsee ensin synkronisoitua
# Blocks
BLOCK_UNKNOWN = tuntematon lohko
BTC_BALANCE_ISSUE = riittämätön Bitcoin-saldo
BTC_NETWORK_ISSUE = Bitcoin/ElectrumX -verkon ongelma
BTC_TOO_SOON = liian aikaista julkistaa Bitcoin-tapahtumaa (lukitusaika/mediiaanilohkoaika)
CANNOT_MINT = tili ei voi lyödä rahaa
GROUP_UNKNOWN = tuntematon ryhmä
INVALID_ADDRESS = osoite on kelvoton
# Assets
INVALID_ASSET_ID = kelvoton ID resurssille
INVALID_CRITERIA = kelvoton hakuehto
INVALID_DATA = kelvoton data
INVALID_HEIGHT = kelvoton lohkon korkeus
INVALID_NETWORK_ADDRESS = kelvoton verkko-osoite
INVALID_ORDER_ID = kelvoton resurssin tilaus-ID
INVALID_PRIVATE_KEY = kelvoton yksityinen avain
INVALID_PUBLIC_KEY = kelvoton julkinen avain
INVALID_REFERENCE = kelvoton viite
# Validation
INVALID_SIGNATURE = kelvoton allekirjoitus
### Common ###
JSON = JSON-viestin jaottelu epäonnistui
NAME_UNKNOWN = tuntematon nimi
INSUFFICIENT_BALANCE = insufficient balance
NON_PRODUCTION = tämä API-kutsu on kielletty tuotantoversiossa
NO_TIME_SYNC = kello vielä synkronisoimatta
ORDER_UNKNOWN = tuntematon resurssin tilaus-ID
PUBLIC_KEY_NOT_FOUND = julkista avainta ei löytynyt
UNAUTHORIZED = luvaton API-kutsu
REPOSITORY_ISSUE = tietovarantovirhe (repo)
# This one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = kelvoton transaktio: %s (%s)
NON_PRODUCTION = tämä API-kutsu on kielletty tuotantoversiossa
TRANSACTION_UNKNOWN = tuntematon transaktio
BLOCKCHAIN_NEEDS_SYNC = lohkoketjun tarvitsee ensin synkronisoitua
NO_TIME_SYNC = kello vielä synkronisoimatta
### Validation ###
INVALID_SIGNATURE = kelvoton allekirjoitus
INVALID_ADDRESS = osoite on kelvoton
INVALID_PUBLIC_KEY = kelvoton julkinen avain
INVALID_DATA = kelvoton data
INVALID_NETWORK_ADDRESS = kelvoton verkko-osoite
ADDRESS_UNKNOWN = tilin osoite on tuntematon
INVALID_CRITERIA = kelvoton hakuehto
INVALID_REFERENCE = kelvoton viite
TRANSFORMATION_ERROR = JSON:in muuntaminen transaktioksi epäonnistui
UNAUTHORIZED = luvaton API-kutsu
INVALID_PRIVATE_KEY = kelvoton yksityinen avain
INVALID_HEIGHT = kelvoton lohkon korkeus
CANNOT_MINT = tili ei voi lyödä rahaa
### Blocks ###
BLOCK_UNKNOWN = tuntematon lohko
### Transactions ###
TRANSACTION_UNKNOWN = tuntematon transaktio
PUBLIC_KEY_NOT_FOUND = julkista avainta ei löytynyt
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = kelvoton transaktio: %s (%s)
### Naming ###
NAME_UNKNOWN = tuntematon nimi
### Asset ###
INVALID_ASSET_ID = kelvoton ID resurssille
INVALID_ORDER_ID = kelvoton resurssin tilaus-ID
ORDER_UNKNOWN = tuntematon resurssin tilaus-ID
### Groups ###
GROUP_UNKNOWN = tuntematon ryhmä
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blockchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = insufficient balance on foreign blockchain
FOREIGN_BLOCKCHAIN_TOO_SOON = too soon to broadcast foreign blockchain transaction (LockTime/median block time)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = order amount too low
### Data ###
FILE_NOT_FOUND = file not found
NO_REPLY = peer did not reply with data

View File

@@ -0,0 +1,86 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# Magyar myelvre forditotta: Szkíta (Scythian). 2021 Augusztus 7.
# Az alkalmazás nyelvének magyarra való változtatása a settings.json oldalon történik.
# "localeLang": "hu",
### Common ###
JSON = nem sikerült elemezni a JSON üzenetet
INSUFFICIENT_BALANCE = elégtelen egyenleg
UNAUTHORIZED = nem engedélyezett API-hívás
REPOSITORY_ISSUE = adattári hiba
NON_PRODUCTION = ez az API-hívás nem engedélyezett korlátozott rendszereken
BLOCKCHAIN_NEEDS_SYNC = a blokkláncnak még szinkronizálnia kell
NO_TIME_SYNC = az óraszinkronizálás még nem történt meg
### Validation ###
INVALID_SIGNATURE = érvénytelen aláírás
INVALID_ADDRESS = érvénytelen fiók cím
INVALID_PUBLIC_KEY = érvénytelen nyilvános kulcs
INVALID_DATA = érvénytelen adat
INVALID_NETWORK_ADDRESS = érvénytelen hálózat cím
ADDRESS_UNKNOWN = ismeretlen fiók cím
INVALID_CRITERIA = érvénytelen keresési feltétel
INVALID_REFERENCE = érvénytelen hivatkozás
TRANSFORMATION_ERROR = nem sikerült tranzakcióvá alakítani a JSON-t
INVALID_PRIVATE_KEY = érvénytelen privát kulcs
INVALID_HEIGHT = érvénytelen blokkmagasság
CANNOT_MINT = ez a fiók még nem tud QORT-ot verni
### Blocks ###
BLOCK_UNKNOWN = ismeretlen blokk
### Transactions ###
TRANSACTION_UNKNOWN = ismeretlen tranzakció
PUBLIC_KEY_NOT_FOUND = nyilvános kulcs nem található
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = érvénytelen tranzakció: %s (%s)
### Naming ###
NAME_UNKNOWN = ismeretlen név
### Asset ###
INVALID_ASSET_ID = érvénytelen eszközazonosító
INVALID_ORDER_ID = érvénytelen eszközrendelési azonosító
ORDER_UNKNOWN = ismeretlen eszközrendelési azonosító
### Groups ###
GROUP_UNKNOWN = ismeretlen csoport
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = idegen blokklánc vagy ElectrumX hálózati probléma
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = elégtelen egyenleg az idegen blokkláncon
FOREIGN_BLOCKCHAIN_TOO_SOON = túl korai meghírdetni az idegen blokkláncon való tranzakciót (LockTime/medián blokkidő)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = rendelési összeg túl alacsony
### Data ###
FILE_NOT_FOUND = fájl nem található
NO_REPLY = a másik csomópont nem válaszolt

View File

@@ -7,66 +7,81 @@
# "localeLang": "it",
# Si prega ricordare la virgola alla fine, se questo comando non è sull'ultima riga
ADDRESS_UNKNOWN = indirizzo account sconosciuto
BLOCKCHAIN_NEEDS_SYNC = blockchain deve prima sincronizzarsi
# Blocks
BLOCK_UNKNOWN = blocco sconosciuto
BTC_BALANCE_ISSUE = saldo Bitcoin insufficiente
BTC_NETWORK_ISSUE = Bitcoin/ElectrumX problema di rete
BTC_TOO_SOON = troppo presto per trasmettere transazione Bitcoin (tempo di blocco / tempo di blocco mediano)
CANNOT_MINT = l'account non può coniare
GROUP_UNKNOWN = gruppo sconosciuto
INVALID_ADDRESS = indirizzo non valido
# Assets
INVALID_ASSET_ID = identificazione risorsa non valida
INVALID_CRITERIA = criteri di ricerca non validi
INVALID_DATA = dati non validi
INVALID_HEIGHT = altezza blocco non valida
INVALID_NETWORK_ADDRESS = indirizzo di rete non valido
INVALID_ORDER_ID = identificazione di ordine di risorsa non valida
INVALID_PRIVATE_KEY = chiave privata non valida
INVALID_PUBLIC_KEY = chiave pubblica non valida
INVALID_REFERENCE = riferimento non valido
# Validation
INVALID_SIGNATURE = firma non valida
### Common ###
JSON = Impossibile analizzare il messaggio JSON
NAME_UNKNOWN = nome sconosciuto
INSUFFICIENT_BALANCE = insufficient balance
NON_PRODUCTION = questa chiamata API non è consentita per i sistemi di produzione
NO_TIME_SYNC = nessuna sincronizzazione dell'orologio ancora
ORDER_UNKNOWN = identificazione di ordine di risorsa sconosciuta
PUBLIC_KEY_NOT_FOUND = chiave pubblica non trovata
UNAUTHORIZED = Chiamata API non autorizzata
REPOSITORY_ISSUE = errore del repositorio
# This one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = transazione non valida: %s (%s)
NON_PRODUCTION = questa chiamata API non è consentita per i sistemi di produzione
TRANSACTION_UNKNOWN = transazione sconosciuta
BLOCKCHAIN_NEEDS_SYNC = blockchain deve prima sincronizzarsi
NO_TIME_SYNC = nessuna sincronizzazione dell'orologio ancora
### Validation ###
INVALID_SIGNATURE = firma non valida
INVALID_ADDRESS = indirizzo non valido
INVALID_PUBLIC_KEY = chiave pubblica non valida
INVALID_DATA = dati non validi
INVALID_NETWORK_ADDRESS = indirizzo di rete non valido
ADDRESS_UNKNOWN = indirizzo account sconosciuto
INVALID_CRITERIA = criteri di ricerca non validi
INVALID_REFERENCE = riferimento non valido
TRANSFORMATION_ERROR = non è stato possibile trasformare JSON in transazione
UNAUTHORIZED = Chiamata API non autorizzata
INVALID_PRIVATE_KEY = chiave privata non valida
INVALID_HEIGHT = altezza blocco non valida
CANNOT_MINT = l'account non può coniare
### Blocks ###
BLOCK_UNKNOWN = blocco sconosciuto
### Transactions ###
TRANSACTION_UNKNOWN = transazione sconosciuta
PUBLIC_KEY_NOT_FOUND = chiave pubblica non trovata
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = transazione non valida: %s (%s)
### Naming ###
NAME_UNKNOWN = nome sconosciuto
### Asset ###
INVALID_ASSET_ID = identificazione risorsa non valida
INVALID_ORDER_ID = identificazione di ordine di risorsa non valida
ORDER_UNKNOWN = identificazione di ordine di risorsa sconosciuta
### Groups ###
GROUP_UNKNOWN = gruppo sconosciuto
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blockchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = insufficient balance on foreign blockchain
FOREIGN_BLOCKCHAIN_TOO_SOON = too soon to broadcast foreign blockchain transaction (LockTime/median block time)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = order amount too low
### Data ###
FILE_NOT_FOUND = file not found
NO_REPLY = peer did not reply with data

View File

@@ -0,0 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "nl",
### Common ###
JSON = lezen van JSON bericht gefaald
INSUFFICIENT_BALANCE = insufficient balance
UNAUTHORIZED = ongeautoriseerde API call
REPOSITORY_ISSUE = repository fout
NON_PRODUCTION = deze API call is niet toegestaan voor productiesystemen
BLOCKCHAIN_NEEDS_SYNC = blockchain dient eerst gesynchronizeerd te worden
NO_TIME_SYNC = klok nog niet gesynchronizeerd
### Validation ###
INVALID_SIGNATURE = ongeldige handtekening
INVALID_ADDRESS = ongeldig adres
INVALID_PUBLIC_KEY = ongeldige public key
INVALID_DATA = ongeldige gegevens
INVALID_NETWORK_ADDRESS = ongeldig netwerkadres
ADDRESS_UNKNOWN = account adres onbekend
INVALID_CRITERIA = ongeldige zoekcriteria
INVALID_REFERENCE = ongeldige verwijzing
TRANSFORMATION_ERROR = JSON kon niet omgezet worden in transactie
INVALID_PRIVATE_KEY = ongeldige private key
INVALID_HEIGHT = ongeldige blokhoogte
CANNOT_MINT = account kan niet munten
### Blocks ###
BLOCK_UNKNOWN = blok onbekend
### Transactions ###
TRANSACTION_UNKNOWN = onbekende transactie
PUBLIC_KEY_NOT_FOUND = public key niet gevonden
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = ongeldige transactie: %s (%s)
### Naming ###
NAME_UNKNOWN = onbekende naam
### Asset ###
INVALID_ASSET_ID = ongeldige asset ID
INVALID_ORDER_ID = ongeldige asset order ID
ORDER_UNKNOWN = onbekende asset order ID
### Groups ###
GROUP_UNKNOWN = onbekende groep
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blockchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = insufficient balance on foreign blockchain
FOREIGN_BLOCKCHAIN_TOO_SOON = too soon to broadcast foreign blockchain transaction (LockTime/median block time)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = order amount too low
### Data ###
FILE_NOT_FOUND = file not found
NO_REPLY = peer did not reply with data

View File

@@ -1,57 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
ADDRESS_UNKNOWN = неизвестная учетная запись
BLOCKCHAIN_NEEDS_SYNC = блокчейн должен сначала синхронизироваться
# Blocks
BLOCK_UNKNOWN = неизвестный блок
CANNOT_MINT = аккаунт не может чеканить
GROUP_UNKNOWN = неизвестная группа
INVALID_ADDRESS = неизвестный адрес
# Assets
INVALID_ASSET_ID = неверный идентификатор актива
INVALID_CRITERIA = неверные критерии поиска
INVALID_DATA = неверные данные
INVALID_HEIGHT = недопустимая высота блока
INVALID_NETWORK_ADDRESS = неверный сетевой адрес
INVALID_ORDER_ID = неверный идентификатор заказа актива
INVALID_PRIVATE_KEY = неверный приватный ключ
INVALID_PUBLIC_KEY = недействительный открытый ключ
INVALID_REFERENCE = неверная ссылка
# Validation
INVALID_SIGNATURE = недействительная подпись
# "localeLang": "ru",
### Common ###
JSON = не удалось разобрать сообщение json
NAME_UNKNOWN = имя неизвестно
INSUFFICIENT_BALANCE = insufficient balance
NON_PRODUCTION = этот вызов API не разрешен для производственных систем
ORDER_UNKNOWN = неизвестный идентификатор заказа актива
PUBLIC_KEY_NOT_FOUND = открытый ключ не найден
UNAUTHORIZED = вызов API не авторизован
REPOSITORY_ISSUE = ошибка репозитория
TRANSACTION_INVALID = транзакция недействительна: %s (%s)
NON_PRODUCTION = этот вызов API не разрешен для производственных систем
TRANSACTION_UNKNOWN = транзакция неизвестна
BLOCKCHAIN_NEEDS_SYNC = блокчейн должен сначала синхронизироваться
NO_TIME_SYNC = no clock synchronization yet
### Validation ###
INVALID_SIGNATURE = недействительная подпись
INVALID_ADDRESS = неизвестный адрес
INVALID_PUBLIC_KEY = недействительный открытый ключ
INVALID_DATA = неверные данные
INVALID_NETWORK_ADDRESS = неверный сетевой адрес
ADDRESS_UNKNOWN = неизвестная учетная запись
INVALID_CRITERIA = неверные критерии поиска
INVALID_REFERENCE = неверная ссылка
TRANSFORMATION_ERROR = не удалось преобразовать JSON в транзакцию
UNAUTHORIZED = вызов API не авторизован
INVALID_PRIVATE_KEY = неверный приватный ключ
INVALID_HEIGHT = недопустимая высота блока
CANNOT_MINT = аккаунт не может чеканить
### Blocks ###
BLOCK_UNKNOWN = неизвестный блок
### Transactions ###
TRANSACTION_UNKNOWN = транзакция неизвестна
PUBLIC_KEY_NOT_FOUND = открытый ключ не найден
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = транзакция недействительна: %s (%s)
### Naming ###
NAME_UNKNOWN = имя неизвестно
### Asset ###
INVALID_ASSET_ID = неверный идентификатор актива
INVALID_ORDER_ID = неверный идентификатор заказа актива
ORDER_UNKNOWN = неизвестный идентификатор заказа актива
### Groups ###
GROUP_UNKNOWN = неизвестная группа
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blockchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = insufficient balance on foreign blockchain
FOREIGN_BLOCKCHAIN_TOO_SOON = too soon to broadcast foreign blockchain transaction (LockTime/median block time)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = order amount too low
### Data ###
FILE_NOT_FOUND = file not found
NO_REPLY = peer did not reply with data

Some files were not shown because too many files have changed in this diff