Compare commits

...

129 Commits

Author SHA1 Message Date
CalDescent
075385d3ff Bump version to 3.4.3 2022-07-31 19:50:31 +01:00
CalDescent
6ed8250301 onlineAccountsModulusV2Timestamp set to Sat, 06 Aug 2022 16:00:00 UTC 2022-07-31 19:50:05 +01:00
CalDescent
06b5d5f1d0 Merge branch 'increase-online-timestamp-modulus' 2022-07-31 13:00:47 +01:00
CalDescent
d6d2641cad Added "bitcoinjLookaheadSize" setting (default 50).
This replaces the WALLET_KEY_LOOKAHEAD_INCREMENT_BITCOINJ constant.
2022-07-31 12:07:44 +01:00
CalDescent
e71f22fd2c Added "gapLimit" setting.
This replaces the previously hardcoded "numberOfAdditionalBatchesToSearch" variable, and specifies the minimum number of consecutive empty addresses required before a set of wallet transactions is considered complete. Used for foreign transaction lists and balances.
2022-07-31 11:59:14 +01:00
CalDescent
c996633732 Added trace level logging. 2022-07-30 19:06:04 +01:00
CalDescent
55f973af3c Ensure all online accounts timestamps are a multiple of the online timestamp modulus.
This is a simple way to discard the 5-minute online account timestamps (from out-of-date nodes) once the switch to 30-minute online account timestamps has taken place.
2022-07-30 18:15:16 +01:00
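A minimal sketch of the rounding/validation idea described above, assuming a 30-minute modulus after the switchover; the class and method names here are illustrative, not the actual Qortal implementation:

    public class OnlineTimestampSketch {
        // 30 minutes in milliseconds: the modulus after the switchover
        private static final long ONLINE_TIMESTAMP_MODULUS = 30 * 60 * 1000L;

        /** Round a wall-clock time down to the nearest online-accounts timestamp slot. */
        public static long toOnlineTimestamp(long now) {
            return (now / ONLINE_TIMESTAMP_MODULUS) * ONLINE_TIMESTAMP_MODULUS;
        }

        /** Reject timestamps (e.g. 5-minute slots from out-of-date nodes) that are not
         *  an exact multiple of the current modulus. */
        public static boolean isValidOnlineTimestamp(long timestamp) {
            return timestamp % ONLINE_TIMESTAMP_MODULUS == 0;
        }
    }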
CalDescent
fe9744eec6 Fixed missing feature trigger in testchain config 2022-07-30 14:52:47 +01:00
CalDescent
410fa59430 Merge branch 'master' into increase-online-timestamp-modulus
# Conflicts:
#	src/main/java/org/qortal/block/BlockChain.java
2022-07-30 12:23:25 +01:00
CalDescent
522ae2bce7 Updated AdvancedInstaller project for v3.4.2 2022-07-27 22:52:48 +01:00
CalDescent
a6e79947b8 Bump version to 3.4.2 2022-07-23 18:07:35 +01:00
CalDescent
fcd0d71cb6 Merge branch 'share-bin-activation' 2022-07-17 19:20:19 +01:00
catbref
275bee62d9 Revert BlockMinter to using long-lifetime repository session.
Although BlockMinter could reattach a repository session to its cache of potential blocks,
and these blocks would in turn reattach that repository session to their transactions,
further transaction-specific fields (e.g. creator PublicKeyAccount) were not being updated.

This would lead to NPEs like the following:

Exception in thread "BlockMinter" java.lang.NullPointerException
        at org.qortal.repository.hsqldb.HSQLDBRepository.cachePreparedStatement(HSQLDBRepository.java:587)
        at org.qortal.repository.hsqldb.HSQLDBRepository.prepareStatement(HSQLDBRepository.java:569)
        at org.qortal.repository.hsqldb.HSQLDBRepository.checkedExecute(HSQLDBRepository.java:609)
        at org.qortal.repository.hsqldb.HSQLDBAccountRepository.getBalance(HSQLDBAccountRepository.java:327)
        at org.qortal.account.Account.getConfirmedBalance(Account.java:72)
        at org.qortal.transaction.MessageTransaction.isValid(MessageTransaction.java:200)
        at org.qortal.block.Block.areTransactionsValid(Block.java:1190)
        at org.qortal.block.Block.isValid(Block.java:1137)
        at org.qortal.controller.BlockMinter.run(BlockMinter.java:301)

where the Account has an associated repository session which is now obsolete.

This commit reverts BlockMinter back to obtaining a repository session before entering its main loop.
2022-07-17 13:53:07 +01:00
CalDescent
97221a4449 Added test to simulate level 7-8 reward tier activation, including orphaning. 2022-07-17 13:37:52 +01:00
CalDescent
508a34684b Revert "qoraHoldersShare reworked to qoraHoldersShareByHeight."
This reverts commit 90e8cfc737.

# Conflicts:
#	src/test/java/org/qortal/test/minting/RewardTests.java
2022-07-16 18:45:49 +01:00
CalDescent
3d2144f303 Check orphaning in levels 7-8 and 9-10 reward tests. This would have been tested in orphanCheck() anyway, but this makes the testing a bit more granular. 2022-07-16 17:29:16 +01:00
CalDescent
35f3430687 Added share bin activation feature.
To prevent a single or very small number of minters receiving the rewards for an entire tier, share bins can now require "activation". This adds the requirement that a minimum number of accounts must be present in a share bin before it is considered active. When inactive, the rewards and minters are added to the previous tier.

Summary of new functionality:

- If a share bin has more than one but fewer than 30 accounts present, the rewards and accounts are shifted to the previous share bin.
- This process is iterative, so the accounts can shift through multiple tiers until the minimum number of accounts is met, OR the share bin's starting level is less than shareBinActivationMinLevel.
- Applies to level 7+, so that no backwards compatibility is needed. It will only take effect once the first account reaches level 7.

This requires hot swapping the sharesByLevel data to combine tiers where needed, so it is a considerable shift away from the immutable array that was in place previously.

All existing and new unit tests are now passing, however a lot more testing will be needed.
2022-07-10 12:09:44 +01:00
CalDescent
90e8cfc737 qoraHoldersShare reworked to qoraHoldersShareByHeight.
This allows the QORA share percentage to be modified at different heights, based on community votes. Added unit test to simulate a reduction.
2022-07-08 11:12:58 +01:00
CalDescent
57bd3c3459 Merge remote-tracking branch 'catbref/auto-update-fix' 2022-07-07 18:48:39 +01:00
CalDescent
ad0d8fac91 Bump version to 3.4.1 2022-07-05 20:56:40 +01:00
CalDescent
a8b58d2007 Reward share limit activation timestamp set to 1657382400000 (Sat Jul 09 2022 16:00:00 UTC) 2022-07-05 20:34:23 +01:00
CalDescent
a099ecf55b Merge branch 'reduce-reward-shares' 2022-07-04 19:58:48 +01:00
CalDescent
6b91b0477d Added version query string param to /blocks/signature/{signature}/data API endpoint, to allow for optional V2 block serialization (with a single combined AT states hash).
Version can only be specified when querying unarchived blocks; archived blocks require V1 for now (and possibly V2 in the future).
2022-07-04 19:57:54 +01:00
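A hedged usage sketch for the optional "version" query parameter described above, using Java 11's HttpClient; the host/port (12391 is assumed to be the node's default API port) and the signature value are placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class FetchBlockV2Example {
        public static void main(String[] args) throws Exception {
            String signature58 = "<base58-block-signature>"; // placeholder
            URI uri = URI.create("http://localhost:12391/blocks/signature/" + signature58 + "/data?version=2");

            HttpClient client = HttpClient.newHttpClient();
            HttpResponse<String> response = client.send(HttpRequest.newBuilder(uri).GET().build(),
                    HttpResponse.BodyHandlers.ofString());

            // Response body is the base58-encoded serialized block (V2 layout, single combined AT states hash)
            System.out.println(response.body());
        }
    }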
CalDescent
d7e7c1f48c Fixed bugs from merge conflict, causing incorrect systray statuses in some cases. 2022-07-02 10:33:00 +01:00
CalDescent
65d63487f3 Merge branch 'master' into increase-online-timestamp-modulus 2022-07-01 17:46:35 +01:00
CalDescent
7c5932a512 GET /admin/status now returns online account submission status for "isMintingPossible", instead of BlockMinter status.
Online account credit is a more useful definition of "minting" than block signing, from the user's perspective. Should bring UI minting/syncing status in line with the core's systray status.
2022-07-01 17:29:15 +01:00
CalDescent
610a3fcf83 Improved order in getNodeType() 2022-07-01 16:48:57 +01:00
CalDescent
b329dc41bc Updated incorrect ONLINE_ACCOUNTS_V3_PEER_VERSION to 3.4.0 2022-07-01 13:36:56 +01:00
CalDescent
ff78606153 Merge branch 'master' into increase-online-timestamp-modulus 2022-07-01 13:14:15 +01:00
CalDescent
ef249066cd Updated another reference of SimpleTransaction::getTimestamp 2022-07-01 13:13:55 +01:00
CalDescent
80188629df Don't aggregate signatures when running OnlineAccountsTests, as it's too difficult to check how many unique signatures exist over a given period of time. 2022-07-01 13:13:22 +01:00
CalDescent
f77093731c Merge branch 'master' into increase-online-timestamp-modulus
# Conflicts:
#	src/main/java/org/qortal/block/Block.java
#	src/main/java/org/qortal/controller/OnlineAccountsManager.java
2022-07-01 13:03:19 +01:00
CalDescent
ca7d58c272 SimpleTransaction.timestamp is now in milliseconds instead of seconds.
Should fix 1970 timestamp issue in UI for foreign transactions, and also maintains consistency with QORT wallet transactions.
2022-07-01 12:46:20 +01:00
CalDescent
08f3351a7a Reward share transaction modifications:
- Reduce concurrent reward share limit from 6 to 3 (or from 5 to 2 when including self share) - as per community vote.
- Founders remain at 6 (5 when including self share) - also decided in community vote.
- When all slots are being filled, require that at least one is a self share, so that not all can be used for sponsorship.
- Activates at future undecided timestamp.
2022-07-01 12:18:48 +01:00
QuickMythril
f499ada94c Merge pull request #91 from qortish/master
Update SysTray_sv.properties
2022-06-29 08:44:47 -04:00
qortish
f073040c06 Update SysTray_sv.properties
proper
2022-06-29 14:04:27 +02:00
CalDescent
49bfb43bd2 Updated AdvancedInstaller project for v3.4.0 2022-06-28 22:56:11 +01:00
CalDescent
425c70719c Bump version to 3.4.0 2022-06-28 19:29:09 +01:00
CalDescent
1420aea600 aggregateSignatureTimestamp set to 1656864000000 (Sun Jul 03 2022 16:00:00 UTC) 2022-06-28 19:26:00 +01:00
CalDescent
4543062700 Updated blockchain.json files in unit tests to include an already active "aggregateSignatureTimestamp" 2022-06-28 19:22:54 +01:00
CalDescent
722468a859 Restrict relays to v3.4.0 peers and above, in an attempt to avoid bugs causing older peers to break relay chains. 2022-06-27 19:38:30 +01:00
CalDescent
492a9ed3cf Fixed more message rebroadcasts that were missing IDs. 2022-06-26 20:02:08 +01:00
CalDescent
420b577606 No longer adding inferior chain signatures in comparePeers() as it doesn't seem 100% reliable in some cases. It's better to re-check weights on each pass. 2022-06-26 18:24:33 +01:00
CalDescent
434038fd12 Reduced online accounts log spam 2022-06-26 16:34:04 +01:00
CalDescent
a9b154b783 Modified BlockMinter.higherWeightChainExists() so that it checks for invalid blocks before treating a chain as higher weight. Otherwise minting is slowed down when a higher weight but invalid chain exists on the network (e.g. after a hard fork). 2022-06-26 15:54:41 +01:00
CalDescent
a01652b816 Removed hasInvalidBlock filtering, as this was an unnecessary risk now that the original bug in comparePeers() is fixed. 2022-06-26 10:09:24 +01:00
CalDescent
4440e82bb9 Fixed long-term bug in comparePeers() causing peers with invalid blocks to prevent alternate valid but lower-weight candidates from being chosen. 2022-06-25 16:34:42 +01:00
CalDescent
a2e1efab90 Synchronize hasInvalidBlock predicate, as it wasn't thread safe 2022-06-25 14:12:21 +01:00
CalDescent
7e1ce38f0a Fixed major bug in hasInvalidBlock predicate 2022-06-25 14:11:25 +01:00
CalDescent
a93bae616e Invalid signatures are now stored as ByteArray instead of String, to avoid regular Base58 encoding and decoding, which is very inefficient. 2022-06-25 13:29:53 +01:00
CalDescent
a2568936a0 Synchronizer: filter out peers reporting to hold invalid block signatures.
We already mark peers as misbehaved if they return invalid signatures, but this wasn't sufficient when multiple copies of the same invalid block exist on the network (e.g. after a hard fork). In these cases, we need to be more proactive about avoiding syncing with these peers, to increase the chances of preserving other candidate blocks.
2022-06-25 12:45:19 +01:00
CalDescent
23408827b3 Merge remote-tracking branch 'catbref/schnorr-agg-BlockMinter-fix' into schnorr-agg-BlockMinter-fix
# Conflicts:
#	src/main/java/org/qortal/block/BlockChain.java
#	src/main/java/org/qortal/controller/OnlineAccountsManager.java
#	src/main/java/org/qortal/network/message/BlockV2Message.java
#	src/main/resources/blockchain.json
#	src/test/resources/test-chain-v2.json
2022-06-24 11:47:58 +01:00
CalDescent
ae6e2fab6f Rewrite of isNotOldPeer predicate, to fix logic issue (second attempt - first had too many issues)
Previously, a peer would be continuously considered not 'old' if it had a connection attempt in the past day. This prevented some peers from being removed, causing nodes to hold a large repository of peers. On slower systems, this large number of known peers resulted in low numbers of outbound connections being made, presumably because of the time taken to iterate through the dataset, using up a lot of allKnownPeers lock time.

On devices that experienced the problem, it could be solved by deleting all known peers. This adds confidence that the old peers were the problem.
2022-06-24 10:36:06 +01:00
CalDescent
3af36644c0 Revert "Rewrite of isNotOldPeer predicate, to fix logic issue."
This reverts commit d81071f254.
2022-06-24 10:26:39 +01:00
CalDescent
db8f627f1a Default minPeerVersion set to 3.3.7 2022-06-24 10:13:55 +01:00
CalDescent
5db0fa080b Prune peers every 5 minutes instead of every cycle of the Controller thread.
This should reduce the amount of time the allKnownPeers lock is held.
2022-06-24 10:13:36 +01:00
CalDescent
d81071f254 Rewrite of isNotOldPeer predicate, to fix logic issue.
Previously, a peer would be continuously considered not 'old' if it had a connection attempt in the past day. This prevented some peers from being removed, causing nodes to hold a large repository of peers. On slower systems, this large number of known peers resulted in low numbers of outbound connections being made, presumably because of the time taken to iterate through the dataset, using up a lot of allKnownPeers lock time.

On devices that experienced the problem, it could be solved by deleting all known peers. This adds confidence that the old peers were the problem.
2022-06-24 10:11:46 +01:00
QuickMythril
ba148dfd88 Added Korean translations
credit: TL (Discord username)
2022-06-23 02:35:42 -04:00
CalDescent
dbcb457a04 Merge branch 'master' of github.com:Qortal/qortal 2022-06-20 22:51:33 +01:00
CalDescent
b00e1c8f47 Allow online account submission in all cases when in recovery mode. 2022-06-20 22:50:41 +01:00
CalDescent
899a6eb104 Rework of systray statuses
- Show "Minting" as long as online accounts are submitted to the network (previously it related to block signing).
- Fixed bug causing it to regularly show "Synchronizing 100%".
- Only show "Synchronizing" if the chain falls more than 2 hours behind - anything less is unnecessary noise.
2022-06-20 22:48:32 +01:00
CalDescent
6e556c82a3 Updated AdvancedInstaller project for v3.3.7 2022-06-20 22:25:40 +01:00
CalDescent
35ce64cc3a Bump version to 3.3.7 2022-06-20 21:52:41 +01:00
CalDescent
09b218d16c Merge branch 'master' of github.com:Qortal/qortal 2022-06-20 21:51:39 +01:00
CalDescent
7ea451e027 Allow trades to be initiated, and QDN data to be published, as long as the latest block is within 60 minutes of the current time. Again this should remove negative effects of larger re-orgs from the UX. 2022-06-20 21:26:48 +01:00
CalDescent
ffb27c3946 Further relaxed min latest block timestamp age to be considered "up to date" in a few places, from 30 to 60 mins. This should help reduce the visible effects of larger re-orgs if they happen again. 2022-06-20 21:25:14 +01:00
QuickMythril
6e7d2b50a0 Added Romanian translations
credit: Ovidiu (Telegram username)
2022-06-20 01:27:28 -04:00
QuickMythril
bd025f30ff Updated ApiError German translations
credit: CD (Discord username)
2022-06-20 00:56:19 -04:00
CalDescent
c6cbd8e826 Revert "Sync behaviour changes:"
This reverts commit 8a76c6c0de.
2022-06-19 19:10:37 +01:00
CalDescent
b85afe3ca7 Revert "Keep existing findCommonBlocksWithPeers() and comparePeers() behaviour prior to consensus switchover, to reduce the number of variables."
This reverts commit fecfac5ad9.
2022-06-19 19:10:30 +01:00
CalDescent
5a4674c973 Revert "newConsensusTimestamp set to Sun Jun 19 2022 16:00:00 UTC"
This reverts commit 55a0c10855.
2022-06-19 19:09:57 +01:00
CalDescent
769418e5ae Revert "Fixed unit tests due to missing feature trigger"
This reverts commit 28f9df7178.
2022-06-19 19:09:52 +01:00
CalDescent
38faed5799 Don't submit online accounts if the node is more than 2 hours out of sync. 2022-06-19 18:56:08 +01:00
catbref
10a578428b Improve intermittent auto-update failures, mostly under non-Windows environments.
Symptoms are:
* AutoUpdate trying to run new ApplyUpdate process, but nothing appears in log-apply-update.?.txt
* Main qortal.jar process continues to run without updating
* Last AutoUpdate line in log.txt.? is:
	2022-06-18 15:42:46 INFO  AutoUpdate:258 - Applying update with: /usr/local/openjdk11/bin/java -Djava.net.preferIPv4Stack=false -Xss256k -Xmx1024m -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=127.0.0.1:5005 -cp new-qortal.jar org.qortal.ApplyUpdate

Changes are:
* child process now inherits parent's stdout / stderr (was piped from parent)
* child process is given a fresh stdin, which is immediately closed
* AutoUpdate now converts -agentlib JVM arg to -DQORTAL_agentlib
* ApplyUpdate converts -DQORTAL_agentlib to -agentlib

The latter two changes are to prevent a conflict where two processes try to reuse the same JVM debugging port number.
2022-06-19 18:25:00 +01:00
CalDescent
96cdf4a87e Updated AdvancedInstaller project for v3.3.6 2022-06-19 11:41:50 +01:00
CalDescent
c0b1580561 Bump version to 3.3.6 2022-06-17 20:42:51 +01:00
CalDescent
28f9df7178 Fixed unit tests due to missing feature trigger 2022-06-17 13:07:33 +01:00
CalDescent
55a0c10855 newConsensusTimestamp set to Sun Jun 19 2022 16:00:00 UTC 2022-06-17 13:04:04 +01:00
CalDescent
7c5165763d Merge branch 'sync-long-tip' 2022-06-17 13:02:08 +01:00
CalDescent
d2836ebcb9 Make sure to set the ID when rebroadcasting messages. Hopeful fix for QDN networking issues. 2022-06-16 22:39:51 +01:00
CalDescent
fecfac5ad9 Keep existing findCommonBlocksWithPeers() and comparePeers() behaviour prior to consensus switchover, to reduce the number of variables. 2022-06-16 18:38:27 +01:00
CalDescent
5ed1ec8809 Merge remote-tracking branch 'catbref/sync-long-tip' into sync-long-tip 2022-06-16 18:35:16 +01:00
catbref
431cbf01af BlockMinter will discard block candidates that turn out to be invalid just prior to adding transactions, to be potentially reminted in the next pass 2022-06-16 17:47:08 +01:00
CalDescent
af792dfc06 Updated AdvancedInstaller project for v3.3.5 2022-06-15 21:11:12 +01:00
CalDescent
d3b6c5f052 Bump version to 3.3.5 2022-06-14 23:18:46 +01:00
CalDescent
f48eb27f00 Revert "Temporarily limit block minter as first stage of multipart minting fix. This will be reverted almost immediately after release."
This reverts commit 0eebfe4a8c.
2022-06-14 22:42:44 +01:00
CalDescent
b02ac2561f Revert "Safety check - also to be removed shortly."
This reverts commit 8b3f9db497.
2022-06-14 22:42:38 +01:00
CalDescent
1b2f66b201 Bump version to 3.3.4 2022-06-13 23:40:39 +01:00
CalDescent
e992f6b683 Increase column size to allow for approx 10x as many online accounts in each block. 2022-06-13 22:51:48 +01:00
CalDescent
8b3f9db497 Safety check - also to be removed shortly. 2022-06-13 22:41:10 +01:00
CalDescent
0eebfe4a8c Temporarily limit block minter as first stage of multipart minting fix. This will be reverted almost immediately after release. 2022-06-13 22:40:54 +01:00
CalDescent
12b3fc257b Updated AdvancedInstaller project for v3.3.3 2022-06-04 19:50:43 +01:00
CalDescent
66a3322ea6 Bump version to 3.3.3 2022-06-04 19:16:01 +01:00
CalDescent
4965cb7121 Set BlockV2Message min peer version to 3.3.3 2022-06-04 15:50:17 +01:00
CalDescent
b92b1fecb0 disableReferenceTimestamp set to 1655222400000 (Tuesday, 14 June 2022 16:00:00 UTC) 2022-06-04 14:51:37 +01:00
CalDescent
43a75420d0 Merge branch 'disable-reference' 2022-06-04 14:46:51 +01:00
catbref
e85026f866 Initial work on reducing network load for transferring blocks.
Reduced AT state info from per-AT address + state hash + fees to AT count + total AT fees + hash of all AT states.
Modified Block and Controller to support above. Controller needs more work regarding CachedBlockMessages.
Note that blocks fetched from archive are in old V1 format.
Changed Triple<BlockData, List<TransactionData>, List<ATStateData>> to BlockTransformation to support both V1 and V2 forms.

Set min peer version to 3.3.203 in BlockV2Message class.
2022-06-04 14:43:46 +01:00
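As a rough sketch of the reduced AT state info described above, the single combined hash can be built by concatenating each AT's address, state hash and fees and digesting the result, mirroring the areAtsValid() hunk further down this page; SHA-256 is assumed here as the digest, and the AtState holder type is illustrative:

    import java.io.ByteArrayOutputStream;
    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.List;

    public class AtStatesHashSketch {
        public static class AtState {
            public final String atAddress;
            public final byte[] stateHash;
            public final long fees;

            public AtState(String atAddress, byte[] stateHash, long fees) {
                this.atAddress = atAddress;
                this.stateHash = stateHash;
                this.fees = fees;
            }
        }

        /** Combine per-AT address + state hash + fees into one digest for the whole block. */
        public static byte[] hashAtStates(List<AtState> atStates) throws Exception {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            for (AtState atState : atStates) {
                bytes.write(atState.atAddress.getBytes(StandardCharsets.UTF_8));
                bytes.write(atState.stateHash);
                bytes.write(ByteBuffer.allocate(Long.BYTES).putLong(atState.fees).array());
            }
            return MessageDigest.getInstance("SHA-256").digest(bytes.toByteArray());
        }
    }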
CalDescent
ba7b9f3ad8 Added feature to allow repository to be kept intact after running certain tests 2022-06-04 14:17:01 +01:00
catbref
4eb58d3591 BlockTimestampTests to show results from changing blockTimingsByHeight 2022-06-04 12:36:36 +01:00
catbref
8d8e58a905 Network$NetworkProcessor now has its own LOGGER 2022-06-04 12:36:36 +01:00
catbref
8f58da4f52 OnlineAccountsManager:
Bump v3 min peer version from 3.2.203 to 3.3.203
No need for toOnlineAccountTimestamp(long) as we only ever use getCurrentOnlineAccountTimestamp().
The latter now returns Long and calls NTP.getTime() on behalf of the caller, removing duplicated NTP.getTime() calls and null checks in multiple callers.
Add aggregate-signature feature-trigger timestamp threshold checks where needed, near sign() and verify() calls.
Improve logging - but some logging will need to be removed / reduced before merging.
2022-06-04 12:36:36 +01:00
catbref
a4e2aedde1 Remove debug-hindering "final" modifier from effectively final locals 2022-06-04 12:36:36 +01:00
catbref
24d04fe928 Block.mint() always uses latest timestamped online accounts 2022-06-04 12:36:35 +01:00
catbref
0cf32f6c5e BlockMinter now only acquires repository instance as needed to prevent long HSQLDB rollbacks 2022-06-04 12:35:54 +01:00
catbref
84d850ee0b WIP: use blockchain feature-trigger "aggregateSignatureTimestamp" to determine when online-accounts sigs and block sigs switch to aggregate sigs 2022-06-04 12:35:51 +01:00
catbref
51930d3ccf Move some private key methods to Crypto class 2022-06-04 12:35:15 +01:00
catbref
c5e5316f2e Schnorr public key and signature aggregation for 'online accounts'.
Aggregated signature should reduce block payload significantly,
as well as associated network, memory & CPU loads.

org.qortal.crypto.BouncyCastle25519 renamed to Qortal25519Extras.
Our class provides additional features such as DH-based shared secret,
aggregating public keys & signatures and sign/verify for aggregate use.

BouncyCastle's Ed25519 class copied in as BouncyCastleEd25519,
but with 'private' modifiers changed to 'protected',
to allow extension by our Qortal25519Extras class,
and to avoid lots of messy reflection-based calls.
2022-06-04 12:35:15 +01:00
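A sketch of the aggregate-signature flow, using the Qortal25519Extras method names that appear in the Block.java hunks further down this page (assumes the qortal jar is on the classpath; the individual signatures are the usual per-account online-accounts timestamp signatures):

    import java.util.Collection;
    import org.qortal.crypto.Qortal25519Extras;

    public class AggregateSignatureSketch {
        public static boolean verifyBlockOnlineAccounts(Collection<byte[]> rewardSharePublicKeys,
                                                        Collection<byte[]> individualSignatures,
                                                        byte[] onlineTimestampBytes) {
            // One aggregate public key for all online reward-shares in the block
            byte[] aggregatePublicKey = Qortal25519Extras.aggregatePublicKeys(rewardSharePublicKeys);

            // One aggregate signature replaces N concatenated 64-byte signatures in the block payload
            byte[] aggregateSignature = Qortal25519Extras.aggregateSignatures(individualSignatures);

            // Single verification step instead of N separate Ed25519 verifies
            return Qortal25519Extras.verifyAggregated(aggregatePublicKey, aggregateSignature, onlineTimestampBytes);
        }
    }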
catbref
829ab1eb37 Cherry-pick minor fixes from another branch to resolve "No online accounts - not even our own?" issues 2022-06-04 12:35:03 +01:00
catbref
d9b330b46a OnlineAccountData no longer uses signature in equals() or hashCode() because newer aggregate signatures use random nonces and OAD class doesn't care about / verify sigs 2022-06-04 10:49:59 +01:00
catbref
c032b92d0d Logging fix: size() was called on wrong collection, leading to confusing logging output 2022-06-04 10:49:59 +01:00
catbref
ae92a6eed4 OnlineAccountsV3: slightly rework Block.mint() so it doesn't need to filter so many online accounts
Slight optimization to BlockMinter by adding OnlineAccountsManager.hasOnlineAccounts():boolean instead of returning actual data, only to call isEmpty()!
2022-06-04 10:49:59 +01:00
catbref
712c4463f7 OnlineAccountsV3:
Move online account cache code from Block into OnlineAccountsManager, simplifying Block code and removing duplicated caches from Block also.
This tidies up those remaining set-based getters in OnlineAccountsManager.
No need for currentOnlineAccountsHashes's inner Map to be sorted, so addAccounts() creates a new ConcurrentHashMap instead of a ConcurrentSkipListMap.

Changed GetOnlineAccountsV3Message to use a single byte for count of hashes as it can only be 1 to 256.
256 is represented by 0.

Comments tidy-up.
Change v3 broadcast interval from 10s to 15s.
2022-06-04 10:49:59 +01:00
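A tiny sketch of the single-byte hash-count convention noted above (1 to 256 hashes, with 256 encoded as 0); the helper names are illustrative:

    public class HashCountByteSketch {
        public static byte encodeCount(int count) {
            if (count < 1 || count > 256)
                throw new IllegalArgumentException("count must be between 1 and 256");
            return (byte) (count == 256 ? 0 : count);
        }

        public static int decodeCount(byte encoded) {
            int value = encoded & 0xFF;
            return value == 0 ? 256 : value;
        }
    }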
catbref
fbdc1e1cdb OnlineAccountsV3:
Adding support for GET_ONLINE_ACCOUNTS_V3 to Controller, which calls OnlineAccountsManager.

With OnlineAccountsV3, instead of nodes sending their list of known online accounts (public keys),
nodes now send a summary which contains hashes of known online accounts, one per timestamp + leading-byte combo.
Thus outgoing messages are much smaller and scale better with more users.
Remote peers compare the hashes and send back lists of online accounts (for that timestamp + leading-byte combo) where hashes do not match.

Massive rewrite of OnlineAccountsManager to maintain online accounts.
Now there are three caches:
1. all online accounts, but split into sets by timestamp
2. 'hashes' of all online accounts, one hash per timestamp+leading-byte combination
Mainly for efficient use by GetOnlineAccountsV3 message constructor.
3. online accounts for the highest blocks on our chain to speed up block processing
Note that highest blocks might be way older than 'current' blocks if we're somewhat behind in syncing.

Other OnlineAccountsManager changes:
* Use scheduling executor service to manage subtasks
* Switch from 'synchronized' to 'concurrent' collections
* Generally switch from Lists to Sets - requires improved OnlineAccountData.hashCode() - further work needed
* Only send V3 messages to peers with version >= 3.2.203 (for testing)
* More info on which online accounts lists are returned depending on use-cases

To test, change your peer's version (in pom.xml?) to v3.2.203.
2022-06-04 10:49:59 +01:00
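A sketch of the summary-comparison step described above: a node only requests full account lists for the timestamp + leading-byte combos whose hashes differ from its own. The string key format and map types are illustrative, and how each hash is derived is not shown:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;

    public class OnlineAccountsV3CompareSketch {
        /** Returns the timestamp + leading-byte combos we should request from the peer. */
        public static List<String> combosToRequest(Map<String, byte[]> ourHashes,
                                                   Map<String, byte[]> peerHashes) {
            List<String> mismatches = new ArrayList<>();
            for (Map.Entry<String, byte[]> entry : peerHashes.entrySet()) {
                byte[] ourHash = ourHashes.get(entry.getKey()); // key e.g. "<timestamp>-<leadingByte>"
                // Unknown combo, or differing hash: fetch that combo's online accounts from the peer
                if (ourHash == null || !Arrays.equals(ourHash, entry.getValue()))
                    mismatches.add(entry.getKey());
            }
            return mismatches;
        }
    }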
catbref
f2060fe7a1 Initial work on online-accounts-v3 network messages to drastically reduce network load.
Lots of TODOs to action.
2022-06-04 10:49:59 +01:00
catbref
6950c6bf69 Initial work on reducing network load for transferring blocks.
Reduced AT state info from per-AT address + state hash + fees to AT count + total AT fees + hash of all AT states.
Modified Block and Controller to support above. Controller needs more work regarding CachedBlockMessages.
Note that blocks fetched from archive are in old V1 format.
Changed Triple<BlockData, List<TransactionData>, List<ATStateData>> to BlockTransformation to support both V1 and V2 forms.

Set min peer version to 3.3.203 in BlockV2Message class.
2022-06-04 10:49:11 +01:00
catbref
8a76c6c0de Sync behaviour changes:
* Prefer longer chains
* Compare chain tips
* Skip pre-sync all-peer sorting (for now)
2022-06-04 10:23:31 +01:00
CalDescent
ef51cf5702 Added defensiveness in getOnlineTimestampModulus(), just in case NTP.getTime() returns null 2022-06-02 12:46:11 +01:00
CalDescent
0c3988202e Merge branch 'master' into increase-online-timestamp-modulus 2022-06-02 12:38:05 +01:00
CalDescent
987446cf7f Updated AdvancedInstaller project for v3.3.2 2022-06-02 11:31:56 +01:00
CalDescent
d2fc705846 Merge remote-tracking branch 'catbref/UnsupportedMessage' 2022-05-31 09:09:08 +02:00
CalDescent
e393150e9c Require references to be the correct length post feature-trigger 2022-05-30 22:40:39 +02:00
CalDescent
43bfd28bcd Revert "Discard unsupported messages instead of disconnecting the peer."
This reverts commit d086ade91f.
2022-05-30 21:46:16 +02:00
catbref
ca8f8a59f4 Better forwards compatibility with newer message types so we don't disconnect newer peers 2022-05-30 20:41:45 +01:00
CalDescent
d086ade91f Discard unsupported messages instead of disconnecting the peer. 2022-05-30 15:27:42 +02:00
CalDescent
64d4c458ec Fixed logging error 2022-05-30 15:16:44 +02:00
CalDescent
9f19a042e6 Added test to ensure short (1 byte) references can be imported. 2022-05-30 13:46:00 +02:00
CalDescent
922ffcc0be Modified post-trigger last reference checking, to now require a non-null value
This allows for compatibility with TRANSFER_PRIVS validation in commit 8950bb7, which treats any account with a non-null reference as "existing". It also avoids possible unknown side effects from trying to process and store transactions with a null reference - something that wouldn't have been possible until the validation was removed.
2022-05-30 13:22:15 +02:00
CalDescent
f887fcafe3 Disable last reference validation after feature trigger timestamp (not yet set).
This should prevent the failed transactions that are encountered when issuing two or more in a short space of time. Using a feature trigger (hard fork) to release this, to avoid potential consensus confusion around the time of the update (older versions could consider the main chain invalid until updating).
2022-05-30 13:06:55 +02:00
CalDescent
f7dabcaeb0 Increase ONLINE_ACCOUNTS_MODULUS from 5 to 30 mins at a future undecided timestamp.
Note: it's important that this timestamp is set on a 1-hour boundary (such as 16:00:00) to ensure a clean switchover.

# Conflicts:
#	src/main/java/org/qortal/block/BlockChain.java
2022-05-02 08:50:08 +01:00
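A quick check of the 1-hour boundary note above: a switchover time such as 16:00:00 UTC lies on both the old 5-minute and new 30-minute grids, so nodes on either side of the upgrade agree on the slot boundary (the epoch value below is computed from the Sat, 06 Aug 2022 16:00:00 UTC date set in the commit at the top of this list):

    public class SwitchoverBoundaryCheck {
        public static void main(String[] args) {
            long switchover = 1659801600000L;       // Sat, 06 Aug 2022 16:00:00 UTC
            long fiveMinutes = 5 * 60 * 1000L;      // old ONLINE_ACCOUNTS_MODULUS
            long thirtyMinutes = 30 * 60 * 1000L;   // new ONLINE_ACCOUNTS_MODULUS
            System.out.println(switchover % fiveMinutes == 0);   // true
            System.out.println(switchover % thirtyMinutes == 0); // true
        }
    }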
92 changed files with 6029 additions and 1109 deletions

View File

@@ -17,10 +17,10 @@
<ROW Property="Manufacturer" Value="Qortal"/>
<ROW Property="MsiLogging" MultiBuildValue="DefaultBuild:vp"/>
<ROW Property="NTP_GOOD" Value="false"/>
<ROW Property="ProductCode" Value="1033:{BAC69595-0A3F-4B9F-AB21-5DAC9F288F27} 1049:{4ADB3829-2ECB-4DB0-B114-4068417B6F01} 2052:{B830ACAC-079B-49A9-8271-25D89719826C} 2057:{3F18FD2A-1F0F-4A0A-96B7-21DB829CE7EB} " Type="16"/>
<ROW Property="ProductCode" Value="1033:{BB0A4CEC-D437-4D1B-9569-44CD644D073A} 1049:{A3131099-E63A-4926-A449-6A394E2B895A} 2052:{39D2F92F-C2EE-43EA-BEC9-870333921BC0} 2057:{FE9EFD1A-E4DF-4F02-A049-6A0FBFA2AAEF} " Type="16"/>
<ROW Property="ProductLanguage" Value="2057"/>
<ROW Property="ProductName" Value="Qortal"/>
<ROW Property="ProductVersion" Value="3.3.1" Type="32"/>
<ROW Property="ProductVersion" Value="3.4.2" Type="32"/>
<ROW Property="RECONFIG_NTP" Value="true"/>
<ROW Property="REMOVE_BLOCKCHAIN" Value="YES" Type="4"/>
<ROW Property="REPAIR_BLOCKCHAIN" Value="YES" Type="4"/>
@@ -212,7 +212,7 @@
<ROW Component="ADDITIONAL_LICENSE_INFO_71" ComponentId="{12A3ADBE-BB7A-496C-8869-410681E6232F}" Directory_="jdk.zipfs_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_71" Type="0"/>
<ROW Component="ADDITIONAL_LICENSE_INFO_8" ComponentId="{D53AD95E-CF96-4999-80FC-5812277A7456}" Directory_="java.naming_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_8" Type="0"/>
<ROW Component="ADDITIONAL_LICENSE_INFO_9" ComponentId="{6B7EA9B0-5D17-47A8-B78C-FACE86D15E01}" Directory_="java.net.http_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_9" Type="0"/>
<ROW Component="AI_CustomARPName" ComponentId="{BD915701-2111-4453-8154-1E88163F5548}" Directory_="APPDIR" Attributes="260" KeyPath="DisplayName" Options="1"/>
<ROW Component="AI_CustomARPName" ComponentId="{9CE2B8C0-C0A1-481A-8A34-48830F2785EE}" Directory_="APPDIR" Attributes="260" KeyPath="DisplayName" Options="1"/>
<ROW Component="AI_ExePath" ComponentId="{3644948D-AE0B-41BB-9FAF-A79E70490A08}" Directory_="APPDIR" Attributes="260" KeyPath="AI_ExePath"/>
<ROW Component="APPDIR" ComponentId="{680DFDDE-3FB4-47A5-8FF5-934F576C6F91}" Directory_="APPDIR" Attributes="0"/>
<ROW Component="AccessBridgeCallbacks.h" ComponentId="{288055D1-1062-47A3-AA44-5601B4E38AED}" Directory_="bridge_Dir" Attributes="0" KeyPath="AccessBridgeCallbacks.h" Type="0"/>

View File

@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.qortal</groupId>
<artifactId>qortal</artifactId>
<version>3.3.2</version>
<version>3.4.3</version>
<packaging>jar</packaging>
<properties>
<skipTests>true</skipTests>

View File

@@ -8,6 +8,7 @@ import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.security.Security;
import java.util.*;
import java.util.stream.Collectors;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -18,6 +19,8 @@ import org.qortal.api.ApiRequest;
import org.qortal.controller.AutoUpdate;
import org.qortal.settings.Settings;
import static org.qortal.controller.AutoUpdate.AGENTLIB_JVM_HOLDER_ARG;
public class ApplyUpdate {
static {
@@ -197,6 +200,11 @@ public class ApplyUpdate {
// JVM arguments
javaCmd.addAll(ManagementFactory.getRuntimeMXBean().getInputArguments());
// Reapply any retained, but disabled, -agentlib JVM arg
javaCmd = javaCmd.stream()
.map(arg -> arg.replace(AGENTLIB_JVM_HOLDER_ARG, "-agentlib"))
.collect(Collectors.toList());
// Call mainClass in JAR
javaCmd.addAll(Arrays.asList("-jar", JAR_FILENAME));
@@ -205,7 +213,7 @@ public class ApplyUpdate {
}
try {
LOGGER.info(() -> String.format("Restarting node with: %s", String.join(" ", javaCmd)));
LOGGER.info(String.format("Restarting node with: %s", String.join(" ", javaCmd)));
ProcessBuilder processBuilder = new ProcessBuilder(javaCmd);
@@ -214,8 +222,15 @@ public class ApplyUpdate {
processBuilder.environment().put(JAVA_TOOL_OPTIONS_NAME, JAVA_TOOL_OPTIONS_VALUE);
}
processBuilder.start();
} catch (IOException e) {
// New process will inherit our stdout and stderr
processBuilder.redirectOutput(ProcessBuilder.Redirect.INHERIT);
processBuilder.redirectError(ProcessBuilder.Redirect.INHERIT);
Process process = processBuilder.start();
// Nothing to pipe to new process, so close output stream (process's stdin)
process.getOutputStream().close();
} catch (Exception e) {
LOGGER.error(String.format("Failed to restart node (BAD): %s", e.getMessage()));
}
}

View File

@@ -11,15 +11,15 @@ public class PrivateKeyAccount extends PublicKeyAccount {
private final Ed25519PrivateKeyParameters edPrivateKeyParams;
/**
* Create PrivateKeyAccount using byte[32] seed.
* Create PrivateKeyAccount using byte[32] private key.
*
* @param seed
* @param privateKey
* byte[32] used to create private/public key pair
* @throws IllegalArgumentException
* if passed invalid seed
* if passed invalid privateKey
*/
public PrivateKeyAccount(Repository repository, byte[] seed) {
this(repository, new Ed25519PrivateKeyParameters(seed, 0));
public PrivateKeyAccount(Repository repository, byte[] privateKey) {
this(repository, new Ed25519PrivateKeyParameters(privateKey, 0));
}
private PrivateKeyAccount(Repository repository, Ed25519PrivateKeyParameters edPrivateKeyParams) {
@@ -37,10 +37,6 @@ public class PrivateKeyAccount extends PublicKeyAccount {
return this.privateKey;
}
public static byte[] toPublicKey(byte[] seed) {
return new Ed25519PrivateKeyParameters(seed, 0).generatePublicKey().getEncoded();
}
public byte[] sign(byte[] message) {
return Crypto.sign(this.edPrivateKeyParams, message);
}

View File

@@ -4,6 +4,7 @@ import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import org.qortal.controller.Controller;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.controller.Synchronizer;
import org.qortal.network.Network;
@@ -21,7 +22,7 @@ public class NodeStatus {
public final int height;
public NodeStatus() {
this.isMintingPossible = Controller.getInstance().isMintingPossible();
this.isMintingPossible = OnlineAccountsManager.getInstance().hasActiveOnlineAccountSignatures();
this.syncPercent = Synchronizer.getInstance().getSyncPercent();
this.isSynchronizing = Synchronizer.getInstance().isSynchronizing();

View File

@@ -125,12 +125,12 @@ public class AdminResource {
}
private String getNodeType() {
if (Settings.getInstance().isTopOnly()) {
return "topOnly";
}
else if (Settings.getInstance().isLite()) {
if (Settings.getInstance().isLite()) {
return "lite";
}
else if (Settings.getInstance().isTopOnly()) {
return "topOnly";
}
else {
return "full";
}

View File

@@ -57,6 +57,7 @@ import org.qortal.transform.TransformationException;
import org.qortal.transform.transaction.ArbitraryTransactionTransformer;
import org.qortal.transform.transaction.TransactionTransformer;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import org.qortal.utils.ZipUtils;
@Path("/arbitrary")
@@ -1099,7 +1100,8 @@ public class ArbitraryResource {
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.INVALID_CRITERIA, error);
}
if (!Controller.getInstance().isUpToDate()) {
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (!Controller.getInstance().isUpToDate(minLatestBlockTimestamp)) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCKCHAIN_NEEDS_SYNC);
}

View File

@@ -114,7 +114,7 @@ public class BlocksResource {
@Path("/signature/{signature}/data")
@Operation(
summary = "Fetch serialized, base58 encoded block data using base58 signature",
description = "Returns serialized data for the block that matches the given signature",
description = "Returns serialized data for the block that matches the given signature, and an optional block serialization version",
responses = {
@ApiResponse(
description = "the block data",
@@ -125,7 +125,7 @@ public class BlocksResource {
@ApiErrors({
ApiError.INVALID_SIGNATURE, ApiError.BLOCK_UNKNOWN, ApiError.INVALID_DATA, ApiError.REPOSITORY_ISSUE
})
public String getSerializedBlockData(@PathParam("signature") String signature58) {
public String getSerializedBlockData(@PathParam("signature") String signature58, @QueryParam("version") Integer version) {
// Decode signature
byte[] signature;
try {
@@ -136,20 +136,41 @@ public class BlocksResource {
try (final Repository repository = RepositoryManager.getRepository()) {
// Default to version 1
if (version == null) {
version = 1;
}
// Check the database first
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData != null) {
Block block = new Block(repository, blockData);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bytes.write(Ints.toByteArray(block.getBlockData().getHeight()));
bytes.write(BlockTransformer.toBytes(block));
switch (version) {
case 1:
bytes.write(BlockTransformer.toBytes(block));
break;
case 2:
bytes.write(BlockTransformer.toBytesV2(block));
break;
default:
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
return Base58.encode(bytes.toByteArray());
}
// Not found, so try the block archive
byte[] bytes = BlockArchiveReader.getInstance().fetchSerializedBlockBytesForSignature(signature, false, repository);
if (bytes != null) {
return Base58.encode(bytes);
if (version != 1) {
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.INVALID_CRITERIA, "Archived blocks require version 1");
}
return Base58.encode(bytes);
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);

View File

@@ -42,6 +42,7 @@ import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
@Path("/crosschain/tradebot")
@Tag(name = "Cross-Chain (Trade-Bot)")
@@ -137,7 +138,8 @@ public class CrossChainTradeBotResource {
if (tradeBotCreateRequest.qortAmount <= 0 || tradeBotCreateRequest.fundingQortAmount <= 0)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.ORDER_SIZE_TOO_SMALL);
if (!Controller.getInstance().isUpToDate())
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (!Controller.getInstance().isUpToDate(minLatestBlockTimestamp))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCKCHAIN_NEEDS_SYNC);
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -198,7 +200,8 @@ public class CrossChainTradeBotResource {
if (tradeBotRespondRequest.receivingAddress == null || !Crypto.isValidAddress(tradeBotRespondRequest.receivingAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
if (!Controller.getInstance().isUpToDate())
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (!Controller.getInstance().isUpToDate(minLatestBlockTimestamp))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCKCHAIN_NEEDS_SYNC);
// Extract data from cross-chain trading AT

View File

@@ -723,9 +723,9 @@ public class TransactionsResource {
ApiError.BLOCKCHAIN_NEEDS_SYNC, ApiError.INVALID_SIGNATURE, ApiError.INVALID_DATA, ApiError.TRANSFORMATION_ERROR, ApiError.REPOSITORY_ISSUE
})
public String processTransaction(String rawBytes58) {
// Only allow a transaction to be processed if our latest block is less than 30 minutes old
// Only allow a transaction to be processed if our latest block is less than 60 minutes old
// If older than this, we should first wait until the blockchain is synced
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (!Controller.getInstance().isUpToDate(minLatestBlockTimestamp))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCKCHAIN_NEEDS_SYNC);

View File

@@ -3,10 +3,14 @@ package org.qortal.block;
import static java.util.Arrays.stream;
import static java.util.stream.Collectors.toMap;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.RoundingMode;
import java.nio.charset.StandardCharsets;
import java.text.DecimalFormat;
import java.text.MessageFormat;
import java.text.NumberFormat;
import java.util.*;
import java.util.stream.Collectors;
@@ -24,6 +28,7 @@ import org.qortal.block.BlockChain.BlockTimingByHeight;
import org.qortal.block.BlockChain.AccountLevelShareBin;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.crypto.Crypto;
import org.qortal.crypto.Qortal25519Extras;
import org.qortal.data.account.AccountBalanceData;
import org.qortal.data.account.AccountData;
import org.qortal.data.account.EligibleQoraHolderData;
@@ -118,6 +123,8 @@ public class Block {
/** Remote/imported/loaded AT states */
protected List<ATStateData> atStates;
/** Remote hash of AT states - in lieu of full AT state data in {@code atStates} */
protected byte[] atStatesHash;
/** Locally-generated AT states */
protected List<ATStateData> ourAtStates;
/** Locally-generated AT fees */
@@ -192,6 +199,11 @@ public class Block {
}
public boolean hasShareBin(AccountLevelShareBin shareBin, int blockHeight) {
AccountLevelShareBin ourShareBin = this.getShareBin(blockHeight);
return ourShareBin != null && shareBin.id == ourShareBin.id;
}
public long distribute(long accountAmount, Map<String, Long> balanceChanges) {
if (this.isRecipientAlsoMinter) {
// minter & recipient the same - simpler case
@@ -216,11 +228,10 @@ public class Block {
return accountAmount;
}
}
/** Always use getExpandedAccounts() to access this, as it's lazy-instantiated. */
private List<ExpandedAccount> cachedExpandedAccounts = null;
/** Opportunistic cache of this block's valid online accounts. Only created by call to isValid(). */
private List<OnlineAccountData> cachedValidOnlineAccounts = null;
/** Opportunistic cache of this block's valid online reward-shares. Only created by call to isValid(). */
private List<RewardShareData> cachedOnlineRewardShares = null;
@@ -255,7 +266,7 @@ public class Block {
* Constructs new Block using passed transaction and AT states.
* <p>
* This constructor typically used when receiving a serialized block over the network.
*
*
* @param repository
* @param blockData
* @param transactions
@@ -281,6 +292,35 @@ public class Block {
this.blockData.setTotalFees(totalFees);
}
/**
* Constructs new Block using passed transaction and minimal AT state info.
* <p>
* This constructor typically used when receiving a serialized block over the network.
*
* @param repository
* @param blockData
* @param transactions
* @param atStatesHash
*/
public Block(Repository repository, BlockData blockData, List<TransactionData> transactions, byte[] atStatesHash) {
this(repository, blockData);
this.transactions = new ArrayList<>();
long totalFees = 0;
// We have to sum fees too
for (TransactionData transactionData : transactions) {
this.transactions.add(Transaction.fromData(repository, transactionData));
totalFees += transactionData.getFee();
}
this.atStatesHash = atStatesHash;
totalFees += this.blockData.getATFees();
this.blockData.setTotalFees(totalFees);
}
/**
* Constructs new Block with empty transaction list, using passed minter account.
*
@@ -313,18 +353,21 @@ public class Block {
int version = parentBlock.getNextBlockVersion();
byte[] reference = parentBlockData.getSignature();
// Fetch our list of online accounts
List<OnlineAccountData> onlineAccounts = OnlineAccountsManager.getInstance().getOnlineAccounts();
if (onlineAccounts.isEmpty()) {
LOGGER.error("No online accounts - not even our own?");
// Qortal: minter is always a reward-share, so find actual minter and get their effective minting level
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, minter.getPublicKey());
if (minterLevel == 0) {
LOGGER.error("Minter effective level returned zero?");
return null;
}
// Find newest online accounts timestamp
long onlineAccountsTimestamp = 0;
for (OnlineAccountData onlineAccountData : onlineAccounts) {
if (onlineAccountData.getTimestamp() > onlineAccountsTimestamp)
onlineAccountsTimestamp = onlineAccountData.getTimestamp();
long timestamp = calcTimestamp(parentBlockData, minter.getPublicKey(), minterLevel);
long onlineAccountsTimestamp = OnlineAccountsManager.getCurrentOnlineAccountTimestamp();
// Fetch our list of online accounts
List<OnlineAccountData> onlineAccounts = OnlineAccountsManager.getInstance().getOnlineAccounts(onlineAccountsTimestamp);
if (onlineAccounts.isEmpty()) {
LOGGER.error("No online accounts - not even our own?");
return null;
}
// Load sorted list of reward share public keys into memory, so that the indexes can be obtained.
@@ -335,10 +378,6 @@ public class Block {
// Map using index into sorted list of reward-shares as key
Map<Integer, OnlineAccountData> indexedOnlineAccounts = new HashMap<>();
for (OnlineAccountData onlineAccountData : onlineAccounts) {
// Disregard online accounts with different timestamps
if (onlineAccountData.getTimestamp() != onlineAccountsTimestamp)
continue;
Integer accountIndex = getRewardShareIndex(onlineAccountData.getPublicKey(), allRewardSharePublicKeys);
if (accountIndex == null)
// Online account (reward-share) with current timestamp but reward-share cancelled
@@ -355,26 +394,29 @@ public class Block {
byte[] encodedOnlineAccounts = BlockTransformer.encodeOnlineAccounts(onlineAccountsSet);
int onlineAccountsCount = onlineAccountsSet.size();
// Concatenate online account timestamp signatures (in correct order)
byte[] onlineAccountsSignatures = new byte[onlineAccountsCount * Transformer.SIGNATURE_LENGTH];
for (int i = 0; i < onlineAccountsCount; ++i) {
Integer accountIndex = accountIndexes.get(i);
OnlineAccountData onlineAccountData = indexedOnlineAccounts.get(accountIndex);
System.arraycopy(onlineAccountData.getSignature(), 0, onlineAccountsSignatures, i * Transformer.SIGNATURE_LENGTH, Transformer.SIGNATURE_LENGTH);
byte[] onlineAccountsSignatures;
if (timestamp >= BlockChain.getInstance().getAggregateSignatureTimestamp()) {
// Collate all signatures
Collection<byte[]> signaturesToAggregate = indexedOnlineAccounts.values()
.stream()
.map(OnlineAccountData::getSignature)
.collect(Collectors.toList());
// Aggregated, single signature
onlineAccountsSignatures = Qortal25519Extras.aggregateSignatures(signaturesToAggregate);
} else {
// Concatenate online account timestamp signatures (in correct order)
onlineAccountsSignatures = new byte[onlineAccountsCount * Transformer.SIGNATURE_LENGTH];
for (int i = 0; i < onlineAccountsCount; ++i) {
Integer accountIndex = accountIndexes.get(i);
OnlineAccountData onlineAccountData = indexedOnlineAccounts.get(accountIndex);
System.arraycopy(onlineAccountData.getSignature(), 0, onlineAccountsSignatures, i * Transformer.SIGNATURE_LENGTH, Transformer.SIGNATURE_LENGTH);
}
}
byte[] minterSignature = minter.sign(BlockTransformer.getBytesForMinterSignature(parentBlockData,
minter.getPublicKey(), encodedOnlineAccounts));
// Qortal: minter is always a reward-share, so find actual minter and get their effective minting level
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, minter.getPublicKey());
if (minterLevel == 0) {
LOGGER.error("Minter effective level returned zero?");
return null;
}
long timestamp = calcTimestamp(parentBlockData, minter.getPublicKey(), minterLevel);
int transactionCount = 0;
byte[] transactionsSignature = null;
int height = parentBlockData.getHeight() + 1;
@@ -979,49 +1021,59 @@ public class Block {
if (this.blockData.getOnlineAccountsSignatures() == null || this.blockData.getOnlineAccountsSignatures().length == 0)
return ValidationResult.ONLINE_ACCOUNT_SIGNATURES_MISSING;
if (this.blockData.getOnlineAccountsSignatures().length != onlineRewardShares.size() * Transformer.SIGNATURE_LENGTH)
return ValidationResult.ONLINE_ACCOUNT_SIGNATURES_MALFORMED;
if (this.blockData.getTimestamp() >= BlockChain.getInstance().getAggregateSignatureTimestamp()) {
// We expect just the one, aggregated signature
if (this.blockData.getOnlineAccountsSignatures().length != Transformer.SIGNATURE_LENGTH)
return ValidationResult.ONLINE_ACCOUNT_SIGNATURES_MALFORMED;
} else {
if (this.blockData.getOnlineAccountsSignatures().length != onlineRewardShares.size() * Transformer.SIGNATURE_LENGTH)
return ValidationResult.ONLINE_ACCOUNT_SIGNATURES_MALFORMED;
}
// Check signatures
long onlineTimestamp = this.blockData.getOnlineAccountsTimestamp();
byte[] onlineTimestampBytes = Longs.toByteArray(onlineTimestamp);
// If this block is much older than current online timestamp, then there's no point checking current online accounts
List<OnlineAccountData> currentOnlineAccounts = onlineTimestamp < NTP.getTime() - OnlineAccountsManager.ONLINE_TIMESTAMP_MODULUS
? null
: OnlineAccountsManager.getInstance().getOnlineAccounts();
List<OnlineAccountData> latestBlocksOnlineAccounts = OnlineAccountsManager.getInstance().getLatestBlocksOnlineAccounts();
// Extract online accounts' timestamp signatures from block data
// Extract online accounts' timestamp signatures from block data. Only one signature if aggregated.
List<byte[]> onlineAccountsSignatures = BlockTransformer.decodeTimestampSignatures(this.blockData.getOnlineAccountsSignatures());
// We'll build up a list of online accounts to hand over to Controller if block is added to chain
// and this will become latestBlocksOnlineAccounts (above) to reduce CPU load when we process next block...
List<OnlineAccountData> ourOnlineAccounts = new ArrayList<>();
if (this.blockData.getTimestamp() >= BlockChain.getInstance().getAggregateSignatureTimestamp()) {
// Aggregate all public keys
Collection<byte[]> publicKeys = onlineRewardShares.stream()
.map(RewardShareData::getRewardSharePublicKey)
.collect(Collectors.toList());
for (int i = 0; i < onlineAccountsSignatures.size(); ++i) {
byte[] signature = onlineAccountsSignatures.get(i);
byte[] publicKey = onlineRewardShares.get(i).getRewardSharePublicKey();
byte[] aggregatePublicKey = Qortal25519Extras.aggregatePublicKeys(publicKeys);
OnlineAccountData onlineAccountData = new OnlineAccountData(onlineTimestamp, signature, publicKey);
ourOnlineAccounts.add(onlineAccountData);
byte[] aggregateSignature = onlineAccountsSignatures.get(0);
// If signature is still current then no need to perform Ed25519 verify
if (currentOnlineAccounts != null && currentOnlineAccounts.remove(onlineAccountData))
// remove() returned true, so online account still current
// and one less entry in currentOnlineAccounts to check next time
continue;
// If signature was okay in latest block then no need to perform Ed25519 verify
if (latestBlocksOnlineAccounts != null && latestBlocksOnlineAccounts.contains(onlineAccountData))
continue;
if (!Crypto.verify(publicKey, signature, onlineTimestampBytes))
// One-step verification of aggregate signature using aggregate public key
if (!Qortal25519Extras.verifyAggregated(aggregatePublicKey, aggregateSignature, onlineTimestampBytes))
return ValidationResult.ONLINE_ACCOUNT_SIGNATURE_INCORRECT;
} else {
// Build block's view of online accounts
Set<OnlineAccountData> onlineAccounts = new HashSet<>();
for (int i = 0; i < onlineAccountsSignatures.size(); ++i) {
byte[] signature = onlineAccountsSignatures.get(i);
byte[] publicKey = onlineRewardShares.get(i).getRewardSharePublicKey();
OnlineAccountData onlineAccountData = new OnlineAccountData(onlineTimestamp, signature, publicKey);
onlineAccounts.add(onlineAccountData);
}
// Remove those already validated & cached by online accounts manager - no need to re-validate them
OnlineAccountsManager.getInstance().removeKnown(onlineAccounts, onlineTimestamp);
// Validate the rest
for (OnlineAccountData onlineAccount : onlineAccounts)
if (!Crypto.verify(onlineAccount.getPublicKey(), onlineAccount.getSignature(), onlineTimestampBytes))
return ValidationResult.ONLINE_ACCOUNT_SIGNATURE_INCORRECT;
// We've validated these, so allow online accounts manager to cache
OnlineAccountsManager.getInstance().addBlocksOnlineAccounts(onlineAccounts, onlineTimestamp);
}
// All online accounts valid, so save our list of online accounts for potential later use
this.cachedValidOnlineAccounts = ourOnlineAccounts;
this.cachedOnlineRewardShares = onlineRewardShares;
return ValidationResult.OK;
@@ -1194,7 +1246,7 @@ public class Block {
*/
private ValidationResult areAtsValid() throws DataException {
// Locally generated AT states should be valid so no need to re-execute them
if (this.ourAtStates == this.getATStates()) // Note object reference compare
if (this.ourAtStates != null && this.ourAtStates == this.atStates) // Note object reference compare
return ValidationResult.OK;
// Generate local AT states for comparison
@@ -1208,8 +1260,33 @@ public class Block {
if (this.ourAtFees != this.blockData.getATFees())
return ValidationResult.AT_STATES_MISMATCH;
// Note: this.atStates fully loaded thanks to this.getATStates() call above
for (int s = 0; s < this.atStates.size(); ++s) {
// If we have a single AT states hash then compare that in preference
if (this.atStatesHash != null) {
int atBytesLength = blockData.getATCount() * BlockTransformer.AT_ENTRY_LENGTH;
ByteArrayOutputStream atHashBytes = new ByteArrayOutputStream(atBytesLength);
try {
for (ATStateData atStateData : this.ourAtStates) {
atHashBytes.write(atStateData.getATAddress().getBytes(StandardCharsets.UTF_8));
atHashBytes.write(atStateData.getStateHash());
atHashBytes.write(Longs.toByteArray(atStateData.getFees()));
}
} catch (IOException e) {
throw new DataException("Couldn't validate AT states hash due to serialization issue?", e);
}
byte[] ourAtStatesHash = Crypto.digest(atHashBytes.toByteArray());
if (!Arrays.equals(ourAtStatesHash, this.atStatesHash))
return ValidationResult.AT_STATES_MISMATCH;
// Use our AT state data from now on
this.atStates = this.ourAtStates;
return ValidationResult.OK;
}
// Note: this.atStates fully loaded thanks to this.getATStates() call:
this.getATStates();
for (int s = 0; s < this.ourAtStates.size(); ++s) {
ATStateData ourAtState = this.ourAtStates.get(s);
ATStateData theirAtState = this.atStates.get(s);
@@ -1367,9 +1444,6 @@ public class Block {
postBlockTidy();
// Give Controller our cached, valid online accounts data (if any) to help reduce CPU load for next block
OnlineAccountsManager.getInstance().pushLatestBlocksOnlineAccounts(this.cachedValidOnlineAccounts);
// Log some debugging info relating to the block weight calculation
this.logDebugInfo();
}
@@ -1585,9 +1659,6 @@ public class Block {
this.blockData.setHeight(null);
postBlockTidy();
// Remove any cached, valid online accounts data from Controller
OnlineAccountsManager.getInstance().popLatestBlocksOnlineAccounts();
}
protected void orphanTransactionsFromBlock() throws DataException {
@@ -1825,12 +1896,67 @@ public class Block {
final boolean haveFounders = !onlineFounderAccounts.isEmpty();
// Determine reward candidates based on account level
List<AccountLevelShareBin> accountLevelShareBins = BlockChain.getInstance().getAccountLevelShareBins();
for (int binIndex = 0; binIndex < accountLevelShareBins.size(); ++binIndex) {
// Find all accounts in share bin. getShareBin() returns null for minter accounts that are also founders, so they are effectively filtered out.
// This needs a deep copy, so the shares can be modified when tiers aren't activated yet
List<AccountLevelShareBin> accountLevelShareBins = new ArrayList<>();
for (AccountLevelShareBin accountLevelShareBin : BlockChain.getInstance().getAccountLevelShareBins()) {
accountLevelShareBins.add((AccountLevelShareBin) accountLevelShareBin.clone());
}
Map<Integer, List<ExpandedAccount>> accountsForShareBin = new HashMap<>();
// We might need to combine some share bins if they haven't reached the minimum number of minters yet
for (int binIndex = accountLevelShareBins.size()-1; binIndex >= 0; --binIndex) {
AccountLevelShareBin accountLevelShareBin = accountLevelShareBins.get(binIndex);
// Object reference compare is OK as all references are read-only from blockchain config.
List<ExpandedAccount> binnedAccounts = expandedAccounts.stream().filter(accountInfo -> accountInfo.getShareBin(this.blockData.getHeight()) == accountLevelShareBin).collect(Collectors.toList());
// Find all accounts in share bin. getShareBin() returns null for minter accounts that are also founders, so they are effectively filtered out.
List<ExpandedAccount> binnedAccounts = expandedAccounts.stream().filter(accountInfo -> accountInfo.hasShareBin(accountLevelShareBin, this.blockData.getHeight())).collect(Collectors.toList());
// Add any accounts that have been moved down from a higher tier
List<ExpandedAccount> existingBinnedAccounts = accountsForShareBin.get(binIndex);
if (existingBinnedAccounts != null)
binnedAccounts.addAll(existingBinnedAccounts);
// Logic below may only apply to higher levels, and only for share bins with a specific range of online accounts
if (accountLevelShareBin.levels.get(0) < BlockChain.getInstance().getShareBinActivationMinLevel() ||
binnedAccounts.isEmpty() || binnedAccounts.size() >= BlockChain.getInstance().getMinAccountsToActivateShareBin()) {
// Add all accounts for this share bin to the accountsForShareBin list
accountsForShareBin.put(binIndex, binnedAccounts);
continue;
}
// Share bin contains at least one minter, but fewer than the minimum number. We treat this share bin
// as not activated yet. In these cases, the rewards and minters are combined and paid out to the previous
// share bin, to prevent a single or handful of accounts receiving the entire rewards for a share bin.
//
// Example:
//
// - Share bin for levels 5 and 6 has 100 minters
// - Share bin for levels 7 and 8 has 10 minters
//
// This is below the minimum of 30, so share bins are reconstructed as follows:
//
// - Share bin for levels 5 and 6 now contains 110 minters
// - Share bin for levels 7 and 8 now contains 0 minters
// - Share bin for levels 5 and 6 now pays out rewards for levels 5, 6, 7, and 8
// - Share bin for levels 7 and 8 pays zero rewards
//
// This process is iterative, so it will combine several tiers if needed.
// Designate this share bin as empty
accountsForShareBin.put(binIndex, new ArrayList<>());
// Move the accounts originally intended for this share bin to the previous one
accountsForShareBin.put(binIndex - 1, binnedAccounts);
// Move the block reward from this share bin to the previous one
AccountLevelShareBin previousShareBin = accountLevelShareBins.get(binIndex - 1);
previousShareBin.share += accountLevelShareBin.share;
accountLevelShareBin.share = 0L;
}
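The worked example in the comment above boils down to moving an under-populated bin's minters and share into the previous bin. A standalone sketch of that arithmetic, using simplified stand-in types rather than the real Block/AccountLevelShareBin classes, and an assumed activation threshold of 30:

import java.util.ArrayList;
import java.util.List;

public class ShareBinCombineSketch {
    static class Bin {
        String name;
        long share;                                  // share units are illustrative
        List<String> minters = new ArrayList<>();
        Bin(String name, long share) { this.name = name; this.share = share; }
    }

    public static void main(String[] args) {
        int minAccountsToActivate = 30;              // assumed value for illustration

        Bin levels5and6 = new Bin("levels 5-6", 20_000_000L);
        Bin levels7and8 = new Bin("levels 7-8", 20_000_000L);
        for (int i = 0; i < 100; i++) levels5and6.minters.add("L5/6 minter " + i);
        for (int i = 0; i < 10; i++) levels7and8.minters.add("L7/8 minter " + i);

        // Higher bin is populated but below the activation threshold,
        // so its minters and share are folded into the previous bin
        if (!levels7and8.minters.isEmpty() && levels7and8.minters.size() < minAccountsToActivate) {
            levels5and6.minters.addAll(levels7and8.minters);
            levels5and6.share += levels7and8.share;
            levels7and8.minters.clear();
            levels7and8.share = 0L;
        }

        System.out.printf("%s: %d minters, share %d%n", levels5and6.name, levels5and6.minters.size(), levels5and6.share);
        System.out.printf("%s: %d minters, share %d%n", levels7and8.name, levels7and8.minters.size(), levels7and8.share);
        // Expected: levels 5-6 -> 110 minters with the combined share; levels 7-8 -> 0 minters, share 0
    }
}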
// Now loop through (potentially modified) share bins and determine the reward candidates
for (int binIndex = 0; binIndex < accountLevelShareBins.size(); ++binIndex) {
AccountLevelShareBin accountLevelShareBin = accountLevelShareBins.get(binIndex);
List<ExpandedAccount> binnedAccounts = accountsForShareBin.get(binIndex);
// No online accounts in this bin? Skip to next one
if (binnedAccounts.isEmpty())

View File

@@ -68,9 +68,12 @@ public class BlockChain {
atFindNextTransactionFix,
newBlockSigHeight,
shareBinFix,
rewardShareLimitTimestamp,
calcChainWeightTimestamp,
transactionV5Timestamp,
transactionV6Timestamp;
transactionV6Timestamp,
disableReferenceTimestamp,
aggregateSignatureTimestamp;
}
// Custom transaction fees
@@ -101,10 +104,23 @@ public class BlockChain {
private List<RewardByHeight> rewardsByHeight;
/** Share of block reward/fees by account level */
public static class AccountLevelShareBin {
public static class AccountLevelShareBin implements Cloneable {
public int id;
public List<Integer> levels;
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
public long share;
public Object clone() {
AccountLevelShareBin shareBinCopy = new AccountLevelShareBin();
List<Integer> levelsCopy = new ArrayList<>();
for (Integer level : this.levels) {
levelsCopy.add(level);
}
shareBinCopy.id = this.id;
shareBinCopy.levels = levelsCopy;
shareBinCopy.share = this.share;
return shareBinCopy;
}
}
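The clone() above exists so the combination step can adjust share values on a working copy; without it, the single cached config object would be mutated and the change would leak into every later block. A minimal sketch of the idea with a simplified stand-in class:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ShareBinCloneSketch {
    static class Bin implements Cloneable {
        long share;
        List<Integer> levels;
        Bin(long share, List<Integer> levels) { this.share = share; this.levels = levels; }
        @Override
        public Bin clone() {
            // Deep-copy the levels list so the copy shares nothing with the original
            return new Bin(this.share, new ArrayList<>(this.levels));
        }
    }

    public static void main(String[] args) {
        Bin configBin = new Bin(20_000_000L, Arrays.asList(5, 6)); // stands in for the cached config entry

        Bin workingCopy = configBin.clone();
        workingCopy.share += 20_000_000L; // fold a deactivated tier's share into the working copy

        System.out.println(configBin.share);   // 20000000 - cached config untouched
        System.out.println(workingCopy.share); // 40000000
    }
}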
private List<AccountLevelShareBin> sharesByLevel;
/** Generated lookup of share-bin by account level */
@@ -118,6 +134,12 @@ public class BlockChain {
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
private Long qoraPerQortReward;
/** Minimum number of accounts before a share bin is considered activated */
private int minAccountsToActivateShareBin;
/** Min level at which share bin activation takes place; lower levels allow fewer than minAccountsToActivateShareBin */
private int shareBinActivationMinLevel;
/**
* Number of minted blocks required to reach next level from previous.
* <p>
@@ -155,7 +177,7 @@ public class BlockChain {
private int minAccountLevelToMint;
private int minAccountLevelForBlockSubmissions;
private int minAccountLevelToRewardShare;
private int maxRewardSharesPerMintingAccount;
private int maxRewardSharesPerFounderMintingAccount;
private int founderEffectiveMintingLevel;
/** Minimum time to retain online account signatures (ms) for block validity checks. */
@@ -163,6 +185,17 @@ public class BlockChain {
/** Maximum time to retain online account signatures (ms) for block validity checks, to allow for clock variance. */
private long onlineAccountSignaturesMaxLifetime;
/** Feature trigger timestamp for ONLINE_ACCOUNTS_MODULUS time interval increase. Can't use
* featureTriggers because unit tests need to set this value via Reflection. */
private long onlineAccountsModulusV2Timestamp;
/** Max reward shares by timestamp */
public static class MaxRewardSharesByTimestamp {
public long timestamp;
public int maxShares;
}
private List<MaxRewardSharesByTimestamp> maxRewardSharesByTimestamp;
/** Settings relating to CIYAM AT feature. */
public static class CiyamAtSettings {
/** Fee per step/op-code executed. */
@@ -311,6 +344,11 @@ public class BlockChain {
return this.maxBlockSize;
}
// Online accounts
public long getOnlineAccountsModulusV2Timestamp() {
return this.onlineAccountsModulusV2Timestamp;
}
/** Returns true if approval-needing transaction types require a txGroupId other than NO_GROUP. */
public boolean getRequireGroupForApproval() {
return this.requireGroupForApproval;
@@ -352,6 +390,14 @@ public class BlockChain {
return this.qoraPerQortReward;
}
public int getMinAccountsToActivateShareBin() {
return this.minAccountsToActivateShareBin;
}
public int getShareBinActivationMinLevel() {
return this.shareBinActivationMinLevel;
}
public int getMinAccountLevelToMint() {
return this.minAccountLevelToMint;
}
@@ -364,8 +410,8 @@ public class BlockChain {
return this.minAccountLevelToRewardShare;
}
public int getMaxRewardSharesPerMintingAccount() {
return this.maxRewardSharesPerMintingAccount;
public int getMaxRewardSharesPerFounderMintingAccount() {
return this.maxRewardSharesPerFounderMintingAccount;
}
public int getFounderEffectiveMintingLevel() {
@@ -398,6 +444,10 @@ public class BlockChain {
return this.featureTriggers.get(FeatureTrigger.shareBinFix.name()).intValue();
}
public long getRewardShareLimitTimestamp() {
return this.featureTriggers.get(FeatureTrigger.rewardShareLimitTimestamp.name()).longValue();
}
public long getCalcChainWeightTimestamp() {
return this.featureTriggers.get(FeatureTrigger.calcChainWeightTimestamp.name()).longValue();
}
@@ -410,6 +460,14 @@ public class BlockChain {
return this.featureTriggers.get(FeatureTrigger.transactionV6Timestamp.name()).longValue();
}
public long getDisableReferenceTimestamp() {
return this.featureTriggers.get(FeatureTrigger.disableReferenceTimestamp.name()).longValue();
}
public long getAggregateSignatureTimestamp() {
return this.featureTriggers.get(FeatureTrigger.aggregateSignatureTimestamp.name()).longValue();
}
// More complex getters for aspects that change by height or timestamp
public long getRewardAtHeight(int ourHeight) {
@@ -438,6 +496,14 @@ public class BlockChain {
return this.getUnitFee();
}
public int getMaxRewardSharesAtTimestamp(long ourTimestamp) {
for (int i = maxRewardSharesByTimestamp.size() - 1; i >= 0; --i)
if (maxRewardSharesByTimestamp.get(i).timestamp <= ourTimestamp)
return maxRewardSharesByTimestamp.get(i).maxShares;
return 0;
}
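getMaxRewardSharesAtTimestamp() walks the configured list newest-first and returns the first entry at or before the given timestamp. A small standalone sketch with assumed sample values (not the real blockchain config):

import java.util.ArrayList;
import java.util.List;

public class MaxSharesLookupSketch {
    static class Entry {
        final long timestamp;
        final int maxShares;
        Entry(long timestamp, int maxShares) { this.timestamp = timestamp; this.maxShares = maxShares; }
    }

    static int maxSharesAt(List<Entry> entries, long ourTimestamp) {
        // Walk newest-first; the first entry at or before ourTimestamp wins
        for (int i = entries.size() - 1; i >= 0; --i)
            if (entries.get(i).timestamp <= ourTimestamp)
                return entries.get(i).maxShares;
        return 0;
    }

    public static void main(String[] args) {
        List<Entry> entries = new ArrayList<>();
        entries.add(new Entry(0L, 6));                 // from genesis: up to 6 reward-shares (assumed)
        entries.add(new Entry(1_657_382_400_000L, 3)); // assumed later cut-over: up to 3
        System.out.println(maxSharesAt(entries, 1_657_000_000_000L)); // 6
        System.out.println(maxSharesAt(entries, 1_700_000_000_000L)); // 3
    }
}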
/** Validate blockchain config read from JSON */
private void validateConfig() {
if (this.genesisInfo == null)

View File

@@ -15,6 +15,7 @@ import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeoutException;
import java.util.stream.Collectors;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -40,6 +41,7 @@ public class AutoUpdate extends Thread {
public static final String JAR_FILENAME = "qortal.jar";
public static final String NEW_JAR_FILENAME = "new-" + JAR_FILENAME;
public static final String AGENTLIB_JVM_HOLDER_ARG = "-DQORTAL_agentlib=";
private static final Logger LOGGER = LogManager.getLogger(AutoUpdate.class);
private static final long CHECK_INTERVAL = 20 * 60 * 1000L; // ms
@@ -243,6 +245,11 @@ public class AutoUpdate extends Thread {
// JVM arguments
javaCmd.addAll(ManagementFactory.getRuntimeMXBean().getInputArguments());
// Disable, but retain, any -agentlib JVM arg as sub-process might fail if it tries to reuse same port
javaCmd = javaCmd.stream()
.map(arg -> arg.replace("-agentlib", AGENTLIB_JVM_HOLDER_ARG))
.collect(Collectors.toList());
// Remove JNI options as they won't be supported by command-line 'java'
// These are typically added by the AdvancedInstaller Java launcher EXE
javaCmd.removeAll(Arrays.asList("abort", "exit", "vfprintf"));
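The stream above "parks" any -agentlib argument by prefixing it, so the restarted JVM doesn't try to rebind the same debugger port while the original value stays visible in the new process's system properties. A tiny standalone sketch with an illustrative argument list:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class AgentlibRewriteSketch {
    public static void main(String[] args) {
        final String HOLDER = "-DQORTAL_agentlib=";
        List<String> javaCmd = Arrays.asList("-Xmx2g", "-agentlib:jdwp=transport=dt_socket,address=5005");

        // Rewrite "-agentlib..." into an inert system property that preserves the original text
        javaCmd = javaCmd.stream()
                .map(arg -> arg.replace("-agentlib", HOLDER))
                .collect(Collectors.toList());

        System.out.println(javaCmd);
        // [-Xmx2g, -DQORTAL_agentlib=:jdwp=transport=dt_socket,address=5005]
    }
}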
@@ -261,10 +268,19 @@ public class AutoUpdate extends Thread {
Translator.INSTANCE.translate("SysTray", "APPLYING_UPDATE_AND_RESTARTING"),
MessageType.INFO);
new ProcessBuilder(javaCmd).start();
ProcessBuilder processBuilder = new ProcessBuilder(javaCmd);
// New process will inherit our stdout and stderr
processBuilder.redirectOutput(ProcessBuilder.Redirect.INHERIT);
processBuilder.redirectError(ProcessBuilder.Redirect.INHERIT);
Process process = processBuilder.start();
// Nothing to pipe to new process, so close output stream (process's stdin)
process.getOutputStream().close();
return true; // applying update OK
} catch (IOException e) {
} catch (Exception e) {
LOGGER.error(String.format("Failed to apply update: %s", e.getMessage()));
try {

View File

@@ -65,9 +65,8 @@ public class BlockMinter extends Thread {
// Lite nodes do not mint
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
if (Settings.getInstance().getWipeUnconfirmedOnStart()) {
if (Settings.getInstance().getWipeUnconfirmedOnStart()) {
try (final Repository repository = RepositoryManager.getRepository()) {
// Wipe existing unconfirmed transactions
List<TransactionData> unconfirmedTransactions = repository.getTransactionRepository().getUnconfirmedTransactions();
@@ -77,356 +76,375 @@ public class BlockMinter extends Thread {
}
repository.saveChanges();
} catch (DataException e) {
LOGGER.warn("Repository issue trying to wipe unconfirmed transactions on start-up: {}", e.getMessage());
// Fall-through to normal behaviour in case we can recover
}
}
BlockData previousBlockData = null;
// Vars to keep track of blocks that were skipped due to chain weight
byte[] parentSignatureForLastLowWeightBlock = null;
Long timeOfLastLowWeightBlock = null;
List<Block> newBlocks = new ArrayList<>();
try (final Repository repository = RepositoryManager.getRepository()) {
// Going to need this a lot...
BlockRepository blockRepository = repository.getBlockRepository();
BlockData previousBlockData = null;
// Vars to keep track of blocks that were skipped due to chain weight
byte[] parentSignatureForLastLowWeightBlock = null;
Long timeOfLastLowWeightBlock = null;
List<Block> newBlocks = new ArrayList<>();
// Flags for tracking change in whether minting is possible,
// so we can notify Controller, and further update SysTray, etc.
boolean isMintingPossible = false;
boolean wasMintingPossible = isMintingPossible;
while (running) {
repository.discardChanges(); // Free repository locks, if any
if (isMintingPossible != wasMintingPossible)
Controller.getInstance().onMintingPossibleChange(isMintingPossible);
wasMintingPossible = isMintingPossible;
// Sleep for a while
Thread.sleep(1000);
isMintingPossible = false;
final Long now = NTP.getTime();
if (now == null)
continue;
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null)
continue;
// No online accounts? (e.g. during startup)
if (OnlineAccountsManager.getInstance().getOnlineAccounts().isEmpty())
continue;
List<MintingAccountData> mintingAccountsData = repository.getAccountRepository().getMintingAccounts();
// No minting accounts?
if (mintingAccountsData.isEmpty())
continue;
// Disregard minting accounts that are no longer valid, e.g. by transfer/loss of founder flag or account level
// Note that minting accounts are actually reward-shares in Qortal
Iterator<MintingAccountData> madi = mintingAccountsData.iterator();
while (madi.hasNext()) {
MintingAccountData mintingAccountData = madi.next();
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(mintingAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't exist - probably cancelled but not yet removed from node's list of minting accounts
madi.remove();
continue;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
madi.remove();
continue;
}
// Optional (non-validated) prevention of block submissions below a defined level.
// This is an unvalidated version of BlockChain.minAccountLevelToMint
// and exists only to reduce block candidates by default.
int level = mintingAccount.getEffectiveMintingLevel();
if (level < BlockChain.getInstance().getMinAccountLevelForBlockSubmissions()) {
madi.remove();
continue;
}
}
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
BlockData lastBlockData = blockRepository.getLastBlock();
// Disregard peers that have "misbehaved" recently
peers.removeIf(Controller.hasMisbehaved);
// Disregard peers that don't have a recent block, but only if we're not in recovery mode.
// In that mode, we want to allow minting on top of older blocks, to recover stalled networks.
if (Synchronizer.getInstance().getRecoveryMode() == false)
peers.removeIf(Controller.hasNoRecentBlock);
// Don't mint if we don't have enough up-to-date peers as where would the transactions/consensus come from?
if (peers.size() < Settings.getInstance().getMinBlockchainPeers())
continue;
// If we are stuck on an invalid block, we should allow an alternative to be minted
boolean recoverInvalidBlock = false;
if (Synchronizer.getInstance().timeInvalidBlockLastReceived != null) {
// We've had at least one invalid block
long timeSinceLastValidBlock = NTP.getTime() - Synchronizer.getInstance().timeValidBlockLastReceived;
long timeSinceLastInvalidBlock = NTP.getTime() - Synchronizer.getInstance().timeInvalidBlockLastReceived;
if (timeSinceLastValidBlock > INVALID_BLOCK_RECOVERY_TIMEOUT) {
if (timeSinceLastInvalidBlock < INVALID_BLOCK_RECOVERY_TIMEOUT) {
// Last valid block was more than 10 mins ago, but we've had an invalid block since then
// Assume that the chain has stalled because there is no alternative valid candidate
// Enter recovery mode to allow alternative, valid candidates to be minted
recoverInvalidBlock = true;
}
}
}
// If our latest block isn't recent then we need to synchronize instead of minting, unless we're in recovery mode.
if (!peers.isEmpty() && lastBlockData.getTimestamp() < minLatestBlockTimestamp)
if (Synchronizer.getInstance().getRecoveryMode() == false && recoverInvalidBlock == false)
continue;
// There are enough peers with a recent block and our latest block is recent
// so go ahead and mint a block if possible.
isMintingPossible = true;
// Check blockchain hasn't changed
if (previousBlockData == null || !Arrays.equals(previousBlockData.getSignature(), lastBlockData.getSignature())) {
previousBlockData = lastBlockData;
newBlocks.clear();
// Reduce log timeout
logTimeout = 10 * 1000L;
// Last low weight block is no longer valid
parentSignatureForLastLowWeightBlock = null;
}
// Discard accounts we have already built blocks with
mintingAccountsData.removeIf(mintingAccountData -> newBlocks.stream().anyMatch(newBlock -> Arrays.equals(newBlock.getBlockData().getMinterPublicKey(), mintingAccountData.getPublicKey())));
// Do we need to build any potential new blocks?
List<PrivateKeyAccount> newBlocksMintingAccounts = mintingAccountsData.stream().map(accountData -> new PrivateKeyAccount(repository, accountData.getPrivateKey())).collect(Collectors.toList());
// We might need to sit the next block out, if one of our minting accounts signed the previous one
final byte[] previousBlockMinter = previousBlockData.getMinterPublicKey();
final boolean mintedLastBlock = mintingAccountsData.stream().anyMatch(mintingAccount -> Arrays.equals(mintingAccount.getPublicKey(), previousBlockMinter));
if (mintedLastBlock) {
LOGGER.trace(String.format("One of our keys signed the last block, so we won't sign the next one"));
continue;
}
if (parentSignatureForLastLowWeightBlock != null) {
// The last iteration found a higher weight block in the network, so sleep for a while
// to allow us to sync the higher weight chain. We are sleeping here rather than when
// detected as we don't want to hold the blockchain lock open.
LOGGER.debug("Sleeping for 10 seconds...");
Thread.sleep(10 * 1000L);
}
for (PrivateKeyAccount mintingAccount : newBlocksMintingAccounts) {
// First block does the AT heavy-lifting
if (newBlocks.isEmpty()) {
Block newBlock = Block.mint(repository, previousBlockData, mintingAccount);
if (newBlock == null) {
// For some reason we can't mint right now
moderatedLog(() -> LOGGER.error("Couldn't build a to-be-minted block"));
continue;
}
newBlocks.add(newBlock);
} else {
// The blocks for other minters require less effort...
Block newBlock = newBlocks.get(0).remint(mintingAccount);
if (newBlock == null) {
// For some reason we can't mint right now
moderatedLog(() -> LOGGER.error("Couldn't rebuild a to-be-minted block"));
continue;
}
newBlocks.add(newBlock);
}
}
// No potential block candidates?
if (newBlocks.isEmpty())
continue;
// Make sure we're the only thread modifying the blockchain
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
if (!blockchainLock.tryLock(30, TimeUnit.SECONDS)) {
LOGGER.debug("Couldn't acquire blockchain lock even after waiting 30 seconds");
continue;
}
boolean newBlockMinted = false;
Block newBlock = null;
try {
// Clear repository session state so we have latest view of data
// Free up any repository locks
repository.discardChanges();
// Now that we have blockchain lock, do final check that chain hasn't changed
BlockData latestBlockData = blockRepository.getLastBlock();
if (!Arrays.equals(lastBlockData.getSignature(), latestBlockData.getSignature()))
// Sleep for a while
Thread.sleep(1000);
isMintingPossible = false;
final Long now = NTP.getTime();
if (now == null)
continue;
List<Block> goodBlocks = new ArrayList<>();
for (Block testBlock : newBlocks) {
// Is new block's timestamp valid yet?
// We do a separate check as some timestamp checks are skipped for testchains
if (testBlock.isTimestampValid() != ValidationResult.OK)
continue;
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null)
continue;
testBlock.preProcess();
// No online accounts for current timestamp? (e.g. during startup)
if (!OnlineAccountsManager.getInstance().hasOnlineAccounts())
continue;
// Is new block valid yet? (Before adding unconfirmed transactions)
ValidationResult result = testBlock.isValid();
if (result != ValidationResult.OK) {
moderatedLog(() -> LOGGER.error(String.format("To-be-minted block invalid '%s' before adding transactions?", result.name())));
List<MintingAccountData> mintingAccountsData = repository.getAccountRepository().getMintingAccounts();
// No minting accounts?
if (mintingAccountsData.isEmpty())
continue;
// Disregard minting accounts that are no longer valid, e.g. by transfer/loss of founder flag or account level
// Note that minting accounts are actually reward-shares in Qortal
Iterator<MintingAccountData> madi = mintingAccountsData.iterator();
while (madi.hasNext()) {
MintingAccountData mintingAccountData = madi.next();
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(mintingAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't exist - probably cancelled but not yet removed from node's list of minting accounts
madi.remove();
continue;
}
goodBlocks.add(testBlock);
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
madi.remove();
continue;
}
if (goodBlocks.isEmpty())
continue;
// Pick best block
final int parentHeight = previousBlockData.getHeight();
final byte[] parentBlockSignature = previousBlockData.getSignature();
BigInteger bestWeight = null;
for (int bi = 0; bi < goodBlocks.size(); ++bi) {
BlockData blockData = goodBlocks.get(bi).getBlockData();
BlockSummaryData blockSummaryData = new BlockSummaryData(blockData);
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, blockData.getMinterPublicKey());
blockSummaryData.setMinterLevel(minterLevel);
BigInteger blockWeight = Block.calcBlockWeight(parentHeight, parentBlockSignature, blockSummaryData);
if (bestWeight == null || blockWeight.compareTo(bestWeight) < 0) {
newBlock = goodBlocks.get(bi);
bestWeight = blockWeight;
// Optional (non-validated) prevention of block submissions below a defined level.
// This is an unvalidated version of BlockChain.minAccountLevelToMint
// and exists only to reduce block candidates by default.
int level = mintingAccount.getEffectiveMintingLevel();
if (level < BlockChain.getInstance().getMinAccountLevelForBlockSubmissions()) {
madi.remove();
continue;
}
}
try {
if (this.higherWeightChainExists(repository, bestWeight)) {
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
BlockData lastBlockData = blockRepository.getLastBlock();
// Check if the base block has updated since the last time we were here
if (parentSignatureForLastLowWeightBlock == null || timeOfLastLowWeightBlock == null ||
!Arrays.equals(parentSignatureForLastLowWeightBlock, previousBlockData.getSignature())) {
// We've switched to a different chain, so reset the timer
timeOfLastLowWeightBlock = NTP.getTime();
// Disregard peers that have "misbehaved" recently
peers.removeIf(Controller.hasMisbehaved);
// Disregard peers that don't have a recent block, but only if we're not in recovery mode.
// In that mode, we want to allow minting on top of older blocks, to recover stalled networks.
if (Synchronizer.getInstance().getRecoveryMode() == false)
peers.removeIf(Controller.hasNoRecentBlock);
// Don't mint if we don't have enough up-to-date peers as where would the transactions/consensus come from?
if (peers.size() < Settings.getInstance().getMinBlockchainPeers())
continue;
// If we are stuck on an invalid block, we should allow an alternative to be minted
boolean recoverInvalidBlock = false;
if (Synchronizer.getInstance().timeInvalidBlockLastReceived != null) {
// We've had at least one invalid block
long timeSinceLastValidBlock = NTP.getTime() - Synchronizer.getInstance().timeValidBlockLastReceived;
long timeSinceLastInvalidBlock = NTP.getTime() - Synchronizer.getInstance().timeInvalidBlockLastReceived;
if (timeSinceLastValidBlock > INVALID_BLOCK_RECOVERY_TIMEOUT) {
if (timeSinceLastInvalidBlock < INVALID_BLOCK_RECOVERY_TIMEOUT) {
// Last valid block was more than 10 mins ago, but we've had an invalid block since then
// Assume that the chain has stalled because there is no alternative valid candidate
// Enter recovery mode to allow alternative, valid candidates to be minted
recoverInvalidBlock = true;
}
parentSignatureForLastLowWeightBlock = previousBlockData.getSignature();
}
}
// If less than 30 seconds has passed since first detecting the higher weight chain,
// we should skip our block submission to give us the opportunity to sync to the better chain
if (NTP.getTime() - timeOfLastLowWeightBlock < 30*1000L) {
LOGGER.debug("Higher weight chain found in peers, so not signing a block this round");
LOGGER.debug("Time since detected: {}ms", NTP.getTime() - timeOfLastLowWeightBlock);
// If our latest block isn't recent then we need to synchronize instead of minting, unless we're in recovery mode.
if (!peers.isEmpty() && lastBlockData.getTimestamp() < minLatestBlockTimestamp)
if (Synchronizer.getInstance().getRecoveryMode() == false && recoverInvalidBlock == false)
continue;
// There are enough peers with a recent block and our latest block is recent
// so go ahead and mint a block if possible.
isMintingPossible = true;
// Check blockchain hasn't changed
if (previousBlockData == null || !Arrays.equals(previousBlockData.getSignature(), lastBlockData.getSignature())) {
previousBlockData = lastBlockData;
newBlocks.clear();
// Reduce log timeout
logTimeout = 10 * 1000L;
// Last low weight block is no longer valid
parentSignatureForLastLowWeightBlock = null;
}
// Discard accounts we have already built blocks with
mintingAccountsData.removeIf(mintingAccountData -> newBlocks.stream().anyMatch(newBlock -> Arrays.equals(newBlock.getBlockData().getMinterPublicKey(), mintingAccountData.getPublicKey())));
// Do we need to build any potential new blocks?
List<PrivateKeyAccount> newBlocksMintingAccounts = mintingAccountsData.stream().map(accountData -> new PrivateKeyAccount(repository, accountData.getPrivateKey())).collect(Collectors.toList());
// We might need to sit the next block out, if one of our minting accounts signed the previous one
byte[] previousBlockMinter = previousBlockData.getMinterPublicKey();
boolean mintedLastBlock = mintingAccountsData.stream().anyMatch(mintingAccount -> Arrays.equals(mintingAccount.getPublicKey(), previousBlockMinter));
if (mintedLastBlock) {
LOGGER.trace(String.format("One of our keys signed the last block, so we won't sign the next one"));
continue;
}
if (parentSignatureForLastLowWeightBlock != null) {
// The last iteration found a higher weight block in the network, so sleep for a while
// to allow us to sync the higher weight chain. We are sleeping here rather than when
// detected as we don't want to hold the blockchain lock open.
LOGGER.info("Sleeping for 10 seconds...");
Thread.sleep(10 * 1000L);
}
for (PrivateKeyAccount mintingAccount : newBlocksMintingAccounts) {
// First block does the AT heavy-lifting
if (newBlocks.isEmpty()) {
Block newBlock = Block.mint(repository, previousBlockData, mintingAccount);
if (newBlock == null) {
// For some reason we can't mint right now
moderatedLog(() -> LOGGER.error("Couldn't build a to-be-minted block"));
continue;
}
else {
// More than 30 seconds have passed, so we should submit our block candidate anyway.
LOGGER.debug("More than 30 seconds passed, so proceeding to submit block candidate...");
newBlocks.add(newBlock);
} else {
// The blocks for other minters require less effort...
Block newBlock = newBlocks.get(0).remint(mintingAccount);
if (newBlock == null) {
// For some reason we can't mint right now
moderatedLog(() -> LOGGER.error("Couldn't rebuild a to-be-minted block"));
continue;
}
newBlocks.add(newBlock);
}
else {
LOGGER.debug("No higher weight chain found in peers");
}
} catch (DataException e) {
LOGGER.debug("Unable to check for a higher weight chain. Proceeding anyway...");
}
// Discard any uncommitted changes as a result of the higher weight chain detection
repository.discardChanges();
// No potential block candidates?
if (newBlocks.isEmpty())
continue;
// Clear variables that track low weight blocks
parentSignatureForLastLowWeightBlock = null;
timeOfLastLowWeightBlock = null;
// Add unconfirmed transactions
addUnconfirmedTransactions(repository, newBlock);
// Sign to create block's signature
newBlock.sign();
// Is newBlock still valid?
ValidationResult validationResult = newBlock.isValid();
if (validationResult != ValidationResult.OK) {
// No longer valid? Report and discard
LOGGER.error(String.format("To-be-minted block now invalid '%s' after adding unconfirmed transactions?", validationResult.name()));
// Rebuild block candidates, just to be sure
newBlocks.clear();
// Make sure we're the only thread modifying the blockchain
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
if (!blockchainLock.tryLock(30, TimeUnit.SECONDS)) {
LOGGER.debug("Couldn't acquire blockchain lock even after waiting 30 seconds");
continue;
}
// Add to blockchain - something else will notice and broadcast new block to network
boolean newBlockMinted = false;
Block newBlock = null;
try {
newBlock.process();
// Clear repository session state so we have latest view of data
repository.discardChanges();
repository.saveChanges();
// Now that we have blockchain lock, do final check that chain hasn't changed
BlockData latestBlockData = blockRepository.getLastBlock();
if (!Arrays.equals(lastBlockData.getSignature(), latestBlockData.getSignature()))
continue;
LOGGER.info(String.format("Minted new block: %d", newBlock.getBlockData().getHeight()));
List<Block> goodBlocks = new ArrayList<>();
boolean wasInvalidBlockDiscarded = false;
Iterator<Block> newBlocksIterator = newBlocks.iterator();
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(newBlock.getBlockData().getMinterPublicKey());
while (newBlocksIterator.hasNext()) {
Block testBlock = newBlocksIterator.next();
if (rewardShareData != null) {
LOGGER.info(String.format("Minted block %d, sig %.8s, parent sig: %.8s by %s on behalf of %s",
newBlock.getBlockData().getHeight(),
Base58.encode(newBlock.getBlockData().getSignature()),
Base58.encode(newBlock.getParent().getSignature()),
rewardShareData.getMinter(),
rewardShareData.getRecipient()));
} else {
LOGGER.info(String.format("Minted block %d, sig %.8s, parent sig: %.8s by %s",
newBlock.getBlockData().getHeight(),
Base58.encode(newBlock.getBlockData().getSignature()),
Base58.encode(newBlock.getParent().getSignature()),
newBlock.getMinter().getAddress()));
// Is new block's timestamp valid yet?
// We do a separate check as some timestamp checks are skipped for testchains
if (testBlock.isTimestampValid() != ValidationResult.OK)
continue;
testBlock.preProcess();
// Is new block valid yet? (Before adding unconfirmed transactions)
ValidationResult result = testBlock.isValid();
if (result != ValidationResult.OK) {
moderatedLog(() -> LOGGER.error(String.format("To-be-minted block invalid '%s' before adding transactions?", result.name())));
newBlocksIterator.remove();
wasInvalidBlockDiscarded = true;
/*
* Bail out fast so that we loop around from the top again.
* This gives BlockMinter the possibility to remint this candidate block using another block from newBlocks,
* via the Block.remint() method, which avoids having to re-process Block ATs all over again.
* Particularly useful if some aspect of Blocks changes due to a timestamp-based feature-trigger (see BlockChain class).
*/
break;
}
goodBlocks.add(testBlock);
}
// Notify network after we've released blockchain lock
newBlockMinted = true;
if (wasInvalidBlockDiscarded || goodBlocks.isEmpty())
continue;
// Notify Controller
repository.discardChanges(); // clear transaction status to prevent deadlocks
Controller.getInstance().onNewBlock(newBlock.getBlockData());
} catch (DataException e) {
// Unable to process block - report and discard
LOGGER.error("Unable to process newly minted block?", e);
newBlocks.clear();
// Pick best block
final int parentHeight = previousBlockData.getHeight();
final byte[] parentBlockSignature = previousBlockData.getSignature();
BigInteger bestWeight = null;
for (int bi = 0; bi < goodBlocks.size(); ++bi) {
BlockData blockData = goodBlocks.get(bi).getBlockData();
BlockSummaryData blockSummaryData = new BlockSummaryData(blockData);
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, blockData.getMinterPublicKey());
blockSummaryData.setMinterLevel(minterLevel);
BigInteger blockWeight = Block.calcBlockWeight(parentHeight, parentBlockSignature, blockSummaryData);
if (bestWeight == null || blockWeight.compareTo(bestWeight) < 0) {
newBlock = goodBlocks.get(bi);
bestWeight = blockWeight;
}
}
try {
if (this.higherWeightChainExists(repository, bestWeight)) {
// Check if the base block has updated since the last time we were here
if (parentSignatureForLastLowWeightBlock == null || timeOfLastLowWeightBlock == null ||
!Arrays.equals(parentSignatureForLastLowWeightBlock, previousBlockData.getSignature())) {
// We've switched to a different chain, so reset the timer
timeOfLastLowWeightBlock = NTP.getTime();
}
parentSignatureForLastLowWeightBlock = previousBlockData.getSignature();
// If less than 30 seconds has passed since first detecting the higher weight chain,
// we should skip our block submission to give us the opportunity to sync to the better chain
if (NTP.getTime() - timeOfLastLowWeightBlock < 30 * 1000L) {
LOGGER.info("Higher weight chain found in peers, so not signing a block this round");
LOGGER.info("Time since detected: {}", NTP.getTime() - timeOfLastLowWeightBlock);
continue;
} else {
// More than 30 seconds have passed, so we should submit our block candidate anyway.
LOGGER.info("More than 30 seconds passed, so proceeding to submit block candidate...");
}
} else {
LOGGER.debug("No higher weight chain found in peers");
}
} catch (DataException e) {
LOGGER.debug("Unable to check for a higher weight chain. Proceeding anyway...");
}
// Discard any uncommitted changes as a result of the higher weight chain detection
repository.discardChanges();
// Clear variables that track low weight blocks
parentSignatureForLastLowWeightBlock = null;
timeOfLastLowWeightBlock = null;
// Add unconfirmed transactions
addUnconfirmedTransactions(repository, newBlock);
// Sign to create block's signature
newBlock.sign();
// Is newBlock still valid?
ValidationResult validationResult = newBlock.isValid();
if (validationResult != ValidationResult.OK) {
// No longer valid? Report and discard
LOGGER.error(String.format("To-be-minted block now invalid '%s' after adding unconfirmed transactions?", validationResult.name()));
// Rebuild block candidates, just to be sure
newBlocks.clear();
continue;
}
// Add to blockchain - something else will notice and broadcast new block to network
try {
newBlock.process();
repository.saveChanges();
LOGGER.info(String.format("Minted new block: %d", newBlock.getBlockData().getHeight()));
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(newBlock.getBlockData().getMinterPublicKey());
if (rewardShareData != null) {
LOGGER.info(String.format("Minted block %d, sig %.8s, parent sig: %.8s by %s on behalf of %s",
newBlock.getBlockData().getHeight(),
Base58.encode(newBlock.getBlockData().getSignature()),
Base58.encode(newBlock.getParent().getSignature()),
rewardShareData.getMinter(),
rewardShareData.getRecipient()));
} else {
LOGGER.info(String.format("Minted block %d, sig %.8s, parent sig: %.8s by %s",
newBlock.getBlockData().getHeight(),
Base58.encode(newBlock.getBlockData().getSignature()),
Base58.encode(newBlock.getParent().getSignature()),
newBlock.getMinter().getAddress()));
}
// Notify network after we've released blockchain lock
newBlockMinted = true;
// Notify Controller
repository.discardChanges(); // clear transaction status to prevent deadlocks
Controller.getInstance().onNewBlock(newBlock.getBlockData());
} catch (DataException e) {
// Unable to process block - report and discard
LOGGER.error("Unable to process newly minted block?", e);
newBlocks.clear();
}
} finally {
blockchainLock.unlock();
}
} finally {
blockchainLock.unlock();
}
if (newBlockMinted) {
// Broadcast our new chain to network
BlockData newBlockData = newBlock.getBlockData();
if (newBlockMinted) {
// Broadcast our new chain to network
BlockData newBlockData = newBlock.getBlockData();
Network network = Network.getInstance();
network.broadcast(broadcastPeer -> network.buildHeightMessage(broadcastPeer, newBlockData));
Network network = Network.getInstance();
network.broadcast(broadcastPeer -> network.buildHeightMessage(broadcastPeer, newBlockData));
}
} catch (InterruptedException e) {
// We've been interrupted - time to exit
return;
}
}
} catch (DataException e) {
LOGGER.warn("Repository issue while running block minter", e);
} catch (InterruptedException e) {
// We've been interrupted - time to exit
return;
LOGGER.warn("Repository issue while running block minter - NO LONGER MINTING", e);
}
}
@@ -557,18 +575,23 @@ public class BlockMinter extends Thread {
// This peer has common block data
CommonBlockData commonBlockData = peer.getCommonBlockData();
BlockSummaryData commonBlockSummaryData = commonBlockData.getCommonBlockSummary();
if (commonBlockData.getChainWeight() != null) {
if (commonBlockData.getChainWeight() != null && peer.getCommonBlockData().getBlockSummariesAfterCommonBlock() != null) {
// The synchronizer has calculated this peer's chain weight
BigInteger ourChainWeightSinceCommonBlock = this.getOurChainWeightSinceBlock(repository, commonBlockSummaryData, commonBlockData.getBlockSummariesAfterCommonBlock());
BigInteger ourChainWeight = ourChainWeightSinceCommonBlock.add(blockCandidateWeight);
BigInteger peerChainWeight = commonBlockData.getChainWeight();
if (peerChainWeight.compareTo(ourChainWeight) >= 0) {
// This peer has a higher weight chain than ours
LOGGER.debug("Peer {} is on a higher weight chain ({}) than ours ({})", peer, formatter.format(peerChainWeight), formatter.format(ourChainWeight));
return true;
if (!Synchronizer.getInstance().containsInvalidBlockSummary(peer.getCommonBlockData().getBlockSummariesAfterCommonBlock())) {
// .. and it doesn't hold any invalid blocks
BigInteger ourChainWeightSinceCommonBlock = this.getOurChainWeightSinceBlock(repository, commonBlockSummaryData, commonBlockData.getBlockSummariesAfterCommonBlock());
BigInteger ourChainWeight = ourChainWeightSinceCommonBlock.add(blockCandidateWeight);
BigInteger peerChainWeight = commonBlockData.getChainWeight();
if (peerChainWeight.compareTo(ourChainWeight) >= 0) {
// This peer has a higher weight chain than ours
LOGGER.info("Peer {} is on a higher weight chain ({}) than ours ({})", peer, formatter.format(peerChainWeight), formatter.format(ourChainWeight));
return true;
} else {
LOGGER.debug("Peer {} is on a lower weight chain ({}) than ours ({})", peer, formatter.format(peerChainWeight), formatter.format(ourChainWeight));
}
} else {
LOGGER.debug("Peer {} is on a lower weight chain ({}) than ours ({})", peer, formatter.format(peerChainWeight), formatter.format(ourChainWeight));
LOGGER.debug("Peer {} has an invalid block", peer);
}
} else {
LOGGER.debug("Peer {} has no chain weight", peer);

View File

@@ -113,6 +113,7 @@ public class Controller extends Thread {
private long repositoryBackupTimestamp = startTime; // ms
private long repositoryMaintenanceTimestamp = startTime; // ms
private long repositoryCheckpointTimestamp = startTime; // ms
private long prunePeersTimestamp = startTime; // ms
private long ntpCheckTimestamp = startTime; // ms
private long deleteExpiredTimestamp = startTime + DELETE_EXPIRED_INTERVAL; // ms
@@ -552,6 +553,7 @@ public class Controller extends Thread {
final long repositoryBackupInterval = Settings.getInstance().getRepositoryBackupInterval();
final long repositoryCheckpointInterval = Settings.getInstance().getRepositoryCheckpointInterval();
long repositoryMaintenanceInterval = getRandomRepositoryMaintenanceInterval();
final long prunePeersInterval = 5 * 60 * 1000L; // Every 5 minutes
// Start executor service for trimming or pruning
PruneManager.getInstance().start();
@@ -649,10 +651,15 @@ public class Controller extends Thread {
}
// Prune stuck/slow/old peers
try {
Network.getInstance().prunePeers();
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue when trying to prune peers: %s", e.getMessage()));
if (now >= prunePeersTimestamp + prunePeersInterval) {
prunePeersTimestamp = now + prunePeersInterval;
try {
LOGGER.debug("Pruning peers...");
Network.getInstance().prunePeers();
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue when trying to prune peers: %s", e.getMessage()));
}
}
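The change above moves prunePeers() behind a simple timestamp gate so it runs at most once per interval rather than on every pass of the Controller loop. A minimal sketch of that gating pattern, with the 5-minute interval copied from the diff:

public class IntervalGateSketch {
    private static long prunePeersTimestamp = System.currentTimeMillis();
    private static final long PRUNE_PEERS_INTERVAL = 5 * 60 * 1000L; // every 5 minutes

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        if (now >= prunePeersTimestamp + PRUNE_PEERS_INTERVAL) {
            prunePeersTimestamp = now + PRUNE_PEERS_INTERVAL; // schedule the next run
            System.out.println("Pruning peers...");
        } else {
            System.out.println("Not due yet");
        }
    }
}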
// Delete expired transactions
@@ -787,23 +794,24 @@ public class Controller extends Thread {
String actionText;
// Use a more tolerant latest block timestamp in the isUpToDate() calls below to reduce misleading statuses.
// Any block in the last 30 minutes is considered "up to date" for the purposes of displaying statuses.
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
// Any block in the last 2 hours is considered "up to date" for the purposes of displaying statuses.
// This also aligns with the time interval required for continued online account submission.
final Long minLatestBlockTimestamp = NTP.getTime() - (2 * 60 * 60 * 1000L);
// Only show sync percent if it's less than 100, to avoid confusion
final Integer syncPercent = Synchronizer.getInstance().getSyncPercent();
final boolean isSyncing = (syncPercent != null && syncPercent < 100);
synchronized (Synchronizer.getInstance().syncLock) {
if (Settings.getInstance().isLite()) {
actionText = Translator.INSTANCE.translate("SysTray", "LITE_NODE");
SysTray.getInstance().setTrayIcon(4);
}
else if (this.isMintingPossible) {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_ENABLED");
SysTray.getInstance().setTrayIcon(2);
}
else if (numberOfPeers < Settings.getInstance().getMinBlockchainPeers()) {
actionText = Translator.INSTANCE.translate("SysTray", "CONNECTING");
SysTray.getInstance().setTrayIcon(3);
}
else if (!this.isUpToDate(minLatestBlockTimestamp) && Synchronizer.getInstance().isSynchronizing()) {
else if (!this.isUpToDate(minLatestBlockTimestamp) && isSyncing) {
actionText = String.format("%s - %d%%", Translator.INSTANCE.translate("SysTray", "SYNCHRONIZING_BLOCKCHAIN"), Synchronizer.getInstance().getSyncPercent());
SysTray.getInstance().setTrayIcon(3);
}
@@ -811,6 +819,10 @@ public class Controller extends Thread {
actionText = String.format("%s", Translator.INSTANCE.translate("SysTray", "SYNCHRONIZING_BLOCKCHAIN"));
SysTray.getInstance().setTrayIcon(3);
}
else if (OnlineAccountsManager.getInstance().hasActiveOnlineAccountSignatures()) {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_ENABLED");
SysTray.getInstance().setTrayIcon(2);
}
else {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_DISABLED");
SysTray.getInstance().setTrayIcon(4);
@@ -1229,6 +1241,10 @@ public class Controller extends Thread {
OnlineAccountsManager.getInstance().onNetworkOnlineAccountsV2Message(peer, message);
break;
case GET_ONLINE_ACCOUNTS_V3:
OnlineAccountsManager.getInstance().onNetworkGetOnlineAccountsV3Message(peer, message);
break;
case GET_ARBITRARY_DATA:
// Not currently supported
break;
@@ -1362,6 +1378,18 @@ public class Controller extends Thread {
Block block = new Block(repository, blockData);
// V2 support
if (peer.getPeersVersion() >= BlockV2Message.MIN_PEER_VERSION) {
Message blockMessage = new BlockV2Message(block);
blockMessage.setId(message.getId());
if (!peer.sendMessage(blockMessage)) {
peer.disconnect("failed to send block");
// Don't fall-through to caching because failure to send might be from failure to build message
return;
}
return;
}
CachedBlockMessage blockMessage = new CachedBlockMessage(block);
blockMessage.setId(message.getId());

View File

@@ -173,7 +173,7 @@ public class LiteNode {
}
if (responseMessage == null) {
LOGGER.info("Peer didn't respond to {} message", peer, message.getType());
LOGGER.info("Peer {} didn't respond to {} message", peer, message.getType());
return null;
}
else if (responseMessage.getType() != expectedResponseMessageType) {

View File

@@ -26,14 +26,7 @@ import org.qortal.event.Event;
import org.qortal.event.EventBus;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.network.message.BlockMessage;
import org.qortal.network.message.BlockSummariesMessage;
import org.qortal.network.message.GetBlockMessage;
import org.qortal.network.message.GetBlockSummariesMessage;
import org.qortal.network.message.GetSignaturesV2Message;
import org.qortal.network.message.Message;
import org.qortal.network.message.SignaturesMessage;
import org.qortal.network.message.MessageType;
import org.qortal.network.message.*;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
@@ -88,7 +81,7 @@ public class Synchronizer extends Thread {
private boolean syncRequestPending = false;
// Keep track of invalid blocks so that we don't keep trying to sync them
private Map<String, Long> invalidBlockSignatures = Collections.synchronizedMap(new HashMap<>());
private Map<ByteArray, Long> invalidBlockSignatures = Collections.synchronizedMap(new HashMap<>());
public Long timeValidBlockLastReceived = null;
public Long timeInvalidBlockLastReceived = null;
@@ -178,8 +171,8 @@ public class Synchronizer extends Thread {
public Integer getSyncPercent() {
synchronized (this.syncLock) {
// Report as 100% synced if the latest block is within the last 30 mins
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
// Report as 100% synced if the latest block is within the last 60 mins
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (Controller.getInstance().isUpToDate(minLatestBlockTimestamp)) {
return 100;
}
@@ -624,7 +617,7 @@ public class Synchronizer extends Thread {
// We have already determined that the correct chain diverged from a lower height. We are safe to skip these peers.
for (Peer peer : peersSharingCommonBlock) {
LOGGER.debug(String.format("Peer %s has common block at height %d but the superior chain is at height %d. Removing it from this round.", peer, commonBlockSummary.getHeight(), dropPeersAfterCommonBlockHeight));
this.addInferiorChainSignature(peer.getChainTipData().getLastBlockSignature());
//this.addInferiorChainSignature(peer.getChainTipData().getLastBlockSignature());
}
continue;
}
@@ -635,7 +628,9 @@ public class Synchronizer extends Thread {
int minChainLength = this.calculateMinChainLengthOfPeers(peersSharingCommonBlock, commonBlockSummary);
// Fetch block summaries from each peer
for (Peer peer : peersSharingCommonBlock) {
Iterator peersSharingCommonBlockIterator = peersSharingCommonBlock.iterator();
while (peersSharingCommonBlockIterator.hasNext()) {
Peer peer = (Peer) peersSharingCommonBlockIterator.next();
// If we're shutting down, just return the latest peer list
if (Controller.isStopping())
@@ -692,6 +687,8 @@ public class Synchronizer extends Thread {
if (this.containsInvalidBlockSummary(peer.getCommonBlockData().getBlockSummariesAfterCommonBlock())) {
LOGGER.debug("Ignoring peer %s because it holds an invalid block", peer);
peers.remove(peer);
peersSharingCommonBlockIterator.remove();
continue;
}
// Reduce minChainLength if needed. If we don't have any blocks, this peer will be excluded from chain weight comparisons later in the process, so we shouldn't update minChainLength
@@ -847,6 +844,10 @@ public class Synchronizer extends Thread {
/* Invalid block signature tracking */
public Map<ByteArray, Long> getInvalidBlockSignatures() {
return this.invalidBlockSignatures;
}
private void addInvalidBlockSignature(byte[] signature) {
Long now = NTP.getTime();
if (now == null) {
@@ -854,8 +855,7 @@ public class Synchronizer extends Thread {
}
// Add or update existing entry
String sig58 = Base58.encode(signature);
invalidBlockSignatures.put(sig58, now);
invalidBlockSignatures.put(ByteArray.wrap(signature), now);
}
private void deleteOlderInvalidSignatures(Long now) {
if (now == null) {
@@ -874,17 +874,16 @@ public class Synchronizer extends Thread {
}
}
}
private boolean containsInvalidBlockSummary(List<BlockSummaryData> blockSummaries) {
public boolean containsInvalidBlockSummary(List<BlockSummaryData> blockSummaries) {
if (blockSummaries == null || invalidBlockSignatures == null) {
return false;
}
// Loop through our known invalid blocks and check each one against supplied block summaries
for (String invalidSignature58 : invalidBlockSignatures.keySet()) {
byte[] invalidSignature = Base58.decode(invalidSignature58);
for (ByteArray invalidSignature : invalidBlockSignatures.keySet()) {
for (BlockSummaryData blockSummary : blockSummaries) {
byte[] signature = blockSummary.getSignature();
if (Arrays.equals(signature, invalidSignature)) {
if (Arrays.equals(signature, invalidSignature.value)) {
return true;
}
}
@@ -897,10 +896,9 @@ public class Synchronizer extends Thread {
}
// Loop through our known invalid blocks and check each one against supplied block signatures
for (String invalidSignature58 : invalidBlockSignatures.keySet()) {
byte[] invalidSignature = Base58.decode(invalidSignature58);
for (ByteArray invalidSignature : invalidBlockSignatures.keySet()) {
for (byte[] signature : blockSignatures) {
if (Arrays.equals(signature, invalidSignature)) {
if (Arrays.equals(signature, invalidSignature.value)) {
return true;
}
}
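The key type change above (Base58 String to ByteArray) works because a raw byte[] uses identity equals/hashCode and so can't be used as a HashMap key directly, while a wrapper with value semantics can, and it avoids repeated Base58 encode/decode round-trips. A standalone sketch with a simplified wrapper modelled on, but not identical to, the project's ByteArray class:

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class ByteArrayKeySketch {
    static final class ByteArray {
        final byte[] value;
        private ByteArray(byte[] value) { this.value = value; }
        static ByteArray wrap(byte[] value) { return new ByteArray(value); }
        @Override public boolean equals(Object o) {
            return o instanceof ByteArray && Arrays.equals(value, ((ByteArray) o).value);
        }
        @Override public int hashCode() { return Arrays.hashCode(value); }
    }

    public static void main(String[] args) {
        Map<ByteArray, Long> invalidSignatures = new HashMap<>();
        byte[] sig = {1, 2, 3};
        invalidSignatures.put(ByteArray.wrap(sig), 1000L);

        // A fresh array with the same contents still finds the entry,
        // which would not be true with byte[] keys
        System.out.println(invalidSignatures.containsKey(ByteArray.wrap(new byte[] {1, 2, 3}))); // true
    }
}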
@@ -1579,12 +1577,23 @@ public class Synchronizer extends Thread {
Message getBlockMessage = new GetBlockMessage(signature);
Message message = peer.getResponse(getBlockMessage);
if (message == null || message.getType() != MessageType.BLOCK)
if (message == null)
return null;
BlockMessage blockMessage = (BlockMessage) message;
switch (message.getType()) {
case BLOCK: {
BlockMessage blockMessage = (BlockMessage) message;
return new Block(repository, blockMessage.getBlockData(), blockMessage.getTransactions(), blockMessage.getAtStates());
}
return new Block(repository, blockMessage.getBlockData(), blockMessage.getTransactions(), blockMessage.getAtStates());
case BLOCK_V2: {
BlockV2Message blockMessage = (BlockV2Message) message;
return new Block(repository, blockMessage.getBlockData(), blockMessage.getTransactions(), blockMessage.getAtStatesHash());
}
default:
return null;
}
}
public void populateBlockSummariesMinterLevels(Repository repository, List<BlockSummaryData> blockSummaries) throws DataException {

View File

@@ -67,6 +67,9 @@ public class ArbitraryDataFileListManager {
/** Maximum number of hops that a file list relay request is allowed to make */
public static int RELAY_REQUEST_MAX_HOPS = 4;
/** Minimum peer version to use relay */
public static String RELAY_MIN_PEER_VERSION = "3.4.0";
private ArbitraryDataFileListManager() {
}
@@ -524,6 +527,7 @@ public class ArbitraryDataFileListManager {
forwardArbitraryDataFileListMessage = new ArbitraryDataFileListMessage(signature, hashes, requestTime, requestHops,
arbitraryDataFileListMessage.getPeerAddress(), arbitraryDataFileListMessage.isRelayPossible());
}
forwardArbitraryDataFileListMessage.setId(message.getId());
// Forward to requesting peer
LOGGER.debug("Forwarding file list with {} hashes to requesting peer: {}", hashes.size(), requestingPeer);
@@ -690,12 +694,14 @@ public class ArbitraryDataFileListManager {
// Relay request hasn't reached the maximum number of hops yet, so can be rebroadcast
Message relayGetArbitraryDataFileListMessage = new GetArbitraryDataFileListMessage(signature, hashes, requestTime, requestHops, requestingPeer);
relayGetArbitraryDataFileListMessage.setId(message.getId());
LOGGER.debug("Rebroadcasting hash list request from peer {} for signature {} to our other peers... totalRequestTime: {}, requestHops: {}", peer, Base58.encode(signature), totalRequestTime, requestHops);
Network.getInstance().broadcast(
broadcastPeer -> broadcastPeer == peer ||
Objects.equals(broadcastPeer.getPeerData().getAddress().getHost(), peer.getPeerData().getAddress().getHost())
? null : relayGetArbitraryDataFileListMessage);
broadcastPeer ->
!broadcastPeer.isAtLeastVersion(RELAY_MIN_PEER_VERSION) ? null :
broadcastPeer == peer || Objects.equals(broadcastPeer.getPeerData().getAddress().getHost(), peer.getPeerData().getAddress().getHost()) ? null : relayGetArbitraryDataFileListMessage
);
}
else {

View File

@@ -22,8 +22,7 @@ import org.qortal.utils.Triple;
import java.io.IOException;
import java.util.*;
import static org.qortal.controller.arbitrary.ArbitraryDataFileListManager.RELAY_REQUEST_MAX_DURATION;
import static org.qortal.controller.arbitrary.ArbitraryDataFileListManager.RELAY_REQUEST_MAX_HOPS;
import static org.qortal.controller.arbitrary.ArbitraryDataFileListManager.*;
public class ArbitraryMetadataManager {
@@ -339,6 +338,7 @@ public class ArbitraryMetadataManager {
if (requestingPeer != null) {
ArbitraryMetadataMessage forwardArbitraryMetadataMessage = new ArbitraryMetadataMessage(signature, arbitraryMetadataMessage.getArbitraryMetadataFile());
forwardArbitraryMetadataMessage.setId(arbitraryMetadataMessage.getId());
// Forward to requesting peer
LOGGER.debug("Forwarding metadata to requesting peer: {}", requestingPeer);
@@ -434,12 +434,13 @@ public class ArbitraryMetadataManager {
// Relay request hasn't reached the maximum number of hops yet, so can be rebroadcast
Message relayGetArbitraryMetadataMessage = new GetArbitraryMetadataMessage(signature, requestTime, requestHops);
relayGetArbitraryMetadataMessage.setId(message.getId());
LOGGER.debug("Rebroadcasting metadata request from peer {} for signature {} to our other peers... totalRequestTime: {}, requestHops: {}", peer, Base58.encode(signature), totalRequestTime, requestHops);
Network.getInstance().broadcast(
broadcastPeer -> broadcastPeer == peer ||
Objects.equals(broadcastPeer.getPeerData().getAddress().getHost(), peer.getPeerData().getAddress().getHost())
? null : relayGetArbitraryMetadataMessage);
broadcastPeer ->
!broadcastPeer.isAtLeastVersion(RELAY_MIN_PEER_VERSION) ? null :
broadcastPeer == peer || Objects.equals(broadcastPeer.getPeerData().getAddress().getHost(), peer.getPeerData().getAddress().getHost()) ? null : relayGetArbitraryMetadataMessage);
}
else {

View File

@@ -242,8 +242,8 @@ public class TradeBot implements Listener {
if (!(event instanceof Synchronizer.NewChainTipEvent))
return;
// Don't process trade bots or broadcast presence timestamps if our chain is more than 30 minutes old
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
// Don't process trade bots or broadcast presence timestamps if our chain is more than 60 minutes old
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (!Controller.getInstance().isUpToDate(minLatestBlockTimestamp))
return;
@@ -292,7 +292,7 @@ public class TradeBot implements Listener {
}
public static byte[] deriveTradeNativePublicKey(byte[] privateKey) {
return PrivateKeyAccount.toPublicKey(privateKey);
return Crypto.toPublicKey(privateKey);
}
public static byte[] deriveTradeForeignPublicKey(byte[] privateKey) {

View File

@@ -29,6 +29,7 @@ import org.bitcoinj.wallet.SendRequest;
import org.bitcoinj.wallet.Wallet;
import org.qortal.api.model.SimpleForeignTransaction;
import org.qortal.crypto.Crypto;
import org.qortal.settings.Settings;
import org.qortal.utils.Amounts;
import org.qortal.utils.BitTwiddling;
@@ -61,11 +62,6 @@ public abstract class Bitcoiny implements ForeignBlockchain {
/** How many wallet keys to generate in each batch. */
private static final int WALLET_KEY_LOOKAHEAD_INCREMENT = 3;
/** How many wallet keys to generate when using bitcoinj as the data provider.
* We must use a higher value here since we are unable to request multiple batches of keys.
* Without this, the bitcoinj state can be missing transactions, causing errors such as "insufficient balance". */
private static final int WALLET_KEY_LOOKAHEAD_INCREMENT_BITCOINJ = 50;
/** Byte offset into raw block headers to block timestamp. */
private static final int TIMESTAMP_OFFSET = 4 + 32 + 32;
@@ -375,7 +371,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
public Long getWalletBalanceFromTransactions(String key58) throws ForeignBlockchainException {
long balance = 0;
Comparator<SimpleTransaction> oldestTimestampFirstComparator = Comparator.comparingInt(SimpleTransaction::getTimestamp);
Comparator<SimpleTransaction> oldestTimestampFirstComparator = Comparator.comparingLong(SimpleTransaction::getTimestamp);
List<SimpleTransaction> transactions = getWalletTransactions(key58).stream().sorted(oldestTimestampFirstComparator).collect(Collectors.toList());
for (SimpleTransaction transaction : transactions) {
balance += transaction.getTotalAmount();
@@ -409,9 +405,6 @@ public abstract class Bitcoiny implements ForeignBlockchain {
Set<BitcoinyTransaction> walletTransactions = new HashSet<>();
Set<String> keySet = new HashSet<>();
// Set the number of consecutive empty batches required before giving up
final int numberOfAdditionalBatchesToSearch = 7;
int unusedCounter = 0;
int ki = 0;
do {
@@ -438,12 +431,12 @@ public abstract class Bitcoiny implements ForeignBlockchain {
if (areAllKeysUnused) {
// No transactions
if (unusedCounter >= numberOfAdditionalBatchesToSearch) {
if (unusedCounter >= Settings.getInstance().getGapLimit()) {
// ... and we've hit our search limit
break;
}
// We haven't hit our search limit yet so increment the counter and keep looking
unusedCounter++;
unusedCounter += WALLET_KEY_LOOKAHEAD_INCREMENT;
} else {
// Some keys in this batch were used, so reset the counter
unusedCounter = 0;
@@ -455,7 +448,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
// Process new keys
} while (true);
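The gap-limit scan above generates keys in batches and stops once the count of consecutive unused keys reaches the configured limit; note the counter now advances by the batch size rather than by one per batch. A standalone sketch with assumed batch size and gap limit values:

public class GapLimitScanSketch {
    public static void main(String[] args) {
        final int lookaheadIncrement = 3;   // keys generated per batch (assumed)
        final int gapLimit = 24;            // consecutive unused keys before giving up (assumed)

        int unusedCounter = 0;
        int batch = 0;
        while (true) {
            batch++;
            boolean allKeysInBatchUnused = batch > 4; // pretend the first 4 batches showed activity
            if (allKeysInBatchUnused) {
                if (unusedCounter >= gapLimit)
                    break;                               // hit the gap limit - wallet scan is complete
                unusedCounter += lookaheadIncrement;     // count every unused key, not every batch
            } else {
                unusedCounter = 0;                       // activity seen, reset the gap counter
            }
        }
        System.out.println("Stopped after batch " + batch + ", unusedCounter=" + unusedCounter);
    }
}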
Comparator<SimpleTransaction> newestTimestampFirstComparator = Comparator.comparingInt(SimpleTransaction::getTimestamp).reversed();
Comparator<SimpleTransaction> newestTimestampFirstComparator = Comparator.comparingLong(SimpleTransaction::getTimestamp).reversed();
// Update cache and return
transactionsCacheTimestamp = NTP.getTime();
@@ -537,7 +530,8 @@ public abstract class Bitcoiny implements ForeignBlockchain {
// All inputs and outputs relate to this wallet, so the balance should be unaffected
amount = 0;
}
return new SimpleTransaction(t.txHash, t.timestamp, amount, fee, inputs, outputs);
long timestampMillis = t.timestamp * 1000L;
return new SimpleTransaction(t.txHash, timestampMillis, amount, fee, inputs, outputs);
}
/**
@@ -629,7 +623,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
this.keyChain = this.wallet.getActiveKeyChain();
// Set up wallet's key chain
this.keyChain.setLookaheadSize(Bitcoiny.WALLET_KEY_LOOKAHEAD_INCREMENT_BITCOINJ);
this.keyChain.setLookaheadSize(Settings.getInstance().getBitcoinjLookaheadSize());
this.keyChain.maybeLookAhead();
}
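Note: the scanning loop above now stops once the configured gap limit is reached instead of a hardcoded batch count. A minimal sketch of that stopping rule follows, assuming a hypothetical isKeyUsed lookup in place of the real batch derivation and ElectrumX queries; it only illustrates how the counter and the gapLimit setting interact.

import java.util.function.IntPredicate;

public class GapLimitSketch {
    /** Illustrative only: returns how many keys were scanned before `gapLimit`
     *  consecutive unused keys were seen. `isKeyUsed` stands in for a foreign-chain
     *  lookup and is an assumption, not the real Bitcoiny API. */
    public static int scan(IntPredicate isKeyUsed, int batchSize, int gapLimit) {
        int unusedCounter = 0;
        int keyIndex = 0;
        while (true) {
            boolean anyUsed = false;
            for (int i = 0; i < batchSize; i++)
                if (isKeyUsed.test(keyIndex + i))
                    anyUsed = true;

            if (anyUsed) {
                unusedCounter = 0;                    // activity seen - reset the gap counter
            } else {
                unusedCounter += batchSize;           // whole batch empty - widen the gap
                if (unusedCounter >= gapLimit)
                    return keyIndex + batchSize;      // gap limit reached - wallet considered complete
            }
            keyIndex += batchSize;
        }
    }
}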

View File

@@ -7,7 +7,7 @@ import java.util.List;
@XmlAccessorType(XmlAccessType.FIELD)
public class SimpleTransaction {
private String txHash;
private Integer timestamp;
private Long timestamp;
private long totalAmount;
private long feeAmount;
private List<Input> inputs;
@@ -74,7 +74,7 @@ public class SimpleTransaction {
public SimpleTransaction() {
}
public SimpleTransaction(String txHash, Integer timestamp, long totalAmount, long feeAmount, List<Input> inputs, List<Output> outputs) {
public SimpleTransaction(String txHash, Long timestamp, long totalAmount, long feeAmount, List<Input> inputs, List<Output> outputs) {
this.txHash = txHash;
this.timestamp = timestamp;
this.totalAmount = totalAmount;
@@ -87,7 +87,7 @@ public class SimpleTransaction {
return txHash;
}
public Integer getTimestamp() {
public Long getTimestamp() {
return timestamp;
}
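Note: this widening from Integer to Long pairs with the Bitcoiny change earlier that now stores the timestamp in milliseconds: epoch milliseconds (around 1.6 x 10^12 in 2022) overflow a 32-bit int, whereas epoch seconds did not. A minimal conversion sketch, assuming the upstream value is supplied in seconds as before:

public class TimestampSketch {
    /** Illustrative only: convert an epoch-seconds value (as supplied by the
     *  foreign-chain transaction data) to epoch milliseconds. The result exceeds
     *  Integer.MAX_VALUE, hence the long return type. */
    public static long toMillis(int epochSeconds) {
        return epochSeconds * 1000L;   // multiply as long to avoid int overflow
    }
}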

View File

@@ -1,99 +0,0 @@
package org.qortal.crypto;
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.util.Arrays;
import org.bouncycastle.crypto.Digest;
import org.bouncycastle.math.ec.rfc7748.X25519;
import org.bouncycastle.math.ec.rfc7748.X25519Field;
import org.bouncycastle.math.ec.rfc8032.Ed25519;
/** Additions to BouncyCastle providing Ed25519 to X25519 key conversion. */
public class BouncyCastle25519 {
private static final Class<?> pointAffineClass;
private static final Constructor<?> pointAffineCtor;
private static final Method decodePointVarMethod;
private static final Field yField;
static {
try {
Class<?> ed25519Class = Ed25519.class;
pointAffineClass = Arrays.stream(ed25519Class.getDeclaredClasses()).filter(clazz -> clazz.getSimpleName().equals("PointAffine")).findFirst().get();
if (pointAffineClass == null)
throw new ClassNotFoundException("Can't locate PointExt inner class inside Ed25519");
decodePointVarMethod = ed25519Class.getDeclaredMethod("decodePointVar", byte[].class, int.class, boolean.class, pointAffineClass);
decodePointVarMethod.setAccessible(true);
pointAffineCtor = pointAffineClass.getDeclaredConstructors()[0];
pointAffineCtor.setAccessible(true);
yField = pointAffineClass.getDeclaredField("y");
yField.setAccessible(true);
} catch (NoSuchMethodException | SecurityException | IllegalArgumentException | NoSuchFieldException | ClassNotFoundException e) {
throw new RuntimeException("Can't initialize BouncyCastle25519 shim", e);
}
}
private static int[] obtainYFromPublicKey(byte[] ed25519PublicKey) {
try {
Object pA = pointAffineCtor.newInstance();
Boolean result = (Boolean) decodePointVarMethod.invoke(null, ed25519PublicKey, 0, true, pA);
if (result == null || !result)
return null;
return (int[]) yField.get(pA);
} catch (SecurityException | InstantiationException | IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
throw new RuntimeException("Can't reflect into BouncyCastle", e);
}
}
public static byte[] toX25519PublicKey(byte[] ed25519PublicKey) {
int[] one = new int[X25519Field.SIZE];
X25519Field.one(one);
int[] y = obtainYFromPublicKey(ed25519PublicKey);
int[] oneMinusY = new int[X25519Field.SIZE];
X25519Field.sub(one, y, oneMinusY);
int[] onePlusY = new int[X25519Field.SIZE];
X25519Field.add(one, y, onePlusY);
int[] oneMinusYInverted = new int[X25519Field.SIZE];
X25519Field.inv(oneMinusY, oneMinusYInverted);
int[] u = new int[X25519Field.SIZE];
X25519Field.mul(onePlusY, oneMinusYInverted, u);
X25519Field.normalize(u);
byte[] x25519PublicKey = new byte[X25519.SCALAR_SIZE];
X25519Field.encode(u, x25519PublicKey, 0);
return x25519PublicKey;
}
public static byte[] toX25519PrivateKey(byte[] ed25519PrivateKey) {
Digest d = Ed25519.createPrehash();
byte[] h = new byte[d.getDigestSize()];
d.update(ed25519PrivateKey, 0, ed25519PrivateKey.length);
d.doFinal(h, 0);
byte[] s = new byte[X25519.SCALAR_SIZE];
System.arraycopy(h, 0, s, 0, X25519.SCALAR_SIZE);
s[0] &= 0xF8;
s[X25519.SCALAR_SIZE - 1] &= 0x7F;
s[X25519.SCALAR_SIZE - 1] |= 0x40;
return s;
}
}
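Note: both the deleted helper above and its replacement in Qortal25519Extras below derive the X25519 public key from the Ed25519 one via the standard birational map between the twisted Edwards and Montgomery forms of Curve25519, using only the Edwards y-coordinate:

u \equiv \frac{1 + y}{1 - y} \pmod{p}, \qquad p = 2^{255} - 19

The private-key path hashes the Ed25519 seed with SHA-512 and clamps the low 32 bytes, which is exactly what the bit operations s[0] &= 0xF8, s[31] &= 0x7F, s[31] |= 0x40 implement.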

File diff suppressed because it is too large

View File

@@ -253,6 +253,10 @@ public abstract class Crypto {
return false;
}
public static byte[] toPublicKey(byte[] privateKey) {
return new Ed25519PrivateKeyParameters(privateKey, 0).generatePublicKey().getEncoded();
}
public static boolean verify(byte[] publicKey, byte[] signature, byte[] message) {
try {
return Ed25519.verify(signature, 0, publicKey, 0, message, 0, message.length);
@@ -264,16 +268,24 @@ public abstract class Crypto {
public static byte[] sign(Ed25519PrivateKeyParameters edPrivateKeyParams, byte[] message) {
byte[] signature = new byte[SIGNATURE_LENGTH];
edPrivateKeyParams.sign(Ed25519.Algorithm.Ed25519, edPrivateKeyParams.generatePublicKey(), null, message, 0, message.length, signature, 0);
edPrivateKeyParams.sign(Ed25519.Algorithm.Ed25519,null, message, 0, message.length, signature, 0);
return signature;
}
public static byte[] sign(byte[] privateKey, byte[] message) {
byte[] signature = new byte[SIGNATURE_LENGTH];
new Ed25519PrivateKeyParameters(privateKey, 0).sign(Ed25519.Algorithm.Ed25519,null, message, 0, message.length, signature, 0);
return signature;
}
public static byte[] getSharedSecret(byte[] privateKey, byte[] publicKey) {
byte[] x25519PrivateKey = BouncyCastle25519.toX25519PrivateKey(privateKey);
byte[] x25519PrivateKey = Qortal25519Extras.toX25519PrivateKey(privateKey);
X25519PrivateKeyParameters xPrivateKeyParams = new X25519PrivateKeyParameters(x25519PrivateKey, 0);
byte[] x25519PublicKey = BouncyCastle25519.toX25519PublicKey(publicKey);
byte[] x25519PublicKey = Qortal25519Extras.toX25519PublicKey(publicKey);
X25519PublicKeyParameters xPublicKeyParams = new X25519PublicKeyParameters(x25519PublicKey, 0);
byte[] sharedSecret = new byte[SHARED_SECRET_LENGTH];
@@ -281,5 +293,4 @@ public abstract class Crypto {
return sharedSecret;
}
}

View File

@@ -0,0 +1,234 @@
package org.qortal.crypto;
import org.bouncycastle.crypto.Digest;
import org.bouncycastle.crypto.digests.SHA512Digest;
import org.bouncycastle.math.ec.rfc7748.X25519;
import org.bouncycastle.math.ec.rfc7748.X25519Field;
import org.bouncycastle.math.ec.rfc8032.Ed25519;
import org.bouncycastle.math.raw.Nat256;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Collection;
/**
* Additions to BouncyCastle providing:
* <p></p>
* <ul>
* <li>Ed25519 to X25519 key conversion</li>
* <li>Aggregate public keys</li>
* <li>Aggregate signatures</li>
* </ul>
*/
public abstract class Qortal25519Extras extends BouncyCastleEd25519 {
private static final SecureRandom SECURE_RANDOM = new SecureRandom();
public static byte[] toX25519PublicKey(byte[] ed25519PublicKey) {
int[] one = new int[X25519Field.SIZE];
X25519Field.one(one);
PointAffine pA = new PointAffine();
if (!decodePointVar(ed25519PublicKey, 0, true, pA))
return null;
int[] y = pA.y;
int[] oneMinusY = new int[X25519Field.SIZE];
X25519Field.sub(one, y, oneMinusY);
int[] onePlusY = new int[X25519Field.SIZE];
X25519Field.add(one, y, onePlusY);
int[] oneMinusYInverted = new int[X25519Field.SIZE];
X25519Field.inv(oneMinusY, oneMinusYInverted);
int[] u = new int[X25519Field.SIZE];
X25519Field.mul(onePlusY, oneMinusYInverted, u);
X25519Field.normalize(u);
byte[] x25519PublicKey = new byte[X25519.SCALAR_SIZE];
X25519Field.encode(u, x25519PublicKey, 0);
return x25519PublicKey;
}
public static byte[] toX25519PrivateKey(byte[] ed25519PrivateKey) {
Digest d = Ed25519.createPrehash();
byte[] h = new byte[d.getDigestSize()];
d.update(ed25519PrivateKey, 0, ed25519PrivateKey.length);
d.doFinal(h, 0);
byte[] s = new byte[X25519.SCALAR_SIZE];
System.arraycopy(h, 0, s, 0, X25519.SCALAR_SIZE);
s[0] &= 0xF8;
s[X25519.SCALAR_SIZE - 1] &= 0x7F;
s[X25519.SCALAR_SIZE - 1] |= 0x40;
return s;
}
// Mostly for test support
public static PointAccum newPointAccum() {
return new PointAccum();
}
public static byte[] aggregatePublicKeys(Collection<byte[]> publicKeys) {
PointAccum rAccum = null;
for (byte[] publicKey : publicKeys) {
PointAffine pA = new PointAffine();
if (!decodePointVar(publicKey, 0, false, pA))
// Failed to decode
return null;
if (rAccum == null) {
rAccum = new PointAccum();
pointCopy(pA, rAccum);
} else {
pointAdd(pointCopy(pA), rAccum);
}
}
byte[] publicKey = new byte[SCALAR_BYTES];
if (0 == encodePoint(rAccum, publicKey, 0))
// Failed to encode
return null;
return publicKey;
}
public static byte[] aggregateSignatures(Collection<byte[]> signatures) {
// Signatures are (R, s)
// R is a point
// s is a scalar
PointAccum rAccum = null;
int[] sAccum = new int[SCALAR_INTS];
byte[] rEncoded = new byte[POINT_BYTES];
int[] sPart = new int[SCALAR_INTS];
for (byte[] signature : signatures) {
System.arraycopy(signature,0, rEncoded, 0, rEncoded.length);
PointAffine pA = new PointAffine();
if (!decodePointVar(rEncoded, 0, false, pA))
// Failed to decode
return null;
if (rAccum == null) {
rAccum = new PointAccum();
pointCopy(pA, rAccum);
decode32(signature, rEncoded.length, sAccum, 0, SCALAR_INTS);
} else {
pointAdd(pointCopy(pA), rAccum);
decode32(signature, rEncoded.length, sPart, 0, SCALAR_INTS);
Nat256.addTo(sPart, sAccum);
// "mod L" on sAccum
if (Nat256.gte(sAccum, L))
Nat256.subFrom(L, sAccum);
}
}
byte[] signature = new byte[SIGNATURE_SIZE];
if (0 == encodePoint(rAccum, signature, 0))
// Failed to encode
return null;
for (int i = 0; i < sAccum.length; ++i) {
encode32(sAccum[i], signature, POINT_BYTES + i * 4);
}
return signature;
}
public static byte[] signForAggregation(byte[] privateKey, byte[] message) {
// Very similar to BouncyCastle's implementation except we use secure random nonce and different hash
Digest d = new SHA512Digest();
byte[] h = new byte[d.getDigestSize()];
d.reset();
d.update(privateKey, 0, privateKey.length);
d.doFinal(h, 0);
byte[] sH = new byte[SCALAR_BYTES];
pruneScalar(h, 0, sH);
byte[] publicKey = new byte[SCALAR_BYTES];
scalarMultBaseEncoded(sH, publicKey, 0);
byte[] rSeed = new byte[d.getDigestSize()];
SECURE_RANDOM.nextBytes(rSeed);
byte[] r = new byte[SCALAR_BYTES];
pruneScalar(rSeed, 0, r);
byte[] R = new byte[POINT_BYTES];
scalarMultBaseEncoded(r, R, 0);
d.reset();
d.update(message, 0, message.length);
d.doFinal(h, 0);
byte[] k = reduceScalar(h);
byte[] s = calculateS(r, k, sH);
byte[] signature = new byte[SIGNATURE_SIZE];
System.arraycopy(R, 0, signature, 0, POINT_BYTES);
System.arraycopy(s, 0, signature, POINT_BYTES, SCALAR_BYTES);
return signature;
}
public static boolean verifyAggregated(byte[] publicKey, byte[] signature, byte[] message) {
byte[] R = Arrays.copyOfRange(signature, 0, POINT_BYTES);
byte[] s = Arrays.copyOfRange(signature, POINT_BYTES, POINT_BYTES + SCALAR_BYTES);
if (!checkPointVar(R))
// R out of bounds
return false;
if (!checkScalarVar(s))
// s out of bounds
return false;
byte[] S = new byte[POINT_BYTES];
scalarMultBaseEncoded(s, S, 0);
PointAffine pA = new PointAffine();
if (!decodePointVar(publicKey, 0, true, pA))
// Failed to decode
return false;
Digest d = new SHA512Digest();
byte[] h = new byte[d.getDigestSize()];
d.update(message, 0, message.length);
d.doFinal(h, 0);
byte[] k = reduceScalar(h);
int[] nS = new int[SCALAR_INTS];
decodeScalar(s, 0, nS);
int[] nA = new int[SCALAR_INTS];
decodeScalar(k, 0, nA);
/*PointAccum*/
PointAccum pR = new PointAccum();
scalarMultStrausVar(nS, nA, pA, pR);
byte[] check = new byte[POINT_BYTES];
if (0 == encodePoint(pR, check, 0))
// Failed to encode
return false;
return Arrays.equals(check, R);
}
}
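Note: a minimal usage sketch of the aggregation helpers above, showing only the intended call sequence (per-participant signForAggregation(), then aggregateSignatures()/aggregatePublicKeys(), then a single verifyAggregated() check). The class and method names are taken from the code above; everything else is illustrative.

import java.util.ArrayList;
import java.util.List;

import org.qortal.crypto.Qortal25519Extras;

public class AggregateSketch {
    /** Illustrative only: each participant signs the same message, then the
     *  signatures and public keys are aggregated and verified in one step. */
    public static boolean signAndVerify(List<byte[]> privateKeys, List<byte[]> publicKeys, byte[] message) {
        List<byte[]> signatures = new ArrayList<>();
        for (byte[] privateKey : privateKeys)
            signatures.add(Qortal25519Extras.signForAggregation(privateKey, message));

        byte[] aggregateSignature = Qortal25519Extras.aggregateSignatures(signatures);
        byte[] aggregatePublicKey = Qortal25519Extras.aggregatePublicKeys(publicKeys);

        return Qortal25519Extras.verifyAggregated(aggregatePublicKey, aggregateSignature, message);
    }
}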

View File

@@ -5,6 +5,7 @@ import java.util.Arrays;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlTransient;
import org.qortal.account.PublicKeyAccount;
@@ -16,6 +17,9 @@ public class OnlineAccountData {
protected byte[] signature;
protected byte[] publicKey;
@XmlTransient
private int hash;
// Constructors
// necessary for JAXB serialization
@@ -62,20 +66,23 @@ public class OnlineAccountData {
if (otherOnlineAccountData.timestamp != this.timestamp)
return false;
// Signature more likely to be unique than public key
if (!Arrays.equals(otherOnlineAccountData.signature, this.signature))
return false;
if (!Arrays.equals(otherOnlineAccountData.publicKey, this.publicKey))
return false;
// We don't compare signature because it's not our remit to verify and newer aggregate signatures use random nonces
return true;
}
@Override
public int hashCode() {
// Pretty lazy implementation
return (int) this.timestamp;
int h = this.hash;
if (h == 0) {
this.hash = h = Long.hashCode(this.timestamp)
^ Arrays.hashCode(this.publicKey);
// We don't use signature because newer aggregate signatures use random nonces
}
return h;
}
}
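Note: the reworked equals()/hashCode() make timestamp plus public key the identity of an OnlineAccountData entry, so duplicates that differ only in their (now nonce-randomised) signatures collapse in hash-based collections. A trivial, type-agnostic sketch of that effect:

import java.util.HashSet;
import java.util.Set;

public class DedupSketch {
    /** Illustrative only: with identity based on timestamp + public key, two entries
     *  that differ only in signature collapse to one element in a HashSet. */
    public static <T> int distinctCount(T first, T second) {
        Set<T> entries = new HashSet<>();
        entries.add(first);
        entries.add(second);
        return entries.size();   // 1 when equals()/hashCode() treat them as the same entry
    }
}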

View File

@@ -469,6 +469,8 @@ public class Network {
class NetworkProcessor extends ExecuteProduceConsume {
private final Logger LOGGER = LogManager.getLogger(NetworkProcessor.class);
private final AtomicLong nextConnectTaskTimestamp = new AtomicLong(0L); // ms - try first connect once NTP syncs
private final AtomicLong nextBroadcastTimestamp = new AtomicLong(0L); // ms - try first broadcast once NTP syncs
@@ -1373,17 +1375,26 @@ public class Network {
// We attempted to connect within the last day
// but we last managed to connect over a week ago.
Predicate<PeerData> isNotOldPeer = peerData -> {
if (peerData.getLastAttempted() == null
|| peerData.getLastAttempted() < now - OLD_PEER_ATTEMPTED_PERIOD) {
// First check if there was a connection attempt within the last day
if (peerData.getLastAttempted() != null
&& peerData.getLastAttempted() > now - OLD_PEER_ATTEMPTED_PERIOD) {
// There was, so now check if we had a successful connection in the last 7 days
if (peerData.getLastConnected() != null
&& peerData.getLastConnected() > now - OLD_PEER_CONNECTION_PERIOD) {
// We did, so this is NOT an 'old' peer
return true;
}
// Last successful connection was more than 1 week ago - this is an 'old' peer
return false;
}
else {
// Best to wait until we have a connection attempt - assume not an 'old' peer until then
return true;
}
if (peerData.getLastConnected() == null
|| peerData.getLastConnected() > now - OLD_PEER_CONNECTION_PERIOD) {
return true;
}
return false;
};
// Disregard peers that are NOT 'old'
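Note: restated as a plain helper with the periods passed in, the rewritten predicate above classifies a peer as 'old' only when it was attempted within the last day yet has had no successful connection within the last week. This sketch mirrors that logic:

public class OldPeerSketch {
    /** Illustrative only: returns true when the peer should NOT be treated as 'old'. */
    public static boolean isNotOldPeer(Long lastAttempted, Long lastConnected, long now,
                                       long oldPeerAttemptedPeriod, long oldPeerConnectionPeriod) {
        if (lastAttempted == null || lastAttempted <= now - oldPeerAttemptedPeriod)
            return true;    // no recent connection attempt - don't classify as old yet

        if (lastConnected != null && lastConnected > now - oldPeerConnectionPeriod)
            return true;    // recent successful connection - definitely not old

        return false;       // attempted recently but last success was over a week ago - 'old'
    }
}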

View File

@@ -12,6 +12,7 @@ import org.qortal.data.network.PeerData;
import org.qortal.network.message.ChallengeMessage;
import org.qortal.network.message.Message;
import org.qortal.network.message.MessageException;
import org.qortal.network.message.MessageType;
import org.qortal.network.task.MessageTask;
import org.qortal.network.task.PingTask;
import org.qortal.settings.Settings;
@@ -546,6 +547,10 @@ public class Peer {
// adjusting position accordingly, reset limit to capacity
this.byteBuffer.compact();
// Unsupported message type? Discard with no further processing
if (message.getType() == MessageType.UNSUPPORTED)
continue;
BlockingQueue<Message> queue = this.replyQueues.get(message.getId());
if (queue != null) {
// Adding message to queue will unblock thread waiting for response

View File

@@ -9,6 +9,7 @@ import org.qortal.data.at.ATStateData;
import org.qortal.data.block.BlockData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformation;
import org.qortal.transform.block.BlockTransformer;
import org.qortal.utils.Triple;
@@ -46,12 +47,12 @@ public class BlockMessage extends Message {
try {
int height = byteBuffer.getInt();
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = BlockTransformer.fromByteBuffer(byteBuffer);
BlockTransformation blockTransformation = BlockTransformer.fromByteBuffer(byteBuffer);
BlockData blockData = blockInfo.getA();
BlockData blockData = blockTransformation.getBlockData();
blockData.setHeight(height);
return new BlockMessage(id, blockData, blockInfo.getB(), blockInfo.getC());
return new BlockMessage(id, blockData, blockTransformation.getTransactions(), blockTransformation.getAtStates());
} catch (TransformationException e) {
LOGGER.info(String.format("Received garbled BLOCK message: %s", e.getMessage()));
throw new MessageException(e.getMessage(), e);

View File

@@ -0,0 +1,87 @@
package org.qortal.network.message;
import com.google.common.primitives.Ints;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.block.Block;
import org.qortal.data.block.BlockData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformation;
import org.qortal.transform.block.BlockTransformer;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.List;
public class BlockV2Message extends Message {
private static final Logger LOGGER = LogManager.getLogger(BlockV2Message.class);
public static final long MIN_PEER_VERSION = 0x300030003L; // 3.3.3
private BlockData blockData;
private List<TransactionData> transactions;
private byte[] atStatesHash;
public BlockV2Message(Block block) throws TransformationException {
super(MessageType.BLOCK_V2);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
try {
bytes.write(Ints.toByteArray(block.getBlockData().getHeight()));
bytes.write(BlockTransformer.toBytesV2(block));
} catch (IOException e) {
throw new AssertionError("IOException shouldn't occur with ByteArrayOutputStream");
}
this.dataBytes = bytes.toByteArray();
this.checksumBytes = Message.generateChecksum(this.dataBytes);
}
public BlockV2Message(byte[] cachedBytes) {
super(MessageType.BLOCK_V2);
this.dataBytes = cachedBytes;
this.checksumBytes = Message.generateChecksum(this.dataBytes);
}
private BlockV2Message(int id, BlockData blockData, List<TransactionData> transactions, byte[] atStatesHash) {
super(id, MessageType.BLOCK_V2);
this.blockData = blockData;
this.transactions = transactions;
this.atStatesHash = atStatesHash;
}
public BlockData getBlockData() {
return this.blockData;
}
public List<TransactionData> getTransactions() {
return this.transactions;
}
public byte[] getAtStatesHash() {
return this.atStatesHash;
}
public static Message fromByteBuffer(int id, ByteBuffer byteBuffer) throws MessageException {
try {
int height = byteBuffer.getInt();
BlockTransformation blockTransformation = BlockTransformer.fromByteBufferV2(byteBuffer);
BlockData blockData = blockTransformation.getBlockData();
blockData.setHeight(height);
return new BlockV2Message(id, blockData, blockTransformation.getTransactions(), blockTransformation.getAtStatesHash());
} catch (TransformationException e) {
LOGGER.info(String.format("Received garbled BLOCK_V2 message: %s", e.getMessage()));
throw new MessageException(e.getMessage(), e);
}
}
}

View File

@@ -0,0 +1,112 @@
package org.qortal.network.message;
import com.google.common.primitives.Longs;
import org.qortal.transform.Transformer;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.*;
/**
* For requesting online accounts info from remote peer, given our list of online accounts.
* <p></p>
* Different format to V1 and V2:<br>
* <ul>
* <li>V1 is: number of entries, then timestamp + pubkey for each entry</li>
* <li>V2 is: groups of: number of entries, timestamp, then pubkey for each entry</li>
* <li>V3 is: groups of: timestamp, number of entries (one per leading byte), then hash(pubkeys) for each entry</li>
* </ul>
* <p></p>
* End
*/
public class GetOnlineAccountsV3Message extends Message {
private static final Map<Long, Map<Byte, byte[]>> EMPTY_ONLINE_ACCOUNTS = Collections.emptyMap();
private Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByte;
public GetOnlineAccountsV3Message(Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByte) {
super(MessageType.GET_ONLINE_ACCOUNTS_V3);
// If we don't have ANY online accounts then it's an easier construction...
if (hashesByTimestampThenByte.isEmpty()) {
this.dataBytes = EMPTY_DATA_BYTES;
return;
}
// We should know exactly how many bytes to allocate now
int byteSize = hashesByTimestampThenByte.size() * (Transformer.TIMESTAMP_LENGTH + Transformer.BYTE_LENGTH);
byteSize += hashesByTimestampThenByte.values()
.stream()
.mapToInt(map -> map.size() * Transformer.PUBLIC_KEY_LENGTH)
.sum();
ByteArrayOutputStream bytes = new ByteArrayOutputStream(byteSize);
// Warning: no double-checking/fetching! We must be ConcurrentMap compatible.
// So no contains() then get() or multiple get()s on the same key/map.
try {
for (var outerMapEntry : hashesByTimestampThenByte.entrySet()) {
bytes.write(Longs.toByteArray(outerMapEntry.getKey()));
var innerMap = outerMapEntry.getValue();
// Number of entries: 1 - 256, where 256 is represented by 0
bytes.write(innerMap.size() & 0xFF);
for (byte[] hashBytes : innerMap.values()) {
bytes.write(hashBytes);
}
}
} catch (IOException e) {
throw new AssertionError("IOException shouldn't occur with ByteArrayOutputStream");
}
this.dataBytes = bytes.toByteArray();
this.checksumBytes = Message.generateChecksum(this.dataBytes);
}
private GetOnlineAccountsV3Message(int id, Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByte) {
super(id, MessageType.GET_ONLINE_ACCOUNTS_V3);
this.hashesByTimestampThenByte = hashesByTimestampThenByte;
}
public Map<Long, Map<Byte, byte[]>> getHashesByTimestampThenByte() {
return this.hashesByTimestampThenByte;
}
public static Message fromByteBuffer(int id, ByteBuffer bytes) {
// 'empty' case
if (!bytes.hasRemaining()) {
return new GetOnlineAccountsV3Message(id, EMPTY_ONLINE_ACCOUNTS);
}
Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByte = new HashMap<>();
while (bytes.hasRemaining()) {
long timestamp = bytes.getLong();
int hashCount = bytes.get();
if (hashCount <= 0)
// 256 is represented by 0.
// Also converts negative signed value (e.g. -1) to proper positive unsigned value (255)
hashCount += 256;
Map<Byte, byte[]> hashesByByte = new HashMap<>();
for (int i = 0; i < hashCount; ++i) {
byte[] publicKeyHash = new byte[Transformer.PUBLIC_KEY_LENGTH];
bytes.get(publicKeyHash);
hashesByByte.put(publicKeyHash[0], publicKeyHash);
}
hashesByTimestampThenByte.put(timestamp, hashesByByte);
}
return new GetOnlineAccountsV3Message(id, hashesByTimestampThenByte);
}
}
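Note: the single count byte written above covers 1-256 entries (256 wraps to 0), and because Java reads bytes as signed values the decoder adds 256 whenever it sees a value <= 0. A round-trip sketch of just that encoding:

public class CountByteSketch {
    /** Illustrative only: encode an entry count in the range 1-256 as one byte. */
    public static byte encodeCount(int count) {
        return (byte) (count & 0xFF);            // 256 becomes 0
    }

    /** Decode the count byte back to 1-256. */
    public static int decodeCount(byte countByte) {
        int count = countByte;                   // signed widening: 0x80-0xFF become negative
        if (count <= 0)
            count += 256;                        // 0 -> 256, -1 -> 255, ..., -128 -> 128
        return count;
    }
}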

View File

@@ -46,6 +46,7 @@ public abstract class Message {
private static final int MAX_DATA_SIZE = 10 * 1024 * 1024; // 10MB
protected static final byte[] EMPTY_DATA_BYTES = new byte[0];
private static final ByteBuffer EMPTY_READ_ONLY_BYTE_BUFFER = ByteBuffer.wrap(EMPTY_DATA_BYTES).asReadOnlyBuffer();
protected int id;
protected final MessageType type;
@@ -103,8 +104,7 @@ public abstract class Message {
int typeValue = readOnlyBuffer.getInt();
MessageType messageType = MessageType.valueOf(typeValue);
if (messageType == null)
// Unrecognised message type
throw new MessageException(String.format("Received unknown message type [%d]", typeValue));
messageType = MessageType.UNSUPPORTED;
// Optional message ID
byte hasId = readOnlyBuffer.get();
@@ -127,7 +127,7 @@ public abstract class Message {
if (dataSize > 0 && dataSize + CHECKSUM_LENGTH > readOnlyBuffer.remaining())
return null;
ByteBuffer dataSlice = null;
ByteBuffer dataSlice = EMPTY_READ_ONLY_BYTE_BUFFER;
if (dataSize > 0) {
byte[] expectedChecksum = new byte[CHECKSUM_LENGTH];
readOnlyBuffer.get(expectedChecksum);

View File

@@ -8,6 +8,9 @@ import static java.util.Arrays.stream;
import static java.util.stream.Collectors.toMap;
public enum MessageType {
// Pseudo-message, not sent over the wire
UNSUPPORTED(-1, UnsupportedMessage::fromByteBuffer),
// Handshaking
HELLO(0, HelloMessage::fromByteBuffer),
GOODBYE(1, GoodbyeMessage::fromByteBuffer),
@@ -31,6 +34,7 @@ public enum MessageType {
BLOCK(50, BlockMessage::fromByteBuffer),
GET_BLOCK(51, GetBlockMessage::fromByteBuffer),
BLOCK_V2(52, BlockV2Message::fromByteBuffer),
SIGNATURES(60, SignaturesMessage::fromByteBuffer),
GET_SIGNATURES_V2(61, GetSignaturesV2Message::fromByteBuffer),
@@ -42,6 +46,8 @@ public enum MessageType {
GET_ONLINE_ACCOUNTS(81, GetOnlineAccountsMessage::fromByteBuffer),
ONLINE_ACCOUNTS_V2(82, OnlineAccountsV2Message::fromByteBuffer),
GET_ONLINE_ACCOUNTS_V2(83, GetOnlineAccountsV2Message::fromByteBuffer),
// ONLINE_ACCOUNTS_V3(84, OnlineAccountsV3Message::fromByteBuffer),
GET_ONLINE_ACCOUNTS_V3(85, GetOnlineAccountsV3Message::fromByteBuffer),
ARBITRARY_DATA(90, ArbitraryDataMessage::fromByteBuffer),
GET_ARBITRARY_DATA(91, GetArbitraryDataMessage::fromByteBuffer),

View File

@@ -0,0 +1,20 @@
package org.qortal.network.message;
import java.nio.ByteBuffer;
public class UnsupportedMessage extends Message {
public UnsupportedMessage() {
super(MessageType.UNSUPPORTED);
throw new UnsupportedOperationException("Unsupported message is unsupported!");
}
private UnsupportedMessage(int id) {
super(id, MessageType.UNSUPPORTED);
}
public static Message fromByteBuffer(int id, ByteBuffer byteBuffer) throws MessageException {
return new UnsupportedMessage(id);
}
}

View File

@@ -159,6 +159,9 @@ public interface AccountRepository {
/** Returns number of active reward-shares involving passed public key as the minting account only. */
public int countRewardShares(byte[] mintingAccountPublicKey) throws DataException;
/** Returns number of active self-shares involving passed public key as the minting account only. */
public int countSelfShares(byte[] mintingAccountPublicKey) throws DataException;
public List<RewardShareData> getRewardShares() throws DataException;
public List<RewardShareData> findRewardShares(List<String> mintingAccounts, List<String> recipientAccounts, List<String> involvedAddresses, Integer limit, Integer offset, Boolean reverse) throws DataException;

View File

@@ -9,6 +9,7 @@ import org.qortal.data.block.BlockData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.settings.Settings;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformation;
import org.qortal.transform.block.BlockTransformer;
import org.qortal.utils.Triple;
@@ -66,7 +67,7 @@ public class BlockArchiveReader {
this.fileListCache = Map.copyOf(map);
}
public Triple<BlockData, List<TransactionData>, List<ATStateData>> fetchBlockAtHeight(int height) {
public BlockTransformation fetchBlockAtHeight(int height) {
if (this.fileListCache == null) {
this.fetchFileList();
}
@@ -77,13 +78,13 @@ public class BlockArchiveReader {
}
ByteBuffer byteBuffer = ByteBuffer.wrap(serializedBytes);
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = null;
BlockTransformation blockInfo = null;
try {
blockInfo = BlockTransformer.fromByteBuffer(byteBuffer);
if (blockInfo != null && blockInfo.getA() != null) {
if (blockInfo != null && blockInfo.getBlockData() != null) {
// Block height is stored outside of the main serialized bytes, so it
// won't be set automatically.
blockInfo.getA().setHeight(height);
blockInfo.getBlockData().setHeight(height);
}
} catch (TransformationException e) {
return null;
@@ -91,8 +92,7 @@ public class BlockArchiveReader {
return blockInfo;
}
public Triple<BlockData, List<TransactionData>, List<ATStateData>> fetchBlockWithSignature(
byte[] signature, Repository repository) {
public BlockTransformation fetchBlockWithSignature(byte[] signature, Repository repository) {
if (this.fileListCache == null) {
this.fetchFileList();
@@ -105,13 +105,12 @@ public class BlockArchiveReader {
return null;
}
public List<Triple<BlockData, List<TransactionData>, List<ATStateData>>> fetchBlocksFromRange(
int startHeight, int endHeight) {
public List<BlockTransformation> fetchBlocksFromRange(int startHeight, int endHeight) {
List<Triple<BlockData, List<TransactionData>, List<ATStateData>>> blockInfoList = new ArrayList<>();
List<BlockTransformation> blockInfoList = new ArrayList<>();
for (int height = startHeight; height <= endHeight; height++) {
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = this.fetchBlockAtHeight(height);
BlockTransformation blockInfo = this.fetchBlockAtHeight(height);
if (blockInfo == null) {
return blockInfoList;
}

View File

@@ -688,6 +688,17 @@ public class HSQLDBAccountRepository implements AccountRepository {
}
}
@Override
public int countSelfShares(byte[] minterPublicKey) throws DataException {
String sql = "SELECT COUNT(*) FROM RewardShares WHERE minter_public_key = ? AND minter = recipient";
try (ResultSet resultSet = this.repository.checkedExecute(sql, minterPublicKey)) {
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Unable to count self-shares in repository", e);
}
}
@Override
public List<RewardShareData> getRewardShares() throws DataException {
String sql = "SELECT minter_public_key, minter, recipient, share_percent, reward_share_public_key FROM RewardShares";

View File

@@ -1,16 +1,13 @@
package org.qortal.repository.hsqldb;
import org.qortal.api.ApiError;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.model.BlockSignerSummary;
import org.qortal.block.Block;
import org.qortal.data.block.BlockArchiveData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.repository.BlockArchiveReader;
import org.qortal.repository.BlockArchiveRepository;
import org.qortal.repository.DataException;
import org.qortal.utils.Triple;
import org.qortal.transform.block.BlockTransformation;
import java.sql.ResultSet;
import java.sql.SQLException;
@@ -29,11 +26,11 @@ public class HSQLDBBlockArchiveRepository implements BlockArchiveRepository {
@Override
public BlockData fromSignature(byte[] signature) throws DataException {
Triple blockInfo = BlockArchiveReader.getInstance().fetchBlockWithSignature(signature, this.repository);
if (blockInfo != null) {
return (BlockData) blockInfo.getA();
}
return null;
BlockTransformation blockInfo = BlockArchiveReader.getInstance().fetchBlockWithSignature(signature, this.repository);
if (blockInfo == null)
return null;
return blockInfo.getBlockData();
}
@Override
@@ -47,11 +44,11 @@ public class HSQLDBBlockArchiveRepository implements BlockArchiveRepository {
@Override
public BlockData fromHeight(int height) throws DataException {
Triple blockInfo = BlockArchiveReader.getInstance().fetchBlockAtHeight(height);
if (blockInfo != null) {
return (BlockData) blockInfo.getA();
}
return null;
BlockTransformation blockInfo = BlockArchiveReader.getInstance().fetchBlockAtHeight(height);
if (blockInfo == null)
return null;
return blockInfo.getBlockData();
}
@Override
@@ -79,9 +76,9 @@ public class HSQLDBBlockArchiveRepository implements BlockArchiveRepository {
int height = referenceBlock.getHeight();
if (height > 0) {
// Request the block at height + 1
Triple blockInfo = BlockArchiveReader.getInstance().fetchBlockAtHeight(height + 1);
BlockTransformation blockInfo = BlockArchiveReader.getInstance().fetchBlockAtHeight(height + 1);
if (blockInfo != null) {
return (BlockData) blockInfo.getA();
return blockInfo.getBlockData();
}
}
}

View File

@@ -964,6 +964,11 @@ public class HSQLDBDatabaseUpdates {
stmt.execute("DROP TABLE ArbitraryPeers");
break;
case 42:
// We need more space for online accounts
stmt.execute("ALTER TABLE Blocks ALTER COLUMN online_accounts SET DATA TYPE VARBINARY(10240)");
break;
default:
// nothing to do
return false;

View File

@@ -23,7 +23,6 @@ import java.util.stream.Stream;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.crypto.Crypto;
import org.qortal.globalization.Translator;
import org.qortal.gui.SysTray;
@@ -1003,7 +1002,7 @@ public class HSQLDBRepository implements Repository {
if (privateKey == null)
return null;
return PrivateKeyAccount.toPublicKey(privateKey);
return Crypto.toPublicKey(privateKey);
}
public static String ed25519PublicKeyToAddress(byte[] publicKey) {

View File

@@ -203,7 +203,7 @@ public class Settings {
private int maxRetries = 2;
/** Minimum peer version number required in order to sync with them */
private String minPeerVersion = "3.1.0";
private String minPeerVersion = "3.3.7";
/** Whether to allow connections with peers below minPeerVersion
* If true, we won't sync with them but they can still sync with us, and will show in the peers list
* If false, sync will be blocked both ways, and they will not appear in the peers list */
@@ -284,6 +284,16 @@ public class Settings {
private Long testNtpOffset = null;
/* Foreign chains */
/** The number of consecutive empty addresses required before treating a wallet's transaction set as complete */
private int gapLimit = 24;
/** How many wallet keys to generate when using bitcoinj as the blockchain interface (e.g. when sending coins) */
private int bitcoinjLookaheadSize = 50;
// Data storage (QDN)
/** Data storage enabled/disabled*/
@@ -872,6 +882,15 @@ public class Settings {
}
public int getGapLimit() {
return this.gapLimit;
}
public int getBitcoinjLookaheadSize() {
return bitcoinjLookaheadSize;
}
public boolean isQdnEnabled() {
return this.qdnEnabled;
}

View File

@@ -6,6 +6,7 @@ import java.util.Objects;
import java.util.stream.Collectors;
import org.qortal.account.Account;
import org.qortal.block.BlockChain;
import org.qortal.controller.arbitrary.ArbitraryDataManager;
import org.qortal.controller.arbitrary.ArbitraryDataStorageManager;
import org.qortal.crypto.Crypto;
@@ -19,6 +20,7 @@ import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.arbitrary.ArbitraryDataFile;
import org.qortal.transform.TransformationException;
import org.qortal.transform.Transformer;
import org.qortal.transform.transaction.ArbitraryTransactionTransformer;
import org.qortal.transform.transaction.TransactionTransformer;
import org.qortal.utils.ArbitraryTransactionUtils;
@@ -86,6 +88,14 @@ public class ArbitraryTransaction extends Transaction {
@Override
public boolean hasValidReference() throws DataException {
// We shouldn't really get this far, but just in case:
// Disable reference checking after feature trigger timestamp
if (this.arbitraryTransactionData.getTimestamp() >= BlockChain.getInstance().getDisableReferenceTimestamp()) {
// Allow any value as long as it is the correct length
return this.arbitraryTransactionData.getReference() != null &&
this.arbitraryTransactionData.getReference().length == Transformer.SIGNATURE_LENGTH;
}
if (this.arbitraryTransactionData.getReference() == null) {
return false;
}

View File

@@ -5,6 +5,7 @@ import java.util.List;
import org.qortal.account.Account;
import org.qortal.asset.Asset;
import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.asset.AssetData;
import org.qortal.data.transaction.ATTransactionData;
@@ -12,6 +13,7 @@ import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.transform.TransformationException;
import org.qortal.transform.Transformer;
import org.qortal.transform.transaction.AtTransactionTransformer;
import org.qortal.utils.Amounts;
@@ -75,6 +77,13 @@ public class AtTransaction extends Transaction {
@Override
public boolean hasValidReference() throws DataException {
// Disable reference checking after feature trigger timestamp
if (this.atTransactionData.getTimestamp() >= BlockChain.getInstance().getDisableReferenceTimestamp()) {
// Allow any value as long as it is the correct length
return this.atTransactionData.getReference() != null &&
this.atTransactionData.getReference().length == Transformer.SIGNATURE_LENGTH;
}
// Check reference is correct, using AT account, not transaction creator which is null account
Account atAccount = getATAccount();
return Arrays.equals(atAccount.getLastReference(), atTransactionData.getReference());

View File

@@ -8,6 +8,7 @@ import org.qortal.account.Account;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.account.PublicKeyAccount;
import org.qortal.asset.Asset;
import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.crypto.MemoryPoW;
import org.qortal.data.PaymentData;
@@ -20,6 +21,7 @@ import org.qortal.repository.DataException;
import org.qortal.repository.GroupRepository;
import org.qortal.repository.Repository;
import org.qortal.transform.TransformationException;
import org.qortal.transform.Transformer;
import org.qortal.transform.transaction.ChatTransactionTransformer;
import org.qortal.transform.transaction.MessageTransactionTransformer;
import org.qortal.transform.transaction.TransactionTransformer;
@@ -163,6 +165,14 @@ public class MessageTransaction extends Transaction {
@Override
public boolean hasValidReference() throws DataException {
// We shouldn't really get this far, but just in case:
// Disable reference checking after feature trigger timestamp
if (this.messageTransactionData.getTimestamp() >= BlockChain.getInstance().getDisableReferenceTimestamp()) {
// Allow any value as long as it is the correct length
return this.messageTransactionData.getReference() != null &&
this.messageTransactionData.getReference().length == Transformer.SIGNATURE_LENGTH;
}
if (this.messageTransactionData.getReference() == null)
return false;

View File

@@ -12,7 +12,6 @@ import java.util.function.Supplier;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.account.Account;
import org.qortal.controller.Controller;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.controller.tradebot.TradeBot;
import org.qortal.crosschain.ACCT;
@@ -49,7 +48,7 @@ public class PresenceTransaction extends Transaction {
REWARD_SHARE(0) {
@Override
public long getLifetime() {
return OnlineAccountsManager.ONLINE_TIMESTAMP_MODULUS;
return OnlineAccountsManager.getOnlineTimestampModulus();
}
},
TRADE_BOT(1) {

View File

@@ -140,8 +140,21 @@ public class RewardShareTransaction extends Transaction {
// Check the minting account hasn't reach maximum number of reward-shares
int rewardShareCount = this.repository.getAccountRepository().countRewardShares(creator.getPublicKey());
if (rewardShareCount >= BlockChain.getInstance().getMaxRewardSharesPerMintingAccount())
int selfShareCount = this.repository.getAccountRepository().countSelfShares(creator.getPublicKey());
int maxRewardShares = BlockChain.getInstance().getMaxRewardSharesAtTimestamp(this.rewardShareTransactionData.getTimestamp());
if (creator.isFounder())
// Founders have a different limit
maxRewardShares = BlockChain.getInstance().getMaxRewardSharesPerFounderMintingAccount();
if (rewardShareCount >= maxRewardShares)
return ValidationResult.MAXIMUM_REWARD_SHARES;
// When filling all reward share slots, one must be a self share (after feature trigger timestamp)
if (this.rewardShareTransactionData.getTimestamp() >= BlockChain.getInstance().getRewardShareLimitTimestamp())
if (!isRecipientAlsoMinter && rewardShareCount == maxRewardShares-1 && selfShareCount == 0)
return ValidationResult.MAXIMUM_REWARD_SHARES;
} else {
// This transaction intends to modify/terminate an existing reward-share
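Note: a worked illustration of the new rule, assuming a non-founder whose applicable limit (via getMaxRewardSharesAtTimestamp()) is 3: with 2 existing reward-shares and no self-share, a third share to another recipient is rejected, because the last free slot is reserved for a self-share once rewardShareLimitTimestamp is active. The check reduces to:

public class RewardShareLimitSketch {
    /** Illustrative only: mirrors the post-feature-trigger validation above.
     *  Returns true when the new reward-share must be rejected with MAXIMUM_REWARD_SHARES. */
    public static boolean exceedsLimit(int rewardShareCount, int selfShareCount,
                                       int maxRewardShares, boolean isRecipientAlsoMinter) {
        if (rewardShareCount >= maxRewardShares)
            return true;                          // all share slots already taken
        // The last free slot is kept for a self-share if none exists yet
        return !isRecipientAlsoMinter
                && rewardShareCount == maxRewardShares - 1
                && selfShareCount == 0;
    }
}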

View File

@@ -31,6 +31,7 @@ import org.qortal.repository.GroupRepository;
import org.qortal.repository.Repository;
import org.qortal.settings.Settings;
import org.qortal.transform.TransformationException;
import org.qortal.transform.Transformer;
import org.qortal.transform.transaction.TransactionTransformer;
import org.qortal.utils.NTP;
@@ -905,6 +906,13 @@ public abstract class Transaction {
* @throws DataException
*/
public boolean hasValidReference() throws DataException {
// Disable reference checking after feature trigger timestamp
if (this.transactionData.getTimestamp() >= BlockChain.getInstance().getDisableReferenceTimestamp()) {
// Allow any value as long as it is the correct length
return this.transactionData.getReference() != null &&
this.transactionData.getReference().length == Transformer.SIGNATURE_LENGTH;
}
Account creator = getCreator();
return Arrays.equals(transactionData.getReference(), creator.getLastReference());

View File

@@ -0,0 +1,44 @@
package org.qortal.transform.block;
import org.qortal.data.at.ATStateData;
import org.qortal.data.block.BlockData;
import org.qortal.data.transaction.TransactionData;
import java.util.List;
public class BlockTransformation {
private final BlockData blockData;
private final List<TransactionData> transactions;
private final List<ATStateData> atStates;
private final byte[] atStatesHash;
/*package*/ BlockTransformation(BlockData blockData, List<TransactionData> transactions, List<ATStateData> atStates) {
this.blockData = blockData;
this.transactions = transactions;
this.atStates = atStates;
this.atStatesHash = null;
}
/*package*/ BlockTransformation(BlockData blockData, List<TransactionData> transactions, byte[] atStatesHash) {
this.blockData = blockData;
this.transactions = transactions;
this.atStates = null;
this.atStatesHash = atStatesHash;
}
public BlockData getBlockData() {
return blockData;
}
public List<TransactionData> getTransactions() {
return transactions;
}
public List<ATStateData> getAtStates() {
return atStates;
}
public byte[] getAtStatesHash() {
return atStatesHash;
}
}

View File

@@ -3,12 +3,14 @@ package org.qortal.transform.block;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.qortal.block.Block;
import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATStateData;
import org.qortal.data.block.BlockData;
import org.qortal.data.transaction.TransactionData;
@@ -20,7 +22,6 @@ import org.qortal.transform.Transformer;
import org.qortal.transform.transaction.TransactionTransformer;
import org.qortal.utils.Base58;
import org.qortal.utils.Serialization;
import org.qortal.utils.Triple;
import com.google.common.primitives.Ints;
import com.google.common.primitives.Longs;
@@ -45,14 +46,13 @@ public class BlockTransformer extends Transformer {
protected static final int AT_BYTES_LENGTH = INT_LENGTH;
protected static final int AT_FEES_LENGTH = AMOUNT_LENGTH;
protected static final int AT_LENGTH = AT_FEES_LENGTH + AT_BYTES_LENGTH;
protected static final int ONLINE_ACCOUNTS_COUNT_LENGTH = INT_LENGTH;
protected static final int ONLINE_ACCOUNTS_SIZE_LENGTH = INT_LENGTH;
protected static final int ONLINE_ACCOUNTS_TIMESTAMP_LENGTH = TIMESTAMP_LENGTH;
protected static final int ONLINE_ACCOUNTS_SIGNATURES_COUNT_LENGTH = INT_LENGTH;
protected static final int AT_ENTRY_LENGTH = ADDRESS_LENGTH + SHA256_LENGTH + AMOUNT_LENGTH;
public static final int AT_ENTRY_LENGTH = ADDRESS_LENGTH + SHA256_LENGTH + AMOUNT_LENGTH;
/**
* Extract block data and transaction data from serialized bytes.
@@ -61,7 +61,7 @@ public class BlockTransformer extends Transformer {
* @return BlockData and a List of transactions.
* @throws TransformationException
*/
public static Triple<BlockData, List<TransactionData>, List<ATStateData>> fromBytes(byte[] bytes) throws TransformationException {
public static BlockTransformation fromBytes(byte[] bytes) throws TransformationException {
if (bytes == null)
return null;
@@ -76,28 +76,40 @@ public class BlockTransformer extends Transformer {
/**
* Extract block data and transaction data from serialized bytes containing a single block.
*
* @param bytes
* @param byteBuffer source of serialized block bytes
* @return BlockData and a List of transactions.
* @throws TransformationException
*/
public static Triple<BlockData, List<TransactionData>, List<ATStateData>> fromByteBuffer(ByteBuffer byteBuffer) throws TransformationException {
public static BlockTransformation fromByteBuffer(ByteBuffer byteBuffer) throws TransformationException {
return BlockTransformer.fromByteBuffer(byteBuffer, false);
}
/**
* Extract block data and transaction data from serialized bytes containing a single block.
*
* @param byteBuffer source of serialized block bytes
* @return BlockData and a List of transactions.
* @throws TransformationException
*/
public static BlockTransformation fromByteBufferV2(ByteBuffer byteBuffer) throws TransformationException {
return BlockTransformer.fromByteBuffer(byteBuffer, true);
}
/**
* Extract block data and transaction data from serialized bytes containing one or more blocks.
*
* @param bytes
* Extract block data and transaction data from serialized bytes containing a single block, in one of two forms.
*
* @param byteBuffer source of serialized block bytes
* @param isV2 set to true if AT state info is represented by a single hash, false if serialized as per-AT address+state hash+fees
* @return the next block's BlockData and a List of transactions.
* @throws TransformationException
*/
public static Triple<BlockData, List<TransactionData>, List<ATStateData>> fromByteBuffer(ByteBuffer byteBuffer, boolean finalBlockInBuffer) throws TransformationException {
private static BlockTransformation fromByteBuffer(ByteBuffer byteBuffer, boolean isV2) throws TransformationException {
int version = byteBuffer.getInt();
if (finalBlockInBuffer && byteBuffer.remaining() < BASE_LENGTH + AT_BYTES_LENGTH - VERSION_LENGTH)
if (byteBuffer.remaining() < BASE_LENGTH + AT_BYTES_LENGTH - VERSION_LENGTH)
throw new TransformationException("Byte data too short for Block");
if (finalBlockInBuffer && byteBuffer.remaining() > BlockChain.getInstance().getMaxBlockSize())
if (byteBuffer.remaining() > BlockChain.getInstance().getMaxBlockSize())
throw new TransformationException("Byte data too long for Block");
long timestamp = byteBuffer.getLong();
@@ -117,42 +129,52 @@ public class BlockTransformer extends Transformer {
int atCount = 0;
long atFees = 0;
List<ATStateData> atStates = new ArrayList<>();
byte[] atStatesHash = null;
List<ATStateData> atStates = null;
int atBytesLength = byteBuffer.getInt();
if (isV2) {
// Simply: AT count, AT total fees, hash(all AT states)
atCount = byteBuffer.getInt();
atFees = byteBuffer.getLong();
atStatesHash = new byte[Transformer.SHA256_LENGTH];
byteBuffer.get(atStatesHash);
} else {
// V1: AT info byte length, then per-AT entries of AT address + state hash + fees
int atBytesLength = byteBuffer.getInt();
if (atBytesLength > BlockChain.getInstance().getMaxBlockSize())
throw new TransformationException("Byte data too long for Block's AT info");
if (atBytesLength > BlockChain.getInstance().getMaxBlockSize())
throw new TransformationException("Byte data too long for Block's AT info");
// Read AT-address, SHA256 hash and fees
if (atBytesLength % AT_ENTRY_LENGTH != 0)
throw new TransformationException("AT byte data not a multiple of AT entry length");
ByteBuffer atByteBuffer = byteBuffer.slice();
atByteBuffer.limit(atBytesLength);
ByteBuffer atByteBuffer = byteBuffer.slice();
atByteBuffer.limit(atBytesLength);
// Read AT-address, SHA256 hash and fees
if (atBytesLength % AT_ENTRY_LENGTH != 0)
throw new TransformationException("AT byte data not a multiple of AT entry length");
atStates = new ArrayList<>();
while (atByteBuffer.hasRemaining()) {
byte[] atAddressBytes = new byte[ADDRESS_LENGTH];
atByteBuffer.get(atAddressBytes);
String atAddress = Base58.encode(atAddressBytes);
while (atByteBuffer.hasRemaining()) {
byte[] atAddressBytes = new byte[ADDRESS_LENGTH];
atByteBuffer.get(atAddressBytes);
String atAddress = Base58.encode(atAddressBytes);
byte[] stateHash = new byte[SHA256_LENGTH];
atByteBuffer.get(stateHash);
byte[] stateHash = new byte[SHA256_LENGTH];
atByteBuffer.get(stateHash);
long fees = atByteBuffer.getLong();
long fees = atByteBuffer.getLong();
// Add this AT's fees to our total
atFees += fees;
// Add this AT's fees to our total
atFees += fees;
atStates.add(new ATStateData(atAddress, stateHash, fees));
}
atStates.add(new ATStateData(atAddress, stateHash, fees));
// Bump byteBuffer over AT states just read in slice
byteBuffer.position(byteBuffer.position() + atBytesLength);
// AT count to reflect the number of states we have
atCount = atStates.size();
}
// Bump byteBuffer over AT states just read in slice
byteBuffer.position(byteBuffer.position() + atBytesLength);
// AT count to reflect the number of states we have
atCount = atStates.size();
// Add AT fees to totalFees
totalFees += atFees;
@@ -221,16 +243,15 @@ public class BlockTransformer extends Transformer {
byteBuffer.get(onlineAccountsSignatures);
}
// We should only complain about excess byte data if we aren't expecting more blocks in this ByteBuffer
if (finalBlockInBuffer && byteBuffer.hasRemaining())
throw new TransformationException("Excess byte data found after parsing Block");
// We don't have a height!
Integer height = null;
BlockData blockData = new BlockData(version, reference, transactionCount, totalFees, transactionsSignature, height, timestamp,
minterPublicKey, minterSignature, atCount, atFees, encodedOnlineAccounts, onlineAccountsCount, onlineAccountsTimestamp, onlineAccountsSignatures);
return new Triple<>(blockData, transactions, atStates);
if (isV2)
return new BlockTransformation(blockData, transactions, atStatesHash);
else
return new BlockTransformation(blockData, transactions, atStates);
}
public static int getDataLength(Block block) throws TransformationException {
@@ -266,6 +287,14 @@ public class BlockTransformer extends Transformer {
}
public static byte[] toBytes(Block block) throws TransformationException {
return toBytes(block, false);
}
public static byte[] toBytesV2(Block block) throws TransformationException {
return toBytes(block, true);
}
private static byte[] toBytes(Block block, boolean isV2) throws TransformationException {
BlockData blockData = block.getBlockData();
try {
@@ -279,16 +308,37 @@ public class BlockTransformer extends Transformer {
bytes.write(blockData.getMinterSignature());
int atBytesLength = blockData.getATCount() * AT_ENTRY_LENGTH;
bytes.write(Ints.toByteArray(atBytesLength));
if (isV2) {
ByteArrayOutputStream atHashBytes = new ByteArrayOutputStream(atBytesLength);
long atFees = 0;
for (ATStateData atStateData : block.getATStates()) {
// Skip initial states generated by DEPLOY_AT transactions in the same block
if (atStateData.isInitial())
continue;
for (ATStateData atStateData : block.getATStates()) {
// Skip initial states generated by DEPLOY_AT transactions in the same block
if (atStateData.isInitial())
continue;
bytes.write(Base58.decode(atStateData.getATAddress()));
bytes.write(atStateData.getStateHash());
bytes.write(Longs.toByteArray(atStateData.getFees()));
atHashBytes.write(atStateData.getATAddress().getBytes(StandardCharsets.UTF_8));
atHashBytes.write(atStateData.getStateHash());
atHashBytes.write(Longs.toByteArray(atStateData.getFees()));
atFees += atStateData.getFees();
}
bytes.write(Ints.toByteArray(blockData.getATCount()));
bytes.write(Longs.toByteArray(atFees));
bytes.write(Crypto.digest(atHashBytes.toByteArray()));
} else {
bytes.write(Ints.toByteArray(atBytesLength));
for (ATStateData atStateData : block.getATStates()) {
// Skip initial states generated by DEPLOY_AT transactions in the same block
if (atStateData.isInitial())
continue;
bytes.write(Base58.decode(atStateData.getATAddress()));
bytes.write(atStateData.getStateHash());
bytes.write(Longs.toByteArray(atStateData.getFees()));
}
}
// Transactions
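Note: for reference, a condensed sketch of how the V2 atStatesHash written above is formed, reusing the accessors shown in the diff: non-initial AT states are concatenated as UTF-8 address bytes, state hash and fees, and the whole run is digested with Crypto.digest() (SHA-256 in Qortal).

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.List;

import com.google.common.primitives.Longs;

import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATStateData;

public class AtStatesHashSketch {
    /** Illustrative only: recompute the BLOCK_V2 atStatesHash from a block's AT states. */
    public static byte[] computeAtStatesHash(List<ATStateData> atStates) throws IOException {
        ByteArrayOutputStream atHashBytes = new ByteArrayOutputStream();
        for (ATStateData atStateData : atStates) {
            if (atStateData.isInitial())
                continue;   // initial DEPLOY_AT states are excluded, as in toBytesV2()
            atHashBytes.write(atStateData.getATAddress().getBytes(StandardCharsets.UTF_8));
            atHashBytes.write(atStateData.getStateHash());
            atHashBytes.write(Longs.toByteArray(atStateData.getFees()));
        }
        return Crypto.digest(atHashBytes.toByteArray());
    }
}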

View File

@@ -6,6 +6,7 @@ import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.BlockArchiveReader;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.transform.block.BlockTransformation;
import java.util.List;
@@ -33,8 +34,7 @@ public class BlockArchiveUtils {
repository.discardChanges();
final int requestedRange = endHeight+1-startHeight;
List<Triple<BlockData, List<TransactionData>, List<ATStateData>>> blockInfoList =
BlockArchiveReader.getInstance().fetchBlocksFromRange(startHeight, endHeight);
List<BlockTransformation> blockInfoList = BlockArchiveReader.getInstance().fetchBlocksFromRange(startHeight, endHeight);
// Ensure that we have received all of the requested blocks
if (blockInfoList == null || blockInfoList.isEmpty()) {
@@ -43,27 +43,26 @@ public class BlockArchiveUtils {
if (blockInfoList.size() != requestedRange) {
throw new IllegalStateException("Non matching block count when importing from archive");
}
Triple<BlockData, List<TransactionData>, List<ATStateData>> firstBlock = blockInfoList.get(0);
if (firstBlock == null || firstBlock.getA().getHeight() != startHeight) {
BlockTransformation firstBlock = blockInfoList.get(0);
if (firstBlock == null || firstBlock.getBlockData().getHeight() != startHeight) {
throw new IllegalStateException("Non matching first block when importing from archive");
}
if (blockInfoList.size() > 0) {
Triple<BlockData, List<TransactionData>, List<ATStateData>> lastBlock =
blockInfoList.get(blockInfoList.size() - 1);
if (lastBlock == null || lastBlock.getA().getHeight() != endHeight) {
BlockTransformation lastBlock = blockInfoList.get(blockInfoList.size() - 1);
if (lastBlock == null || lastBlock.getBlockData().getHeight() != endHeight) {
throw new IllegalStateException("Non matching last block when importing from archive");
}
}
// Everything seems okay, so go ahead with the import
for (Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo : blockInfoList) {
for (BlockTransformation blockInfo : blockInfoList) {
try {
// Save block
repository.getBlockRepository().save(blockInfo.getA());
repository.getBlockRepository().save(blockInfo.getBlockData());
// Save AT state data hashes
for (ATStateData atStateData : blockInfo.getC()) {
atStateData.setHeight(blockInfo.getA().getHeight());
for (ATStateData atStateData : blockInfo.getAtStates()) {
atStateData.setHeight(blockInfo.getBlockData().getHeight());
repository.getATRepository().save(atStateData);
}

View File

@@ -15,10 +15,15 @@
"minAccountLevelToMint": 1,
"minAccountLevelForBlockSubmissions": 5,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 6,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 1657382400000, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 43200000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"onlineAccountsModulusV2Timestamp": 1659801600000,
"rewardsByHeight": [
{ "height": 1, "reward": 5.00 },
{ "height": 259201, "reward": 4.75 },
@@ -35,14 +40,16 @@
{ "height": 3110401, "reward": 2.00 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
{ "id": 1, "levels": [ 1, 2 ], "share": 0.05 },
{ "id": 2, "levels": [ 3, 4 ], "share": 0.10 },
{ "id": 3, "levels": [ 5, 6 ], "share": 0.15 },
{ "id": 4, "levels": [ 7, 8 ], "share": 0.20 },
{ "id": 5, "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"minAccountsToActivateShareBin": 30,
"shareBinActivationMinLevel": 7,
"blocksNeededByLevel": [ 7200, 64800, 129600, 172800, 244000, 345600, 518400, 691200, 864000, 1036800 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
@@ -57,9 +64,12 @@
"atFindNextTransactionFix": 275000,
"newBlockSigHeight": 320000,
"shareBinFix": 399000,
"rewardShareLimitTimestamp": 1657382400000,
"calcChainWeightTimestamp": 1620579600000,
"transactionV5Timestamp": 1642176000000,
"transactionV6Timestamp": 9999999999999
"transactionV6Timestamp": 9999999999999,
"disableReferenceTimestamp": 1655222400000,
"aggregateSignatureTimestamp": 1656864000000
},
"genesisInfo": {
"version": 4,

View File

@@ -6,15 +6,15 @@
### Common ###
JSON = JSON Nachricht konnte nicht geparst werden
INSUFFICIENT_BALANCE = insufficient balance
INSUFFICIENT_BALANCE = Kein Ausgleich
UNAUTHORIZED = API-Aufruf nicht autorisiert
REPOSITORY_ISSUE = Repository-Fehler
NON_PRODUCTION = this API call is not permitted for production systems
NON_PRODUCTION = Dieser API-Aufruf ist für Produktionssysteme nicht gestattet
BLOCKCHAIN_NEEDS_SYNC = blockchain needs to synchronize first
BLOCKCHAIN_NEEDS_SYNC = Blockchain muss sich erst synchronisieren
NO_TIME_SYNC = noch keine Uhrensynchronisation
@@ -68,16 +68,16 @@ ORDER_UNKNOWN = unbekannte asset order ID
GROUP_UNKNOWN = Gruppe unbekannt
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blokchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = fremde Blockchain oder ElectrumX Netzwerk Problem
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = insufficient balance on foreign blockchain
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = unzureichendes Guthaben auf fremder Blockchain
FOREIGN_BLOCKCHAIN_TOO_SOON = too soon to broadcast foreign blockchain transaction (LockTime/median block time)
FOREIGN_BLOCKCHAIN_TOO_SOON = zu früh um fremde Blockchain-Transaktionen zu übertragen (Sperrzeit/mittlere Blockzeit)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = order amount too low
ORDER_SIZE_TOO_SMALL = Bestellmenge zu niedrig
### Data ###
FILE_NOT_FOUND = Datei nicht gefunden
NO_REPLY = peer did not reply with data
NO_REPLY = Peer hat nicht mit Daten geantwortet

View File

@@ -0,0 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "ko",
### Common ###
JSON = JSON 메시지를 구문 분석하지 못했습니다.
INSUFFICIENT_BALANCE = 잔고 부족
UNAUTHORIZED = 승인되지 않은 API 호출
REPOSITORY_ISSUE = 리포지토리 오류
NON_PRODUCTION = 이 API 호출은 프로덕션 시스템에 허용되지 않습니다.
BLOCKCHAIN_NEEDS_SYNC = 블록체인이 먼저 동기화되어야 함
NO_TIME_SYNC = 아직 동기화가 없습니다.
### Validation ###
INVALID_SIGNATURE = 무효 서명
INVALID_ADDRESS = 잘못된 주소
INVALID_PUBLIC_KEY = 잘못된 공개 키
INVALID_DATA = 잘못된 데이터
INVALID_NETWORK_ADDRESS = 잘못된 네트워크 주소
ADDRESS_UNKNOWN = 계정 주소 알 수 없음
INVALID_CRITERIA = 잘못된 검색 기준
INVALID_REFERENCE = 무효 참조
TRANSFORMATION_ERROR = JSON을 트랜잭션으로 변환할 수 없습니다.
INVALID_PRIVATE_KEY = 잘못된 개인 키
INVALID_HEIGHT = 잘못된 블록 높이
CANNOT_MINT = 계정은 민팅할 수 없습니다.
### Blocks ###
BLOCK_UNKNOWN = 알 수 없는 블록
### Transactions ###
TRANSACTION_UNKNOWN = 알 수 없는 거래
PUBLIC_KEY_NOT_FOUND = 공개 키를 찾을 수 없음
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = 유효하지 않은 거래: %s (%s)
### Naming ###
NAME_UNKNOWN = 이름 미상
### Asset ###
INVALID_ASSET_ID = 잘못된 자산 ID
INVALID_ORDER_ID = 자산 주문 ID가 잘못되었습니다.
ORDER_UNKNOWN = 알 수 없는 자산 주문 ID
### Groups ###
GROUP_UNKNOWN = 알 수 없는 그룹
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = 외부 블록체인 또는 일렉트럼X 네트워크 문제
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = 외부 블록체인 잔액 부족
FOREIGN_BLOCKCHAIN_TOO_SOON = 외부 블록체인 트랜잭션을 브로드캐스트하기에는 너무 빠릅니다(LockTime/중앙 블록 시간).
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = 주문량이 너무 적다
### Data ###
FILE_NOT_FOUND = 파일을 찾을 수 없음
NO_REPLY = 피어가 허용된 시간 내에 응답하지 않음

View File

@@ -0,0 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "ro",
### Comun ###
JSON = nu s-a reusit analizarea mesajului JSON
INSUFFICIENT_BALANCE = fonduri insuficiente
UNAUTHORIZED = Solicitare API neautorizata
REPOSITORY_ISSUE = eroare a depozitarului
NON_PRODUCTION = aceasta solicitare API nu este permisa pentru sistemele de productie
BLOCKCHAIN_NEEDS_SYNC = blockchain-ul trebuie sa se sincronizeze mai intai
NO_TIME_SYNC = nu exista inca o sincronizare a ceasului
### Validation ###
INVALID_SIGNATURE = semnatura invalida
INVALID_ADDRESS = adresa invalida
INVALID_PUBLIC_KEY = cheie publica invalida
INVALID_DATA = date invalide
INVALID_NETWORK_ADDRESS = adresa de retea invalida
ADDRESS_UNKNOWN = adresa contului necunoscuta
INVALID_CRITERIA = criteriu de cautare invalid
INVALID_REFERENCE = referinta invalida
TRANSFORMATION_ERROR = nu s-a putut transforma JSON in tranzactie
INVALID_PRIVATE_KEY = cheie privata invalida
INVALID_HEIGHT = dimensiunea blocului invalida
CANNOT_MINT = contul nu poate produce moneda
### Blocks ###
BLOCK_UNKNOWN = bloc necunoscut
### Transactions ###
TRANSACTION_UNKNOWN = tranzactie necunoscuta
PUBLIC_KEY_NOT_FOUND = nu s-a gasit cheia publica
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = tranzactie invalida: %s (%s)
### Naming ###
NAME_UNKNOWN = nume necunoscut
### Asset ###
INVALID_ASSET_ID = ID active invalid
INVALID_ORDER_ID = ID-ul de comanda al activului invalid
ORDER_UNKNOWN = ID necunoscut al comenzii activului
### Groups ###
GROUP_UNKNOWN = grup necunoscut
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = problema de blockchain strain sau de retea ElectrumX
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = sold insuficient pe blockchain strain
FOREIGN_BLOCKCHAIN_TOO_SOON = prea devreme pentru a difuza o tranzactie blockchain straina (LockTime/median block time)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = valoarea tranzactiei este prea mica
### Data ###
FILE_NOT_FOUND = nu s-a gasit fisierul
NO_REPLY = omologul nu a raspuns in termenul stabilit

View File

@@ -0,0 +1,46 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# SysTray pop-up menu
APPLYING_UPDATE_AND_RESTARTING = 자동 업데이트를 적용하고 다시 시작하는 중...
AUTO_UPDATE = 자동 업데이트
BLOCK_HEIGHT = 높이
BUILD_VERSION = 빌드 버전
CHECK_TIME_ACCURACY = 시간 정확도 점검
CONNECTING = 연결하는
CONNECTION = 연결
CONNECTIONS = 연결
CREATING_BACKUP_OF_DB_FILES = 데이터베이스 파일의 백업을 만드는 중...
DB_BACKUP = 데이터베이스 백업
DB_CHECKPOINT = 데이터베이스 체크포인트
DB_MAINTENANCE = 데이터베이스 유지 관리
EXIT = 종료
LITE_NODE = 라이트 노드
MINTING_DISABLED = 민팅중이 아님
MINTING_ENABLED = \u2714 민팅
OPEN_UI = UI 열기
PERFORMING_DB_CHECKPOINT = 커밋되지 않은 데이터베이스 변경 내용을 저장하는 중...
PERFORMING_DB_MAINTENANCE = 예약된 유지 관리 수행 중...
SYNCHRONIZE_CLOCK = 시간 동기화
SYNCHRONIZING_BLOCKCHAIN = 동기화중
SYNCHRONIZING_CLOCK = 시간 동기화

View File

@@ -0,0 +1,46 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# SysTray pop-up menu
APPLYING_UPDATE_AND_RESTARTING = Aplicarea actualizarii automate si repornire...
AUTO_UPDATE = Actualizare automata
BLOCK_HEIGHT = dimensiune
BUILD_VERSION = versiunea compilatiei
CHECK_TIME_ACCURACY = verificare exactitate ora
CONNECTING = Se conecteaza
CONNECTION = conexiune
CONNECTIONS = conexiuni
CREATING_BACKUP_OF_DB_FILES = Se creaza copia bazei de date
DB_BACKUP = Copie baza de date
DB_CHECKPOINT = Punct de control al bazei de date
DB_MAINTENANCE = Intretinerea bazei de date
EXIT = iesire
LITE_NODE = Nod Lite
MINTING_DISABLED = nu produce moneda
MINTING_ENABLED = \u2714 Minting
OPEN_UI = Deschidere interfata utilizator IU
PERFORMING_DB_CHECKPOINT = Salvarea modificarilor nerealizate ale bazei de date...
PERFORMING_DB_MAINTENANCE = Efectuarea intretinerii programate...
SYNCHRONIZE_CLOCK = Sincronizare ceas
SYNCHRONIZING_BLOCKCHAIN = Sincronizare
SYNCHRONIZING_CLOCK = Se sincronizeaza ceasul

View File

@@ -25,7 +25,7 @@ DB_CHECKPOINT = Databaskontrollpunkt
DB_MAINTENANCE = Databasunderhåll
EXIT = Utgång
EXIT = Avsluta
MINTING_DISABLED = Präglar INTE

View File

@@ -0,0 +1,195 @@
#
ACCOUNT_ALREADY_EXISTS = 계정이 이미 존재합니다.
ACCOUNT_CANNOT_REWARD_SHARE = 계정이 보상을 공유할 수 없습니다.
ADDRESS_ABOVE_RATE_LIMIT = 주소가 지정된 속도 제한에 도달했습니다.
ADDRESS_BLOCKED = 이 주소는 차단되었습니다.
ALREADY_GROUP_ADMIN = 이미 그룹 관리자
ALREADY_GROUP_MEMBER = 이미 그룹 맴버
ALREADY_VOTED_FOR_THAT_OPTION = 이미 그 옵션에 투표했다.
ASSET_ALREADY_EXISTS = 자산이 이미 있습니다.
ASSET_DOES_NOT_EXIST = 자산이 존재하지 않습니다.
ASSET_DOES_NOT_MATCH_AT = 자산이 AT의 자산과 일치하지 않습니다.
ASSET_NOT_SPENDABLE = 자산을 사용할 수 없습니다.
AT_ALREADY_EXISTS = AT가 이미 있습니다.
AT_IS_FINISHED = AT가 완료되었습니다.
AT_UNKNOWN = 알 수 없는 AT
BAN_EXISTS = 금지가 이미 있습니다.
BAN_UNKNOWN = 금지 알 수 없음
BANNED_FROM_GROUP = 그룹에서 금지
BUYER_ALREADY_OWNER = 구매자는 이미 소유자입니다
CLOCK_NOT_SYNCED = 동기화되지 않은 시간
DUPLICATE_MESSAGE = 주소가 중복 메시지를 보냈습니다.
DUPLICATE_OPTION = 중복 옵션
GROUP_ALREADY_EXISTS = 그룹이 이미 존재합니다
GROUP_APPROVAL_DECIDED = 그룹 승인이 이미 결정되었습니다.
GROUP_APPROVAL_NOT_REQUIRED = 그룹 승인이 필요하지 않음
GROUP_DOES_NOT_EXIST = 그룹이 존재하지 않습니다
GROUP_ID_MISMATCH = 그룹 ID 불일치
GROUP_OWNER_CANNOT_LEAVE = 그룹 소유자는 그룹을 나갈 수 없습니다
HAVE_EQUALS_WANT = 소유 자산은 원하는 자산과 동일합니다.
INCORRECT_NONCE = 잘못된 PoW nonce
INSUFFICIENT_FEE = 부족한 수수료
INVALID_ADDRESS = 잘못된 주소
INVALID_AMOUNT = 유효하지 않은 금액
INVALID_ASSET_OWNER = 잘못된 자산 소유자
INVALID_AT_TRANSACTION = 유효하지 않은 AT 거래
INVALID_AT_TYPE_LENGTH = 잘못된 AT '유형' 길이
INVALID_BUT_OK = 유효하지 않지만 OK
INVALID_CREATION_BYTES = 잘못된 생성 바이트
INVALID_DATA_LENGTH = 잘못된 데이터 길이
INVALID_DESCRIPTION_LENGTH = 잘못된 설명 길이
INVALID_GROUP_APPROVAL_THRESHOLD = 잘못된 그룹 승인 임계값
INVALID_GROUP_BLOCK_DELAY = 잘못된 그룹 승인 차단 지연
INVALID_GROUP_ID = 잘못된 그룹 ID
INVALID_GROUP_OWNER = 잘못된 그룹 소유자
INVALID_LIFETIME = 유효하지 않은 수명
INVALID_NAME_LENGTH = 잘못된 이름 길이
INVALID_NAME_OWNER = 잘못된 이름 소유자
INVALID_OPTION_LENGTH = 잘못된 옵션 길이
INVALID_OPTIONS_COUNT = 잘못된 옵션 수
INVALID_ORDER_CREATOR = 잘못된 주문 생성자
INVALID_PAYMENTS_COUNT = 유효하지 않은 지불 수
INVALID_PUBLIC_KEY = 잘못된 공개 키
INVALID_QUANTITY = 유효하지 않은 수량
INVALID_REFERENCE = 잘못된 참조
INVALID_RETURN = 무효 반환
INVALID_REWARD_SHARE_PERCENT = 잘못된 보상 공유 비율
INVALID_SELLER = 무효 판매자
INVALID_TAGS_LENGTH = 잘못된 '태그' 길이
INVALID_TIMESTAMP_SIGNATURE = 유효하지 않은 타임스탬프 서명
INVALID_TX_GROUP_ID = 잘못된 트랜잭션 그룹 ID
INVALID_VALUE_LENGTH = 잘못된 '값' 길이
INVITE_UNKNOWN = 알 수 없는 그룹 초대
JOIN_REQUEST_EXISTS = 그룹 가입 요청이 이미 있습니다.
MAXIMUM_REWARD_SHARES = 이미 이 계정에 대한 최대 보상 공유 수에 도달했습니다.
MISSING_CREATOR = 생성자 누락
MULTIPLE_NAMES_FORBIDDEN = 계정당 여러 등록 이름은 금지되어 있습니다.
NAME_ALREADY_FOR_SALE = 이미 판매 중인 이름
NAME_ALREADY_REGISTERED = 이미 등록된 이름
NAME_BLOCKED = 이 이름은 차단되었습니다
NAME_DOES_NOT_EXIST = 이름이 존재하지 않습니다
NAME_NOT_FOR_SALE = 이름은 판매용이 아닙니다
NAME_NOT_NORMALIZED = 유니코드 '정규화된' 형식이 아닌 이름
NEGATIVE_AMOUNT = 유효하지 않은/음수 금액
NEGATIVE_FEE = 무효/음수 수수료
NEGATIVE_PRICE = 유효하지 않은/음수 가격
NO_BALANCE = 잔액 불충분
NO_BLOCKCHAIN_LOCK = 노드의 블록체인이 현재 사용 중입니다.
NO_FLAG_PERMISSION = 계정에 해당 권한이 없습니다
NOT_GROUP_ADMIN = 계정은 그룹 관리자가 아닙니다.
NOT_GROUP_MEMBER = 계정이 그룹 구성원이 아닙니다.
NOT_MINTING_ACCOUNT = 계정은 발행할 수 없습니다
NOT_YET_RELEASED = 아직 출시되지 않은 기능
OK = OK
ORDER_ALREADY_CLOSED = 자산 거래 주문이 이미 마감되었습니다
ORDER_DOES_NOT_EXIST = 자산 거래 주문이 존재하지 않습니다
POLL_ALREADY_EXISTS = 설문조사가 이미 존재합니다
POLL_DOES_NOT_EXIST = 설문조사가 존재하지 않습니다
POLL_OPTION_DOES_NOT_EXIST = 투표 옵션이 존재하지 않습니다
PUBLIC_KEY_UNKNOWN = 공개 키 알 수 없음
REWARD_SHARE_UNKNOWN = 알 수 없는 보상 공유
SELF_SHARE_EXISTS = 자체 공유(보상 공유)가 이미 존재합니다.
TIMESTAMP_TOO_NEW = 타임스탬프가 너무 새롭습니다.
TIMESTAMP_TOO_OLD = 너무 오래된 타임스탬프
TOO_MANY_UNCONFIRMED = 계정에 보류 중인 확인되지 않은 거래가 너무 많습니다.
TRANSACTION_ALREADY_CONFIRMED = 거래가 이미 확인되었습니다
TRANSACTION_ALREADY_EXISTS = 거래가 이미 존재합니다
TRANSACTION_UNKNOWN = 알 수 없는 거래
TX_GROUP_ID_MISMATCH = 트랜잭션의 그룹 ID가 일치하지 않습니다

View File

@@ -0,0 +1,195 @@
#
ACCOUNT_ALREADY_EXISTS = contul exista deja
ACCOUNT_CANNOT_REWARD_SHARE = contul nu poate genera reward-share
ADDRESS_ABOVE_RATE_LIMIT = adresa a atins limita specificata
ADDRESS_BLOCKED = aceasta adresa este blocata
ALREADY_GROUP_ADMIN = sunteti deja admin
ALREADY_GROUP_MEMBER = sunteti deja membru
ALREADY_VOTED_FOR_THAT_OPTION = deja ati votat pentru aceasta optiune
ASSET_ALREADY_EXISTS = activul deja exista
ASSET_DOES_NOT_EXIST = activul nu exista
ASSET_DOES_NOT_MATCH_AT = activul nu se potriveste cu activul TA
ASSET_NOT_SPENDABLE = activul nu poate fi utilizat
AT_ALREADY_EXISTS = TA exista deja
AT_IS_FINISHED = TA s-a terminat
AT_UNKNOWN = TA necunoscuta
BAN_EXISTS = ban-ul este deja folosit
BAN_UNKNOWN = ban necunoscut
BANNED_FROM_GROUP = accesul la grup a fost blocat
BUYER_ALREADY_OWNER = cumparatorul este deja detinator
CLOCK_NOT_SYNCED = ceasul nu este sincronizat
DUPLICATE_MESSAGE = adresa a trimis mesaje duplicate
DUPLICATE_OPTION = optiune duplicata
GROUP_ALREADY_EXISTS = grupul deja exista
GROUP_APPROVAL_DECIDED = aprobarea grupului a fost deja decisa
GROUP_APPROVAL_NOT_REQUIRED = aprobarea grupului nu este solicitata
GROUP_DOES_NOT_EXIST = grupul nu exista
GROUP_ID_MISMATCH = ID-ul grupului incorect
GROUP_OWNER_CANNOT_LEAVE = proprietarul grupului nu poate parasi grupul
HAVE_EQUALS_WANT = a avea un obiect este acelasi lucru cu a vrea un obiect
INCORRECT_NONCE = numar PoW incorect
INSUFFICIENT_FEE = taxa insuficienta
INVALID_ADDRESS = adresa invalida
INVALID_AMOUNT = suma invalida
INVALID_ASSET_OWNER = propietar al activului invalid
INVALID_AT_TRANSACTION = tranzactie automata invalida
INVALID_AT_TYPE_LENGTH = TA invalida 'tip' lungime
INVALID_BUT_OK = invalid dar OK
INVALID_CREATION_BYTES = octeti de creatie invalizi
INVALID_DATA_LENGTH = lungimea datelor invalida
INVALID_DESCRIPTION_LENGTH = lungimea descrierii invalida
INVALID_GROUP_APPROVAL_THRESHOLD = prag de aprobare a grupului invalid
INVALID_GROUP_BLOCK_DELAY = intarziere invalida a blocului de aprobare a grupului
INVALID_GROUP_ID = ID de grup invalid
INVALID_GROUP_OWNER = proprietar de grup invalid
INVALID_LIFETIME = durata de viata invalida
INVALID_NAME_LENGTH = lungimea numelui invalida
INVALID_NAME_OWNER = numele proprietarului invalid
INVALID_OPTION_LENGTH = lungimea optiunii invalida
INVALID_OPTIONS_COUNT = contor de optiuni invalid
INVALID_ORDER_CREATOR = creator de ordine invalid
INVALID_PAYMENTS_COUNT = contor de plati invalid
INVALID_PUBLIC_KEY = cheie publica invalida
INVALID_QUANTITY = cantitate invalida
INVALID_REFERENCE = referinta invalida
INVALID_RETURN = returnare invalida
INVALID_REWARD_SHARE_PERCENT = procentaj al cotei de recompensa invalid
INVALID_SELLER = vanzator invalid
INVALID_TAGS_LENGTH = lungime a tagurilor invalida
INVALID_TIMESTAMP_SIGNATURE = semnatura timestamp invalida
INVALID_TX_GROUP_ID = ID-ul grupului de tranzactii invalid
INVALID_VALUE_LENGTH = lungimea "valorii" invalida
INVITE_UNKNOWN = invitatie de grup invalida
JOIN_REQUEST_EXISTS = cererea de aderare la grup exista deja
MAXIMUM_REWARD_SHARES = ati ajuns deja la numarul maxim de cote de recompensa pentru acest cont
MISSING_CREATOR = creator lipsa
MULTIPLE_NAMES_FORBIDDEN = este interzisa folosirea mai multor nume inregistrate pe cont
NAME_ALREADY_FOR_SALE = numele este deja de vanzare
NAME_ALREADY_REGISTERED = nume deja inregistrat
NAME_BLOCKED = numele este blocat
NAME_DOES_NOT_EXIST = numele nu exista
NAME_NOT_FOR_SALE = numele nu este de vanzare
NAME_NOT_NORMALIZED = numele nu este in forma "normalizata" Unicode
NEGATIVE_AMOUNT = suma invalida/negativa
NEGATIVE_FEE = taxa invalida/negativa
NEGATIVE_PRICE = pret invalid/negativ
NO_BALANCE = fonduri insuficiente
NO_BLOCKCHAIN_LOCK = nodul blockchain-ului este momentan ocupat
NO_FLAG_PERMISSION = contul nu are aceasta permisiune
NOT_GROUP_ADMIN = contul nu este un administrator de grup
NOT_GROUP_MEMBER = contul nu este un membru al grupului
NOT_MINTING_ACCOUNT = contul nu poate genera moneda Qort
NOT_YET_RELEASED = caracteristica nu este inca disponibila
OK = OK
ORDER_ALREADY_CLOSED = ordinul de tranzactionare a activului este deja inchis
ORDER_DOES_NOT_EXIST = ordinul de comercializare a activului nu exista
POLL_ALREADY_EXISTS = sondajul exista deja
POLL_DOES_NOT_EXIST = sondajul nu exista
POLL_OPTION_DOES_NOT_EXIST = optiunea de sondaj nu exista
PUBLIC_KEY_UNKNOWN = cheie publica necunoscuta
REWARD_SHARE_UNKNOWN = cheie de cota de recompensa necunoscuta
SELF_SHARE_EXISTS = cota personala (cota de recompensa) exista deja
TIMESTAMP_TOO_NEW = timestamp prea nou
TIMESTAMP_TOO_OLD = timestamp prea vechi
TOO_MANY_UNCONFIRMED = contul are prea multe tranzactii neconfirmate in asteptare
TRANSACTION_ALREADY_CONFIRMED = tranzactia a fost deja confirmata
TRANSACTION_ALREADY_EXISTS = tranzactia exista deja
TRANSACTION_UNKNOWN = tranzactie necunoscuta
TX_GROUP_ID_MISMATCH = ID-ul de grup al tranzactiei nu se potriveste

View File

@@ -20,6 +20,7 @@ import org.qortal.test.common.Common;
import org.qortal.transaction.DeployAtTransaction;
import org.qortal.transaction.Transaction;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformation;
import org.qortal.utils.BlockArchiveUtils;
import org.qortal.utils.NTP;
import org.qortal.utils.Triple;
@@ -123,8 +124,8 @@ public class BlockArchiveTests extends Common {
// Read block 2 from the archive
BlockArchiveReader reader = BlockArchiveReader.getInstance();
Triple<BlockData, List<TransactionData>, List<ATStateData>> block2Info = reader.fetchBlockAtHeight(2);
BlockData block2ArchiveData = block2Info.getA();
BlockTransformation block2Info = reader.fetchBlockAtHeight(2);
BlockData block2ArchiveData = block2Info.getBlockData();
// Read block 2 from the repository
BlockData block2RepositoryData = repository.getBlockRepository().fromHeight(2);
@@ -137,8 +138,8 @@ public class BlockArchiveTests extends Common {
assertEquals(1, block2ArchiveData.getOnlineAccountsCount());
// Read block 900 from the archive
Triple<BlockData, List<TransactionData>, List<ATStateData>> block900Info = reader.fetchBlockAtHeight(900);
BlockData block900ArchiveData = block900Info.getA();
BlockTransformation block900Info = reader.fetchBlockAtHeight(900);
BlockData block900ArchiveData = block900Info.getBlockData();
// Read block 900 from the repository
BlockData block900RepositoryData = repository.getBlockRepository().fromHeight(900);
@@ -200,10 +201,10 @@ public class BlockArchiveTests extends Common {
// Read a block from the archive
BlockArchiveReader reader = BlockArchiveReader.getInstance();
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = reader.fetchBlockAtHeight(testHeight);
BlockData archivedBlockData = blockInfo.getA();
ATStateData archivedAtStateData = blockInfo.getC().isEmpty() ? null : blockInfo.getC().get(0);
List<TransactionData> archivedTransactions = blockInfo.getB();
BlockTransformation blockInfo = reader.fetchBlockAtHeight(testHeight);
BlockData archivedBlockData = blockInfo.getBlockData();
ATStateData archivedAtStateData = blockInfo.getAtStates().isEmpty() ? null : blockInfo.getAtStates().get(0);
List<TransactionData> archivedTransactions = blockInfo.getTransactions();
// Read the same block from the repository
BlockData repositoryBlockData = repository.getBlockRepository().fromHeight(testHeight);
@@ -255,7 +256,7 @@ public class BlockArchiveTests extends Common {
// Check block 10 (unarchived)
BlockArchiveReader reader = BlockArchiveReader.getInstance();
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = reader.fetchBlockAtHeight(10);
BlockTransformation blockInfo = reader.fetchBlockAtHeight(10);
assertNull(blockInfo);
}

View File

@@ -24,6 +24,7 @@ import org.qortal.test.common.TransactionUtils;
import org.qortal.transaction.Transaction;
import org.qortal.transaction.Transaction.TransactionType;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformation;
import org.qortal.transform.block.BlockTransformer;
import org.qortal.transform.transaction.TransactionTransformer;
import org.qortal.utils.Base58;
@@ -121,10 +122,10 @@ public class BlockTests extends Common {
assertEquals(BlockTransformer.getDataLength(block), bytes.length);
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = BlockTransformer.fromBytes(bytes);
BlockTransformation blockInfo = BlockTransformer.fromBytes(bytes);
// Compare transactions
List<TransactionData> deserializedTransactions = blockInfo.getB();
List<TransactionData> deserializedTransactions = blockInfo.getTransactions();
assertEquals("Transaction count differs", blockData.getTransactionCount(), deserializedTransactions.size());
for (int i = 0; i < blockData.getTransactionCount(); ++i) {

View File

@@ -4,7 +4,7 @@ import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.block.BlockChain;
import org.qortal.crypto.AES;
import org.qortal.crypto.BouncyCastle25519;
import org.qortal.crypto.Qortal25519Extras;
import org.qortal.crypto.Crypto;
import org.qortal.test.common.Common;
import org.qortal.utils.Base58;
@@ -123,14 +123,14 @@ public class CryptoTests extends Common {
random.nextBytes(ed25519PrivateKey);
PrivateKeyAccount account = new PrivateKeyAccount(null, ed25519PrivateKey);
byte[] x25519PrivateKey = BouncyCastle25519.toX25519PrivateKey(account.getPrivateKey());
byte[] x25519PrivateKey = Qortal25519Extras.toX25519PrivateKey(account.getPrivateKey());
X25519PrivateKeyParameters x25519PrivateKeyParams = new X25519PrivateKeyParameters(x25519PrivateKey, 0);
// Derive X25519 public key from X25519 private key
byte[] x25519PublicKeyFromPrivate = x25519PrivateKeyParams.generatePublicKey().getEncoded();
// Derive X25519 public key from Ed25519 public key
byte[] x25519PublicKeyFromEd25519 = BouncyCastle25519.toX25519PublicKey(account.getPublicKey());
byte[] x25519PublicKeyFromEd25519 = Qortal25519Extras.toX25519PublicKey(account.getPublicKey());
assertEquals(String.format("Public keys do not match, from private key %s", Base58.encode(ed25519PrivateKey)), Base58.encode(x25519PublicKeyFromPrivate), Base58.encode(x25519PublicKeyFromEd25519));
}
@@ -162,10 +162,10 @@ public class CryptoTests extends Common {
}
private static byte[] calcBCSharedSecret(byte[] ed25519PrivateKey, byte[] ed25519PublicKey) {
byte[] x25519PrivateKey = BouncyCastle25519.toX25519PrivateKey(ed25519PrivateKey);
byte[] x25519PrivateKey = Qortal25519Extras.toX25519PrivateKey(ed25519PrivateKey);
X25519PrivateKeyParameters privateKeyParams = new X25519PrivateKeyParameters(x25519PrivateKey, 0);
byte[] x25519PublicKey = BouncyCastle25519.toX25519PublicKey(ed25519PublicKey);
byte[] x25519PublicKey = Qortal25519Extras.toX25519PublicKey(ed25519PublicKey);
X25519PublicKeyParameters publicKeyParams = new X25519PublicKeyParameters(x25519PublicKey, 0);
byte[] sharedSecret = new byte[32];
@@ -186,10 +186,10 @@ public class CryptoTests extends Common {
final String expectedTheirX25519PublicKey = "ANjnZLRSzW9B1aVamiYGKP3XtBooU9tGGDjUiibUfzp2";
final String expectedSharedSecret = "DTMZYG96x8XZuGzDvHFByVLsXedimqtjiXHhXPVe58Ap";
byte[] ourX25519PrivateKey = BouncyCastle25519.toX25519PrivateKey(ourPrivateKey);
byte[] ourX25519PrivateKey = Qortal25519Extras.toX25519PrivateKey(ourPrivateKey);
assertEquals("X25519 private key incorrect", expectedOurX25519PrivateKey, Base58.encode(ourX25519PrivateKey));
byte[] theirX25519PublicKey = BouncyCastle25519.toX25519PublicKey(theirPublicKey);
byte[] theirX25519PublicKey = Qortal25519Extras.toX25519PublicKey(theirPublicKey);
assertEquals("X25519 public key incorrect", expectedTheirX25519PublicKey, Base58.encode(theirX25519PublicKey));
byte[] sharedSecret = calcBCSharedSecret(ourPrivateKey, theirPublicKey);

View File

@@ -0,0 +1,190 @@
package org.qortal.test;
import com.google.common.hash.HashCode;
import com.google.common.primitives.Bytes;
import com.google.common.primitives.Longs;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import org.junit.Test;
import org.qortal.crypto.Qortal25519Extras;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.transform.Transformer;
import java.math.BigInteger;
import java.security.SecureRandom;
import java.security.Security;
import java.util.*;
import java.util.stream.Collectors;
import static org.junit.Assert.*;
public class SchnorrTests extends Qortal25519Extras {
static {
// This must go before any calls to LogManager/Logger
System.setProperty("java.util.logging.manager", "org.apache.logging.log4j.jul.LogManager");
Security.insertProviderAt(new BouncyCastleProvider(), 0);
Security.insertProviderAt(new BouncyCastleJsseProvider(), 1);
}
private static final SecureRandom SECURE_RANDOM = new SecureRandom();
@Test
public void testConversion() {
// Scalar form
byte[] scalarA = HashCode.fromString("0100000000000000000000000000000000000000000000000000000000000000".toLowerCase()).asBytes();
System.out.printf("a: %s%n", HashCode.fromBytes(scalarA));
byte[] pointA = HashCode.fromString("5866666666666666666666666666666666666666666666666666666666666666".toLowerCase()).asBytes();
BigInteger expectedY = new BigInteger("46316835694926478169428394003475163141307993866256225615783033603165251855960");
PointAccum pointAccum = Qortal25519Extras.newPointAccum();
scalarMultBase(scalarA, pointAccum);
byte[] encoded = new byte[POINT_BYTES];
if (0 == encodePoint(pointAccum, encoded, 0))
fail("Point encoding failed");
System.out.printf("aG: %s%n", HashCode.fromBytes(encoded));
assertArrayEquals(pointA, encoded);
byte[] yBytes = new byte[POINT_BYTES];
System.arraycopy(encoded,0, yBytes, 0, encoded.length);
Bytes.reverse(yBytes);
System.out.printf("yBytes: %s%n", HashCode.fromBytes(yBytes));
BigInteger yBI = new BigInteger(yBytes);
System.out.printf("aG y: %s%n", yBI);
assertEquals(expectedY, yBI);
}
@Test
public void testAddition() {
/*
* 1G: b'5866666666666666666666666666666666666666666666666666666666666666'
* 2G: b'c9a3f86aae465f0e56513864510f3997561fa2c9e85ea21dc2292309f3cd6022'
* 3G: b'd4b4f5784868c3020403246717ec169ff79e26608ea126a1ab69ee77d1b16712'
*/
// Scalar form
byte[] s1 = HashCode.fromString("0100000000000000000000000000000000000000000000000000000000000000".toLowerCase()).asBytes();
byte[] s2 = HashCode.fromString("0200000000000000000000000000000000000000000000000000000000000000".toLowerCase()).asBytes();
// Point form
byte[] g1 = HashCode.fromString("5866666666666666666666666666666666666666666666666666666666666666".toLowerCase()).asBytes();
byte[] g2 = HashCode.fromString("c9a3f86aae465f0e56513864510f3997561fa2c9e85ea21dc2292309f3cd6022".toLowerCase()).asBytes();
byte[] g3 = HashCode.fromString("d4b4f5784868c3020403246717ec169ff79e26608ea126a1ab69ee77d1b16712".toLowerCase()).asBytes();
PointAccum p1 = Qortal25519Extras.newPointAccum();
scalarMultBase(s1, p1);
PointAccum p2 = Qortal25519Extras.newPointAccum();
scalarMultBase(s2, p2);
pointAdd(pointCopy(p1), p2);
byte[] encoded = new byte[POINT_BYTES];
if (0 == encodePoint(p2, encoded, 0))
fail("Point encoding failed");
System.out.printf("sum: %s%n", HashCode.fromBytes(encoded));
assertArrayEquals(g3, encoded);
}
@Test
public void testSimpleSign() {
byte[] privateKey = HashCode.fromString("0100000000000000000000000000000000000000000000000000000000000000".toLowerCase()).asBytes();
byte[] message = HashCode.fromString("01234567".toLowerCase()).asBytes();
byte[] signature = signForAggregation(privateKey, message);
System.out.printf("signature: %s%n", HashCode.fromBytes(signature));
}
@Test
public void testSimpleVerify() {
byte[] privateKey = HashCode.fromString("0100000000000000000000000000000000000000000000000000000000000000".toLowerCase()).asBytes();
byte[] message = HashCode.fromString("01234567".toLowerCase()).asBytes();
byte[] signature = HashCode.fromString("13e58e88f3df9e06637d2d5bbb814c028e3ba135494530b9d3b120bdb31168d62c70a37ae9cfba816fe6038ee1ce2fb521b95c4a91c7ff0bb1dd2e67733f2b0d".toLowerCase()).asBytes();
byte[] publicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
Qortal25519Extras.generatePublicKey(privateKey, 0, publicKey, 0);
assertTrue(verifyAggregated(publicKey, signature, message));
}
@Test
public void testSimpleSignAndVerify() {
byte[] privateKey = HashCode.fromString("0100000000000000000000000000000000000000000000000000000000000000".toLowerCase()).asBytes();
byte[] message = HashCode.fromString("01234567".toLowerCase()).asBytes();
byte[] signature = signForAggregation(privateKey, message);
byte[] publicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
Qortal25519Extras.generatePublicKey(privateKey, 0, publicKey, 0);
assertTrue(verifyAggregated(publicKey, signature, message));
}
@Test
public void testSimpleAggregate() {
List<OnlineAccountData> onlineAccounts = generateOnlineAccounts(1);
byte[] aggregatePublicKey = aggregatePublicKeys(onlineAccounts.stream().map(OnlineAccountData::getPublicKey).collect(Collectors.toUnmodifiableList()));
System.out.printf("Aggregate public key: %s%n", HashCode.fromBytes(aggregatePublicKey));
byte[] aggregateSignature = aggregateSignatures(onlineAccounts.stream().map(OnlineAccountData::getSignature).collect(Collectors.toUnmodifiableList()));
System.out.printf("Aggregate signature: %s%n", HashCode.fromBytes(aggregateSignature));
OnlineAccountData onlineAccount = onlineAccounts.get(0);
assertArrayEquals(String.format("expected: %s, actual: %s", HashCode.fromBytes(onlineAccount.getPublicKey()), HashCode.fromBytes(aggregatePublicKey)), onlineAccount.getPublicKey(), aggregatePublicKey);
assertArrayEquals(String.format("expected: %s, actual: %s", HashCode.fromBytes(onlineAccount.getSignature()), HashCode.fromBytes(aggregateSignature)), onlineAccount.getSignature(), aggregateSignature);
// This is the crucial test:
long timestamp = onlineAccount.getTimestamp();
byte[] timestampBytes = Longs.toByteArray(timestamp);
assertTrue(verifyAggregated(aggregatePublicKey, aggregateSignature, timestampBytes));
}
@Test
public void testMultipleAggregate() {
List<OnlineAccountData> onlineAccounts = generateOnlineAccounts(5000);
byte[] aggregatePublicKey = aggregatePublicKeys(onlineAccounts.stream().map(OnlineAccountData::getPublicKey).collect(Collectors.toUnmodifiableList()));
System.out.printf("Aggregate public key: %s%n", HashCode.fromBytes(aggregatePublicKey));
byte[] aggregateSignature = aggregateSignatures(onlineAccounts.stream().map(OnlineAccountData::getSignature).collect(Collectors.toUnmodifiableList()));
System.out.printf("Aggregate signature: %s%n", HashCode.fromBytes(aggregateSignature));
OnlineAccountData onlineAccount = onlineAccounts.get(0);
// This is the crucial test:
long timestamp = onlineAccount.getTimestamp();
byte[] timestampBytes = Longs.toByteArray(timestamp);
assertTrue(verifyAggregated(aggregatePublicKey, aggregateSignature, timestampBytes));
}
private List<OnlineAccountData> generateOnlineAccounts(int numAccounts) {
List<OnlineAccountData> onlineAccounts = new ArrayList<>();
long timestamp = System.currentTimeMillis();
byte[] timestampBytes = Longs.toByteArray(timestamp);
for (int a = 0; a < numAccounts; ++a) {
byte[] privateKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
SECURE_RANDOM.nextBytes(privateKey);
byte[] publicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
Qortal25519Extras.generatePublicKey(privateKey, 0, publicKey, 0);
byte[] signature = signForAggregation(privateKey, timestampBytes);
onlineAccounts.add(new OnlineAccountData(timestamp, signature, publicKey));
}
return onlineAccounts;
}
}
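The aggregation tests above show that a single verifyAggregated call over the aggregated public key and aggregated signature stands in for verifying every online account separately. A brief sketch of that equivalence, reusing only the helpers exercised above (signForAggregation, aggregatePublicKeys, aggregateSignatures, verifyAggregated); an illustrative sketch rather than part of the test class above:

import static org.junit.Assert.assertTrue;

import com.google.common.primitives.Longs;
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import org.junit.Test;
import org.qortal.crypto.Qortal25519Extras;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.transform.Transformer;

public class AggregateVerificationSketch extends Qortal25519Extras {
    private static final SecureRandom SECURE_RANDOM = new SecureRandom();

    @Test
    public void individualAndAggregateVerificationAgree() {
        long timestamp = System.currentTimeMillis();
        byte[] message = Longs.toByteArray(timestamp);

        List<OnlineAccountData> accounts = new ArrayList<>();
        for (int i = 0; i < 3; ++i) {
            byte[] privateKey = new byte[Transformer.PUBLIC_KEY_LENGTH]; // 32 bytes, as in the tests above
            SECURE_RANDOM.nextBytes(privateKey);
            byte[] publicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
            Qortal25519Extras.generatePublicKey(privateKey, 0, publicKey, 0);
            accounts.add(new OnlineAccountData(timestamp, signForAggregation(privateKey, message), publicKey));
        }

        // One verification per account...
        for (OnlineAccountData account : accounts)
            assertTrue(verifyAggregated(account.getPublicKey(), account.getSignature(), message));

        // ...or a single verification of the aggregated key and signature
        byte[] aggregatePublicKey = aggregatePublicKeys(accounts.stream().map(OnlineAccountData::getPublicKey).collect(Collectors.toUnmodifiableList()));
        byte[] aggregateSignature = aggregateSignatures(accounts.stream().map(OnlineAccountData::getSignature).collect(Collectors.toUnmodifiableList()));
        assertTrue(verifyAggregated(aggregatePublicKey, aggregateSignature, message));
    }
}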

View File

@@ -0,0 +1,165 @@
package org.qortal.test;
import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.data.transaction.PaymentTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.test.common.Common;
import org.qortal.test.common.TransactionUtils;
import org.qortal.test.common.transaction.TestTransaction;
import org.qortal.transaction.Transaction;
import java.util.Random;
import static org.junit.Assert.assertEquals;
public class TransactionReferenceTests extends Common {
@Before
public void beforeTest() throws DataException {
Common.useDefaultSettings();
}
@Test
public void testInvalidRandomReferenceBeforeFeatureTrigger() throws DataException {
Random random = new Random();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount alice = Common.getTestAccount(repository, "alice");
byte[] randomPrivateKey = new byte[32];
random.nextBytes(randomPrivateKey);
PrivateKeyAccount recipient = new PrivateKeyAccount(repository, randomPrivateKey);
// Create payment transaction data
TransactionData paymentTransactionData = new PaymentTransactionData(TestTransaction.generateBase(alice), recipient.getAddress(), 100000L);
// Set random reference
byte[] randomReference = new byte[64];
random.nextBytes(randomReference);
paymentTransactionData.setReference(randomReference);
Transaction paymentTransaction = Transaction.fromData(repository, paymentTransactionData);
// Transaction should be invalid due to random reference
Transaction.ValidationResult validationResult = paymentTransaction.isValidUnconfirmed();
assertEquals(Transaction.ValidationResult.INVALID_REFERENCE, validationResult);
}
}
@Test
public void testValidRandomReferenceAfterFeatureTrigger() throws DataException {
Common.useSettings("test-settings-v2-disable-reference.json");
Random random = new Random();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount alice = Common.getTestAccount(repository, "alice");
byte[] randomPrivateKey = new byte[32];
random.nextBytes(randomPrivateKey);
PrivateKeyAccount recipient = new PrivateKeyAccount(repository, randomPrivateKey);
// Create payment transaction data
TransactionData paymentTransactionData = new PaymentTransactionData(TestTransaction.generateBase(alice), recipient.getAddress(), 100000L);
// Set random reference
byte[] randomReference = new byte[64];
random.nextBytes(randomReference);
paymentTransactionData.setReference(randomReference);
Transaction paymentTransaction = Transaction.fromData(repository, paymentTransactionData);
// Transaction should be valid, even with random reference, because reference checking is now disabled
Transaction.ValidationResult validationResult = paymentTransaction.isValidUnconfirmed();
assertEquals(Transaction.ValidationResult.OK, validationResult);
TransactionUtils.signAndImportValid(repository, paymentTransactionData, alice);
}
}
@Test
public void testNullReferenceAfterFeatureTrigger() throws DataException {
Common.useSettings("test-settings-v2-disable-reference.json");
Random random = new Random();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount alice = Common.getTestAccount(repository, "alice");
byte[] randomPrivateKey = new byte[32];
random.nextBytes(randomPrivateKey);
PrivateKeyAccount recipient = new PrivateKeyAccount(repository, randomPrivateKey);
// Create payment transaction data
TransactionData paymentTransactionData = new PaymentTransactionData(TestTransaction.generateBase(alice), recipient.getAddress(), 100000L);
// Set null reference
paymentTransactionData.setReference(null);
Transaction paymentTransaction = Transaction.fromData(repository, paymentTransactionData);
// Transaction should be invalid, as we require a non-null reference
Transaction.ValidationResult validationResult = paymentTransaction.isValidUnconfirmed();
assertEquals(Transaction.ValidationResult.INVALID_REFERENCE, validationResult);
}
}
@Test
public void testShortReferenceAfterFeatureTrigger() throws DataException {
Common.useSettings("test-settings-v2-disable-reference.json");
Random random = new Random();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount alice = Common.getTestAccount(repository, "alice");
byte[] randomPrivateKey = new byte[32];
random.nextBytes(randomPrivateKey);
PrivateKeyAccount recipient = new PrivateKeyAccount(repository, randomPrivateKey);
// Create payment transaction data
TransactionData paymentTransactionData = new PaymentTransactionData(TestTransaction.generateBase(alice), recipient.getAddress(), 100000L);
// Set a 63-byte reference (too short)
byte[] randomByte = new byte[63];
random.nextBytes(randomByte);
paymentTransactionData.setReference(randomByte);
Transaction paymentTransaction = Transaction.fromData(repository, paymentTransactionData);
// Transaction should be invalid, as reference isn't long enough
Transaction.ValidationResult validationResult = paymentTransaction.isValidUnconfirmed();
assertEquals(Transaction.ValidationResult.INVALID_REFERENCE, validationResult);
}
}
@Test
public void testLongReferenceAfterFeatureTrigger() throws DataException {
Common.useSettings("test-settings-v2-disable-reference.json");
Random random = new Random();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount alice = Common.getTestAccount(repository, "alice");
byte[] randomPrivateKey = new byte[32];
random.nextBytes(randomPrivateKey);
PrivateKeyAccount recipient = new PrivateKeyAccount(repository, randomPrivateKey);
// Create payment transaction data
TransactionData paymentTransactionData = new PaymentTransactionData(TestTransaction.generateBase(alice), recipient.getAddress(), 100000L);
// Set a 65-byte reference (too long)
byte[] randomByte = new byte[65];
random.nextBytes(randomByte);
paymentTransactionData.setReference(randomByte);
Transaction paymentTransaction = Transaction.fromData(repository, paymentTransactionData);
// Transaction should be invalid, as reference is too long
Transaction.ValidationResult validationResult = paymentTransaction.isValidUnconfirmed();
assertEquals(Transaction.ValidationResult.INVALID_REFERENCE, validationResult);
}
}
}
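Taken together, these tests pin down the reference rule once the "disableReferenceTimestamp" feature trigger is active: a reference must still be present and exactly 64 bytes long, but it no longer has to match the account's last reference. A small sketch of that rule; the method and flag names here (checkReference, referenceCheckDisabled) are illustrative rather than the node's actual implementation:

import java.util.Arrays;
import org.qortal.transaction.Transaction;

class ReferenceRuleSketch {
    static final int REFERENCE_LENGTH = 64;

    // Mirrors the behaviour exercised by the tests above
    static Transaction.ValidationResult checkReference(byte[] reference, byte[] lastReference, boolean referenceCheckDisabled) {
        // Null or wrong-length references stay invalid in every case
        if (reference == null || reference.length != REFERENCE_LENGTH)
            return Transaction.ValidationResult.INVALID_REFERENCE;

        // Before the feature trigger, the reference must also match the account's last reference
        if (!referenceCheckDisabled && !Arrays.equals(reference, lastReference))
            return Transaction.ValidationResult.INVALID_REFERENCE;

        return Transaction.ValidationResult.OK;
    }
}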

View File

@@ -6,6 +6,7 @@ import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.account.PublicKeyAccount;
import org.qortal.crypto.Crypto;
import org.qortal.utils.Base58;
public class RewardShareKeys {
@@ -28,7 +29,7 @@ public class RewardShareKeys {
PublicKeyAccount recipientAccount = new PublicKeyAccount(null, args.length > 1 ? Base58.decode(args[1]) : minterAccount.getPublicKey());
byte[] rewardSharePrivateKey = minterAccount.getRewardSharePrivateKey(recipientAccount.getPublicKey());
byte[] rewardSharePublicKey = PrivateKeyAccount.toPublicKey(rewardSharePrivateKey);
byte[] rewardSharePublicKey = Crypto.toPublicKey(rewardSharePrivateKey);
System.out.println(String.format("Minter account: %s", minterAccount.getAddress()));
System.out.println(String.format("Minter's public key: %s", Base58.encode(minterAccount.getPublicKey())));

View File

@@ -6,6 +6,7 @@ import java.util.HashMap;
import java.util.Map;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.crypto.Crypto;
import org.qortal.data.transaction.BaseTransactionData;
import org.qortal.data.transaction.PaymentTransactionData;
import org.qortal.data.transaction.RewardShareTransactionData;
@@ -40,12 +41,15 @@ public class AccountUtils {
public static TransactionData createRewardShare(Repository repository, String minter, String recipient, int sharePercent) throws DataException {
PrivateKeyAccount mintingAccount = Common.getTestAccount(repository, minter);
PrivateKeyAccount recipientAccount = Common.getTestAccount(repository, recipient);
return createRewardShare(repository, mintingAccount, recipientAccount, sharePercent);
}
public static TransactionData createRewardShare(Repository repository, PrivateKeyAccount mintingAccount, PrivateKeyAccount recipientAccount, int sharePercent) throws DataException {
byte[] reference = mintingAccount.getLastReference();
long timestamp = repository.getTransactionRepository().fromSignature(reference).getTimestamp() + 1;
byte[] rewardSharePrivateKey = mintingAccount.getRewardSharePrivateKey(recipientAccount.getPublicKey());
byte[] rewardSharePublicKey = PrivateKeyAccount.toPublicKey(rewardSharePrivateKey);
byte[] rewardSharePublicKey = Crypto.toPublicKey(rewardSharePrivateKey);
BaseTransactionData baseTransactionData = new BaseTransactionData(timestamp, txGroupId, reference, mintingAccount.getPublicKey(), fee, null);
TransactionData transactionData = new RewardShareTransactionData(baseTransactionData, recipientAccount.getAddress(), rewardSharePublicKey, sharePercent);
@@ -65,6 +69,15 @@ public class AccountUtils {
return rewardSharePrivateKey;
}
public static byte[] rewardShare(Repository repository, PrivateKeyAccount minterAccount, PrivateKeyAccount recipientAccount, int sharePercent) throws DataException {
TransactionData transactionData = createRewardShare(repository, minterAccount, recipientAccount, sharePercent);
TransactionUtils.signAndMint(repository, transactionData, minterAccount);
byte[] rewardSharePrivateKey = minterAccount.getRewardSharePrivateKey(recipientAccount.getPublicKey());
return rewardSharePrivateKey;
}
public static Map<String, Map<Long, Long>> getBalances(Repository repository, long... assetIds) throws DataException {
Map<String, Map<Long, Long>> balances = new HashMap<>();

View File

@@ -7,6 +7,7 @@ import java.math.BigDecimal;
import java.net.URL;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.SecureRandom;
import java.security.Security;
import java.util.ArrayList;
import java.util.Collections;
@@ -25,6 +26,7 @@ import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.block.BlockChain;
import org.qortal.data.account.AccountBalanceData;
import org.qortal.data.asset.AssetData;
@@ -61,6 +63,7 @@ public class Common {
public static final String testSettingsFilename = "test-settings-v2.json";
public static boolean shouldRetainRepositoryAfterTest = false;
static {
// Load/check settings, which potentially sets up blockchain config, etc.
@@ -110,6 +113,12 @@ public class Common {
return testAccountsByName.values().stream().map(account -> new TestAccount(repository, account)).collect(Collectors.toList());
}
public static PrivateKeyAccount generateRandomSeedAccount(Repository repository) {
byte[] seed = new byte[32];
new SecureRandom().nextBytes(seed);
return new PrivateKeyAccount(repository, seed);
}
public static void useSettingsAndDb(String settingsFilename, boolean dbInMemory) throws DataException {
closeRepository();
@@ -126,6 +135,7 @@ public class Common {
public static void useSettings(String settingsFilename) throws DataException {
Common.useSettingsAndDb(settingsFilename, true);
setShouldRetainRepositoryAfterTest(false);
}
public static void useDefaultSettings() throws DataException {
@@ -207,7 +217,16 @@ public class Common {
RepositoryManager.setRepositoryFactory(repositoryFactory);
}
public static void setShouldRetainRepositoryAfterTest(boolean shouldRetain) {
shouldRetainRepositoryAfterTest = shouldRetain;
}
public static void deleteTestRepository() throws DataException {
if (shouldRetainRepositoryAfterTest) {
// Don't delete if we've requested to keep the db intact
return;
}
// Delete repository directory if exists
Path repositoryPath = Paths.get(Settings.getInstance().getRepositoryPath());
try {

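Common gains two small helpers in this hunk: generateRandomSeedAccount, which builds a throwaway account from a random 32-byte seed, and setShouldRetainRepositoryAfterTest, which deleteTestRepository consults before removing the database. A brief, illustrative-only usage sketch:

import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.test.common.Common;

public class CommonHelpersSketch extends Common {
    @Test
    public void sketchNewHelpers() throws DataException {
        Common.useDefaultSettings();

        // Ask deleteTestRepository() to leave the repository in place after this test
        Common.setShouldRetainRepositoryAfterTest(true);

        try (final Repository repository = RepositoryManager.getRepository()) {
            // Throwaway account backed by a random 32-byte seed
            PrivateKeyAccount randomAccount = Common.generateRandomSeedAccount(repository);
            System.out.println(randomAccount.getAddress());
        }
    }
}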
View File

@@ -75,7 +75,7 @@ public class GetWalletTransactions {
System.out.println(String.format("Found %d transaction%s", transactions.size(), (transactions.size() != 1 ? "s" : "")));
for (SimpleTransaction transaction : transactions.stream().sorted(Comparator.comparingInt(SimpleTransaction::getTimestamp)).collect(Collectors.toList()))
for (SimpleTransaction transaction : transactions.stream().sorted(Comparator.comparingLong(SimpleTransaction::getTimestamp)).collect(Collectors.toList()))
System.out.println(String.format("%s", transaction));
}

View File

@@ -0,0 +1,63 @@
package org.qortal.test.minting;
import org.junit.Before;
import org.junit.Test;
import org.qortal.block.Block;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.test.common.BlockUtils;
import org.qortal.test.common.Common;
import org.qortal.transform.Transformer;
import org.qortal.utils.NTP;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
public class BlockTimestampTests extends Common {
private static class BlockTimestampDataPoint {
public byte[] minterPublicKey;
public int minterAccountLevel;
public long blockTimestamp;
}
private static final Random RANDOM = new Random();
@Before
public void beforeTest() throws DataException {
Common.useSettings("test-settings-v2-block-timestamps.json");
NTP.setFixedOffset(0L);
}
@Test
public void testTimestamps() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
Block parentBlock = BlockUtils.mintBlock(repository);
BlockData parentBlockData = parentBlock.getBlockData();
// Generate lots of test minters
List<BlockTimestampDataPoint> dataPoints = new ArrayList<>();
for (int i = 0; i < 20; i++) {
BlockTimestampDataPoint dataPoint = new BlockTimestampDataPoint();
dataPoint.minterPublicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
RANDOM.nextBytes(dataPoint.minterPublicKey);
dataPoint.minterAccountLevel = RANDOM.nextInt(5) + 5;
dataPoint.blockTimestamp = Block.calcTimestamp(parentBlockData, dataPoint.minterPublicKey, dataPoint.minterAccountLevel);
System.out.printf("[%d] level %d, blockTimestamp %d - parentTimestamp %d = %d%n",
i,
dataPoint.minterAccountLevel,
dataPoint.blockTimestamp,
parentBlockData.getTimestamp(),
dataPoint.blockTimestamp - parentBlockData.getTimestamp()
);
}
}
}
}

View File

@@ -177,4 +177,143 @@ public class RewardShareTests extends Common {
}
}
@Test
public void testCreateRewardSharesBeforeReduction() throws DataException {
final int sharePercent = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount dilbertAccount = Common.getTestAccount(repository, "dilbert");
// Create 6 reward shares
for (int i=0; i<6; i++) {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
}
// 7th reward share should fail because we've reached the limit (and we're not yet requiring a self share)
AssertionError assertionError = null;
try {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
} catch (AssertionError e) {
assertionError = e;
}
assertNotNull("Transaction should be invalid", assertionError);
assertTrue("Transaction should be invalid due to reaching maximum reward shares", assertionError.getMessage().contains("MAXIMUM_REWARD_SHARES"));
}
}
@Test
public void testCreateRewardSharesAfterReduction() throws DataException {
Common.useSettings("test-settings-v2-reward-shares.json");
final int sharePercent = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount dilbertAccount = Common.getTestAccount(repository, "dilbert");
// Create 2 reward shares
for (int i=0; i<2; i++) {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
}
// 3rd reward share should fail because we've reached the limit (and we haven't got a self share)
AssertionError assertionError = null;
try {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
} catch (AssertionError e) {
assertionError = e;
}
assertNotNull("Transaction should be invalid", assertionError);
assertTrue("Transaction should be invalid due to reaching maximum reward shares", assertionError.getMessage().contains("MAXIMUM_REWARD_SHARES"));
}
}
@Test
public void testCreateSelfAndRewardSharesAfterReduction() throws DataException {
Common.useSettings("test-settings-v2-reward-shares.json");
final int sharePercent = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount dilbertAccount = Common.getTestAccount(repository, "dilbert");
// Create 2 reward shares
for (int i=0; i<2; i++) {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
}
// 3rd reward share should fail because we've reached the limit (and we haven't got a self share)
AssertionError assertionError = null;
try {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
} catch (AssertionError e) {
assertionError = e;
}
assertNotNull("Transaction should be invalid", assertionError);
assertTrue("Transaction should be invalid due to reaching maximum reward shares", assertionError.getMessage().contains("MAXIMUM_REWARD_SHARES"));
// Now create a self share, which should succeed as we have space for it
AccountUtils.rewardShare(repository, dilbertAccount, dilbertAccount, sharePercent);
// 4th reward share should fail because we've reached the limit (including the self share)
assertionError = null;
try {
AccountUtils.rewardShare(repository, dilbertAccount, Common.generateRandomSeedAccount(repository), sharePercent);
} catch (AssertionError e) {
assertionError = e;
}
assertNotNull("Transaction should be invalid", assertionError);
assertTrue("Transaction should be invalid due to reaching maximum reward shares", assertionError.getMessage().contains("MAXIMUM_REWARD_SHARES"));
}
}
@Test
public void testCreateFounderRewardSharesBeforeReduction() throws DataException {
final int sharePercent = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount aliceFounderAccount = Common.getTestAccount(repository, "alice");
// Create 5 reward shares (not 6, because alice already starts with a self reward share in the genesis block)
for (int i=0; i<5; i++) {
AccountUtils.rewardShare(repository, aliceFounderAccount, Common.generateRandomSeedAccount(repository), sharePercent);
}
// 6th reward share should fail
AssertionError assertionError = null;
try {
AccountUtils.rewardShare(repository, aliceFounderAccount, Common.generateRandomSeedAccount(repository), sharePercent);
} catch (AssertionError e) {
assertionError = e;
}
assertNotNull("Transaction should be invalid", assertionError);
assertTrue("Transaction should be invalid due to reaching maximum reward shares", assertionError.getMessage().contains("MAXIMUM_REWARD_SHARES"));
}
}
@Test
public void testCreateFounderRewardSharesAfterReduction() throws DataException {
Common.useSettings("test-settings-v2-reward-shares.json");
final int sharePercent = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount aliceFounderAccount = Common.getTestAccount(repository, "alice");
// Create 5 reward shares (not 6, because alice already starts with a self reward share in the genesis block)
for (int i=0; i<5; i++) {
AccountUtils.rewardShare(repository, aliceFounderAccount, Common.generateRandomSeedAccount(repository), sharePercent);
}
// 6th reward share should fail
AssertionError assertionError = null;
try {
AccountUtils.rewardShare(repository, aliceFounderAccount, Common.generateRandomSeedAccount(repository), sharePercent);
} catch (AssertionError e) {
assertionError = e;
}
assertNotNull("Transaction should be invalid", assertionError);
assertTrue("Transaction should be invalid due to reaching maximum reward shares", assertionError.getMessage().contains("MAXIMUM_REWARD_SHARES"));
}
}
}

View File

@@ -3,10 +3,9 @@ package org.qortal.test.minting;
import static org.junit.Assert.*;
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.*;
import org.apache.commons.lang3.reflect.FieldUtils;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.junit.After;
@@ -702,6 +701,15 @@ public class RewardTests extends Common {
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance+expectedLevel7And8Reward);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance+expectedLevel7And8Reward);
// Orphan and ensure balances return to their previous values
BlockUtils.orphanBlocks(repository, 1);
// Validate the balances
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance);
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance);
}
}
@@ -787,6 +795,348 @@ public class RewardTests extends Common {
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance+expectedLevel9And10Reward);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance+expectedLevel9And10Reward);
// Orphan and ensure balances return to their previous values
BlockUtils.orphanBlocks(repository, 1);
// Validate the balances
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance);
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance);
}
}
/** Test rewards for level 7 and 8 accounts, when the tier doesn't yet have enough minters in it */
@Test
public void testLevel7And8RewardsPreActivation() throws DataException, IllegalAccessException {
Common.useSettings("test-settings-v2-reward-levels.json");
// Set minAccountsToActivateShareBin to 3 so that share bins 7-8 and 9-10 are considered inactive
FieldUtils.writeField(BlockChain.getInstance(), "minAccountsToActivateShareBin", 3, true);
try (final Repository repository = RepositoryManager.getRepository()) {
List<Integer> cumulativeBlocksByLevel = BlockChain.getInstance().getCumulativeBlocksByLevel();
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
// Alice self share online
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Bob self-share NOT online
// Chloe self share online
byte[] chloeRewardSharePrivateKey = AccountUtils.rewardShare(repository, "chloe", "chloe", 0);
PrivateKeyAccount chloeRewardShareAccount = new PrivateKeyAccount(repository, chloeRewardSharePrivateKey);
mintingAndOnlineAccounts.add(chloeRewardShareAccount);
// Dilbert self share online
byte[] dilbertRewardSharePrivateKey = AccountUtils.rewardShare(repository, "dilbert", "dilbert", 0);
PrivateKeyAccount dilbertRewardShareAccount = new PrivateKeyAccount(repository, dilbertRewardSharePrivateKey);
mintingAndOnlineAccounts.add(dilbertRewardShareAccount);
// Mint enough blocks to bump testAccount levels to 7 and 8
final int minterBlocksNeeded = cumulativeBlocksByLevel.get(8) - 20; // 20 blocks before level 8, so that the test accounts reach the correct levels
for (int bc = 0; bc < minterBlocksNeeded; ++bc)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure that the levels are as we expect
assertEquals(7, (int) Common.getTestAccount(repository, "alice").getLevel());
assertEquals(1, (int) Common.getTestAccount(repository, "bob").getLevel());
assertEquals(7, (int) Common.getTestAccount(repository, "chloe").getLevel());
assertEquals(8, (int) Common.getTestAccount(repository, "dilbert").getLevel());
// Now that everyone is at level 7 or 8 (except Bob who has only just started minting, so is at level 1), we can capture initial balances
Map<String, Map<Long, Long>> initialBalances = AccountUtils.getBalances(repository, Asset.QORT, Asset.LEGACY_QORA, Asset.QORT_FROM_QORA);
final long aliceInitialBalance = initialBalances.get("alice").get(Asset.QORT);
final long bobInitialBalance = initialBalances.get("bob").get(Asset.QORT);
final long chloeInitialBalance = initialBalances.get("chloe").get(Asset.QORT);
final long dilbertInitialBalance = initialBalances.get("dilbert").get(Asset.QORT);
// Mint a block
final long blockReward = BlockUtils.getNextBlockReward(repository);
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure we are using the correct block reward value
assertEquals(100000000L, blockReward);
/*
* Alice, Chloe, and Dilbert are 'online'.
* Chloe is level 7; Dilbert is level 8.
* One founder online (Alice, who is also level 7).
* No legacy QORA holders.
*
* The level 7 and 8 share bin is not yet activated, so its rewards are added to the level 5 and 6 share bin.
* There are no level 5 or 6 accounts online.
* Chloe and Dilbert should receive equal shares of the 35% block reward for levels 5 to 8.
* Alice should receive the remainder (65%).
*/
final int level5To8SharePercent = 35_00; // 35% (combined 15% and 20%)
final long level5To8ShareAmount = (blockReward * level5To8SharePercent) / 100L / 100L;
final long expectedLevel5To8Reward = level5To8ShareAmount / 2; // The reward is split between Chloe and Dilbert
final long expectedFounderReward = blockReward - level5To8ShareAmount; // Alice should receive the remainder
// Validate the balances
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance+expectedFounderReward);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance); // Bob not online so his balance remains the same
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance+expectedLevel5To8Reward);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance+expectedLevel5To8Reward);
// Orphan and ensure balances return to their previous values
BlockUtils.orphanBlocks(repository, 1);
// Validate the balances
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance);
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance);
}
}
/** Test rewards for level 9 and 10 accounts, when the tier doesn't yet have enough minters in it.
* Tier 7-8 isn't activated either, so the rewards and minters are all moved to tier 5-6. */
@Test
public void testLevel9And10RewardsPreActivation() throws DataException, IllegalAccessException {
Common.useSettings("test-settings-v2-reward-levels.json");
// Set minAccountsToActivateShareBin to 3 so that share bins 7-8 and 9-10 are considered inactive
FieldUtils.writeField(BlockChain.getInstance(), "minAccountsToActivateShareBin", 3, true);
try (final Repository repository = RepositoryManager.getRepository()) {
List<Integer> cumulativeBlocksByLevel = BlockChain.getInstance().getCumulativeBlocksByLevel();
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
// Alice self share online
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Bob self-share not initially online
// Chloe self share online
byte[] chloeRewardSharePrivateKey = AccountUtils.rewardShare(repository, "chloe", "chloe", 0);
PrivateKeyAccount chloeRewardShareAccount = new PrivateKeyAccount(repository, chloeRewardSharePrivateKey);
mintingAndOnlineAccounts.add(chloeRewardShareAccount);
// Dilbert self share online
byte[] dilbertRewardSharePrivateKey = AccountUtils.rewardShare(repository, "dilbert", "dilbert", 0);
PrivateKeyAccount dilbertRewardShareAccount = new PrivateKeyAccount(repository, dilbertRewardSharePrivateKey);
mintingAndOnlineAccounts.add(dilbertRewardShareAccount);
// Mint enough blocks to bump testAccount levels to 9 and 10
final int minterBlocksNeeded = cumulativeBlocksByLevel.get(10) - 20; // 20 blocks before level 10, so that the test accounts reach the correct levels
for (int bc = 0; bc < minterBlocksNeeded; ++bc)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Bob self-share now comes online
byte[] bobRewardSharePrivateKey = AccountUtils.rewardShare(repository, "bob", "bob", 0);
PrivateKeyAccount bobRewardShareAccount = new PrivateKeyAccount(repository, bobRewardSharePrivateKey);
mintingAndOnlineAccounts.add(bobRewardShareAccount);
// Ensure that the levels are as we expect
assertEquals(9, (int) Common.getTestAccount(repository, "alice").getLevel());
assertEquals(1, (int) Common.getTestAccount(repository, "bob").getLevel());
assertEquals(9, (int) Common.getTestAccount(repository, "chloe").getLevel());
assertEquals(10, (int) Common.getTestAccount(repository, "dilbert").getLevel());
// Now that everyone is at level 9 or 10 (except Bob, who has only just started minting and so is at level 1), we can capture initial balances
Map<String, Map<Long, Long>> initialBalances = AccountUtils.getBalances(repository, Asset.QORT, Asset.LEGACY_QORA, Asset.QORT_FROM_QORA);
final long aliceInitialBalance = initialBalances.get("alice").get(Asset.QORT);
final long bobInitialBalance = initialBalances.get("bob").get(Asset.QORT);
final long chloeInitialBalance = initialBalances.get("chloe").get(Asset.QORT);
final long dilbertInitialBalance = initialBalances.get("dilbert").get(Asset.QORT);
// Mint a block
final long blockReward = BlockUtils.getNextBlockReward(repository);
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure we are using the correct block reward value
assertEquals(100000000L, blockReward);
/*
* Alice, Bob, Chloe, and Dilbert are 'online'.
* Bob is level 1; Chloe is level 9; Dilbert is level 10.
* One founder online (Alice, who is also level 9).
* No legacy QORA holders.
*
* Levels 7+8 and 9+10 are not yet activated, so their rewards are added to the level 5 and 6 share bin.
* There are no level 5-8 accounts online.
* Chloe and Dilbert should receive equal shares of the combined 60% block reward for levels 5 to 10.
* Bob should receive the 5% share for levels 1 and 2; Alice should receive the remainder (35%).
*/
final int level1And2SharePercent = 5_00; // 5%
final int level5To10SharePercent = 60_00; // 60% (combined 15%, 20%, and 25%)
final long level1And2ShareAmount = (blockReward * level1And2SharePercent) / 100L / 100L;
final long level5To10ShareAmount = (blockReward * level5To10SharePercent) / 100L / 100L;
final long expectedLevel1And2Reward = level1And2ShareAmount; // The reward is given entirely to Bob
final long expectedLevel5To10Reward = level5To10ShareAmount / 2; // The reward is split between Chloe and Dilbert
final long expectedFounderReward = blockReward - level1And2ShareAmount - level5To10ShareAmount; // Alice should receive the remainder
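// Worked example, using the asserted block reward of 100000000 (1 QORT at 8 decimal places):
// level1And2ShareAmount    = 100000000 * 500 / 100 / 100 = 5000000 (5%), paid entirely to Bob
// level5To10ShareAmount    = 100000000 * 6000 / 100 / 100 = 60000000 (60%)
// expectedLevel5To10Reward = 60000000 / 2 = 30000000 each for Chloe and Dilbert
// expectedFounderReward    = 100000000 - 5000000 - 60000000 = 35000000 (35%) for Alice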
// Validate the balances
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance+expectedFounderReward);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance+expectedLevel1And2Reward);
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance+expectedLevel5To10Reward);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance+expectedLevel5To10Reward);
// Orphan and ensure balances return to their previous values
BlockUtils.orphanBlocks(repository, 1);
// Validate the balances
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance);
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance);
}
}
/** Test rewards for level 7 and 8 accounts, when the tier reaches the minimum number of accounts */
@Test
public void testLevel7And8RewardsPreAndPostActivation() throws DataException, IllegalAccessException {
Common.useSettings("test-settings-v2-reward-levels.json");
// Set minAccountsToActivateShareBin to 2 so that share bins 7-8 and 9-10 are considered inactive at first
FieldUtils.writeField(BlockChain.getInstance(), "minAccountsToActivateShareBin", 2, true);
try (final Repository repository = RepositoryManager.getRepository()) {
List<Integer> cumulativeBlocksByLevel = BlockChain.getInstance().getCumulativeBlocksByLevel();
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
// Alice self share online
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Bob self-share NOT online
// Chloe self share online
byte[] chloeRewardSharePrivateKey = AccountUtils.rewardShare(repository, "chloe", "chloe", 0);
PrivateKeyAccount chloeRewardShareAccount = new PrivateKeyAccount(repository, chloeRewardSharePrivateKey);
mintingAndOnlineAccounts.add(chloeRewardShareAccount);
// Dilbert self share online
byte[] dilbertRewardSharePrivateKey = AccountUtils.rewardShare(repository, "dilbert", "dilbert", 0);
PrivateKeyAccount dilbertRewardShareAccount = new PrivateKeyAccount(repository, dilbertRewardSharePrivateKey);
mintingAndOnlineAccounts.add(dilbertRewardShareAccount);
// Mint enough blocks to bump two of the testAccount levels to 7
final int minterBlocksNeeded = cumulativeBlocksByLevel.get(7) - 12; // 12 blocks before level 7, so that Dilbert and Alice have reached level 7, but Chloe will reach it within the next two blocks
for (int bc = 0; bc < minterBlocksNeeded; ++bc)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure that the levels are as we expect
assertEquals(7, (int) Common.getTestAccount(repository, "alice").getLevel());
assertEquals(1, (int) Common.getTestAccount(repository, "bob").getLevel());
assertEquals(6, (int) Common.getTestAccount(repository, "chloe").getLevel());
assertEquals(7, (int) Common.getTestAccount(repository, "dilbert").getLevel());
// Now that dilbert has reached level 7, we can capture initial balances
Map<String, Map<Long, Long>> initialBalances = AccountUtils.getBalances(repository, Asset.QORT, Asset.LEGACY_QORA, Asset.QORT_FROM_QORA);
final long aliceInitialBalance = initialBalances.get("alice").get(Asset.QORT);
final long bobInitialBalance = initialBalances.get("bob").get(Asset.QORT);
final long chloeInitialBalance = initialBalances.get("chloe").get(Asset.QORT);
final long dilbertInitialBalance = initialBalances.get("dilbert").get(Asset.QORT);
// Mint a block
long blockReward = BlockUtils.getNextBlockReward(repository);
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure we are using the correct block reward value
assertEquals(100000000L, blockReward);
/*
* Alice, Chloe, and Dilbert are 'online'.
* Chloe is level 6; Dilbert is level 7.
* One founder online (Alice, who is also level 7).
* No legacy QORA holders.
*
* Levels 7 and 8 are not yet activated, so their rewards are added to the level 5 and 6 share bin.
* Chloe (level 6) is the only account already in the level 5 and 6 share bin.
* Chloe and Dilbert should receive equal shares of the 35% block reward for levels 5 to 8.
* Alice should receive the remainder (65%).
*/
final int level5To8SharePercent = 35_00; // 35% (combined 15% and 20%)
final long level5To8ShareAmount = (blockReward * level5To8SharePercent) / 100L / 100L;
final long expectedLevel5To8Reward = level5To8ShareAmount / 2; // The reward is split between Chloe and Dilbert
final long expectedFounderReward = blockReward - level5To8ShareAmount; // Alice should receive the remainder
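// Worked example, as in the pre-activation test above: 35% of 100000000 is 35000000, split as 17500000 each
// for Chloe and Dilbert, leaving 65000000 (65%) for Alice.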
// Validate the balances
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance+expectedFounderReward);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance); // Bob not online so his balance remains the same
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance+expectedLevel5To8Reward);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance+expectedLevel5To8Reward);
// Ensure that the levels are as we expect
assertEquals(7, (int) Common.getTestAccount(repository, "alice").getLevel());
assertEquals(1, (int) Common.getTestAccount(repository, "bob").getLevel());
assertEquals(6, (int) Common.getTestAccount(repository, "chloe").getLevel());
assertEquals(7, (int) Common.getTestAccount(repository, "dilbert").getLevel());
// Capture pre-activation balances
Map<String, Map<Long, Long>> preActivationBalances = AccountUtils.getBalances(repository, Asset.QORT, Asset.LEGACY_QORA, Asset.QORT_FROM_QORA);
final long alicePreActivationBalance = preActivationBalances.get("alice").get(Asset.QORT);
final long bobPreActivationBalance = preActivationBalances.get("bob").get(Asset.QORT);
final long chloePreActivationBalance = preActivationBalances.get("chloe").get(Asset.QORT);
final long dilbertPreActivationBalance = preActivationBalances.get("dilbert").get(Asset.QORT);
// Mint another block
blockReward = BlockUtils.getNextBlockReward(repository);
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Ensure that the levels are as we expect (chloe has now increased to level 7; level 7-8 is now activated)
assertEquals(7, (int) Common.getTestAccount(repository, "alice").getLevel());
assertEquals(1, (int) Common.getTestAccount(repository, "bob").getLevel());
assertEquals(7, (int) Common.getTestAccount(repository, "chloe").getLevel());
assertEquals(7, (int) Common.getTestAccount(repository, "dilbert").getLevel());
/*
* Alice, Chloe, and Dilbert are 'online'.
* Chloe and Dilbert are level 7.
* One founder online (Alice, who is also level 7).
* No legacy QORA holders.
*
* Levels 7 and 8 are now activated, so their rewards are paid out in the normal way.
* There are no level 5 or 6 accounts online.
* Chloe and Dilbert should receive equal shares of the 20% block reward for levels 7 to 8.
* Alice should receive the remainder (80%).
*/
final int level7To8SharePercent = 20_00; // 20%
final long level7To8ShareAmount = (blockReward * level7To8SharePercent) / 100L / 100L;
final long expectedLevel7To8Reward = level7To8ShareAmount / 2; // The reward is split between Chloe and Dilbert
final long newExpectedFounderReward = blockReward - level7To8ShareAmount; // Alice should receive the remainder
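// Worked example, assuming the block reward is still 100000000 at this height (the test config pays 1 QORT from height 21):
// level7To8ShareAmount     = 100000000 * 2000 / 100 / 100 = 20000000 (20%)
// expectedLevel7To8Reward  = 20000000 / 2 = 10000000 each for Chloe and Dilbert
// newExpectedFounderReward = 100000000 - 20000000 = 80000000 (80%) for Alice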
// Validate the balances
AccountUtils.assertBalance(repository, "alice", Asset.QORT, alicePreActivationBalance+newExpectedFounderReward);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobPreActivationBalance); // Bob not online so his balance remains the same
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloePreActivationBalance+expectedLevel7To8Reward);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertPreActivationBalance+expectedLevel7To8Reward);
// Orphan and ensure balances return to their pre-activation values
BlockUtils.orphanBlocks(repository, 1);
// Validate the balances
AccountUtils.assertBalance(repository, "alice", Asset.QORT, alicePreActivationBalance);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobPreActivationBalance);
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloePreActivationBalance);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertPreActivationBalance);
// Orphan again and ensure balances return to their initial values
BlockUtils.orphanBlocks(repository, 1);
// Validate the balances
AccountUtils.assertBalance(repository, "alice", Asset.QORT, aliceInitialBalance);
AccountUtils.assertBalance(repository, "bob", Asset.QORT, bobInitialBalance);
AccountUtils.assertBalance(repository, "chloe", Asset.QORT, chloeInitialBalance);
AccountUtils.assertBalance(repository, "dilbert", Asset.QORT, dilbertInitialBalance);
}
}

View File

@@ -509,6 +509,7 @@ public class IntegrityTests extends Common {
@Ignore("Checks 'live' repository")
@Test
public void testRepository() throws DataException {
Common.setShouldRetainRepositoryAfterTest(true);
Settings.fileInstance("settings.json"); // use 'live' settings
String repositoryUrlTemplate = "jdbc:hsqldb:file:%s" + File.separator + "blockchain;create=false;hsqldb.full_log_replay=true";
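// "create=false" opens only an existing database; "hsqldb.full_log_replay=true" makes HSQLDB replay the full
// transaction log on open - both appropriate when inspecting a 'live' repository rather than a fresh test one.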

View File

@@ -1,22 +1,36 @@
package org.qortal.test.network;
import org.apache.commons.lang3.reflect.FieldUtils;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.block.Block;
import org.qortal.block.BlockChain;
import org.qortal.controller.BlockMinter;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.network.message.*;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.test.common.Common;
import org.qortal.transform.Transformer;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.security.Security;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.*;
public class OnlineAccountsTests {
public class OnlineAccountsTests extends Common {
private static final Random RANDOM = new Random();
static {
@@ -27,6 +41,12 @@ public class OnlineAccountsTests {
Security.insertProviderAt(new BouncyCastleJsseProvider(), 1);
}
@Before
public void beforeTest() throws DataException, IOException {
Common.useSettingsAndDb("test-settings-v2-no-sig-agg.json", false);
NTP.setFixedOffset(Settings.getInstance().getTestNtpOffset());
}
@Test
public void testGetOnlineAccountsV2() throws MessageException {
@@ -111,4 +131,75 @@ public class OnlineAccountsTests {
return onlineAccounts;
}
@Test
public void testOnlineAccountsModulusV1() throws IllegalAccessException, DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Set feature trigger timestamp to MAX long so that it is inactive
FieldUtils.writeField(BlockChain.getInstance(), "onlineAccountsModulusV2Timestamp", Long.MAX_VALUE, true);
List<String> onlineAccountSignatures = new ArrayList<>();
long fakeNTPOffset = 0L;
// Mint a block and store its timestamp
Block block = BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
long lastBlockTimestamp = block.getBlockData().getTimestamp();
// Mint some blocks and keep track of the different online account signatures
for (int i = 0; i < 30; i++) {
block = BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
// Increase NTP fixed offset by the block time, to simulate time passing
long blockTimeDelta = block.getBlockData().getTimestamp() - lastBlockTimestamp;
lastBlockTimestamp = block.getBlockData().getTimestamp();
fakeNTPOffset += blockTimeDelta;
NTP.setFixedOffset(fakeNTPOffset);
String lastOnlineAccountSignatures58 = Base58.encode(block.getBlockData().getOnlineAccountsSignatures());
if (!onlineAccountSignatures.contains(lastOnlineAccountSignatures58)) {
onlineAccountSignatures.add(lastOnlineAccountSignatures58);
}
}
// We expect at least 6 unique signatures over 30 blocks (generally 6-8, but could be higher due to block time differences)
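// (Rationale, assuming the pre-V2 modulus is 5 minutes: with the ~60s test block target, 30 blocks span
// roughly 30 minutes, i.e. about six 5-minute timestamp windows, hence at least 6 unique signatures.)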
System.out.println(String.format("onlineAccountSignatures count: %d", onlineAccountSignatures.size()));
assertTrue(onlineAccountSignatures.size() >= 6);
}
}
@Test
public void testOnlineAccountsModulusV2() throws IllegalAccessException, DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Set feature trigger timestamp to 0 so that it is active
FieldUtils.writeField(BlockChain.getInstance(), "onlineAccountsModulusV2Timestamp", 0L, true);
List<String> onlineAccountSignatures = new ArrayList<>();
long fakeNTPOffset = 0L;
// Mint a block and store its timestamp
Block block = BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
long lastBlockTimestamp = block.getBlockData().getTimestamp();
// Mint some blocks and keep track of the different online account signatures
for (int i = 0; i < 30; i++) {
block = BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
// Increase NTP fixed offset by the block time, to simulate time passing
long blockTimeDelta = block.getBlockData().getTimestamp() - lastBlockTimestamp;
lastBlockTimestamp = block.getBlockData().getTimestamp();
fakeNTPOffset += blockTimeDelta;
NTP.setFixedOffset(fakeNTPOffset);
String lastOnlineAccountSignatures58 = Base58.encode(block.getBlockData().getOnlineAccountsSignatures());
if (!onlineAccountSignatures.contains(lastOnlineAccountSignatures58)) {
onlineAccountSignatures.add(lastOnlineAccountSignatures58);
}
}
// We expect 1-3 unique signatures over 30 blocks
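// (Rationale, assuming the V2 modulus is 30 minutes: the same ~30 minutes of blocks fall within one or two
// 30-minute timestamp windows, so only 1-3 unique signatures are expected.)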
System.out.println(String.format("onlineAccountSignatures count: %d", onlineAccountSignatures.size()));
assertTrue(onlineAccountSignatures.size() >= 1 && onlineAccountSignatures.size() <= 3);
}
}
}

View File

@@ -0,0 +1,210 @@
package org.qortal.test.network;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import org.junit.Ignore;
import org.junit.Test;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.network.message.*;
import org.qortal.transform.Transformer;
import java.nio.ByteBuffer;
import java.security.Security;
import java.util.*;
import static org.junit.Assert.*;
public class OnlineAccountsV3Tests {
private static final Random RANDOM = new Random();
static {
// This must go before any calls to LogManager/Logger
System.setProperty("java.util.logging.manager", "org.apache.logging.log4j.jul.LogManager");
Security.insertProviderAt(new BouncyCastleProvider(), 0);
Security.insertProviderAt(new BouncyCastleJsseProvider(), 1);
}
@Ignore("For informational use")
@Test
public void compareV2ToV3() throws MessageException {
List<OnlineAccountData> onlineAccounts = generateOnlineAccounts(false);
// How many of each timestamp and leading byte (of public key)
Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByte = convertToHashMaps(onlineAccounts);
byte[] v3DataBytes = new GetOnlineAccountsV3Message(hashesByTimestampThenByte).toBytes();
int v3ByteSize = v3DataBytes.length;
byte[] v2DataBytes = new GetOnlineAccountsV2Message(onlineAccounts).toBytes();
int v2ByteSize = v2DataBytes.length;
int numTimestamps = hashesByTimestampThenByte.size();
System.out.printf("For %d accounts split across %d timestamp%s: V2 size %d vs V3 size %d%n",
onlineAccounts.size(),
numTimestamps,
numTimestamps != 1 ? "s" : "",
v2ByteSize,
v3ByteSize
);
for (var outerMapEntry : hashesByTimestampThenByte.entrySet()) {
long timestamp = outerMapEntry.getKey();
var innerMap = outerMapEntry.getValue();
System.out.printf("For timestamp %d: %d / 256 slots used.%n",
timestamp,
innerMap.size()
);
}
}
private Map<Long, Map<Byte, byte[]>> convertToHashMaps(List<OnlineAccountData> onlineAccounts) {
// How many of each timestamp and leading byte (of public key)
Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByte = new HashMap<>();
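// Fold every public key into its (timestamp, leading byte) bucket by XOR, so two peers holding the same set of
// online accounts end up with identical bucket hashes, and only buckets that differ need to be re-requested.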
for (OnlineAccountData onlineAccountData : onlineAccounts) {
Long timestamp = onlineAccountData.getTimestamp();
Byte leadingByte = onlineAccountData.getPublicKey()[0];
hashesByTimestampThenByte
.computeIfAbsent(timestamp, k -> new HashMap<>())
.compute(leadingByte, (k, v) -> OnlineAccountsManager.xorByteArrayInPlace(v, onlineAccountData.getPublicKey()));
}
return hashesByTimestampThenByte;
}
@Test
public void testOnGetOnlineAccountsV3() {
List<OnlineAccountData> ourOnlineAccounts = generateOnlineAccounts(false);
List<OnlineAccountData> peersOnlineAccounts = generateOnlineAccounts(false);
Map<Long, Map<Byte, byte[]>> ourConvertedHashes = convertToHashMaps(ourOnlineAccounts);
Map<Long, Map<Byte, byte[]>> peersConvertedHashes = convertToHashMaps(peersOnlineAccounts);
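// mockReply collects "timestamp:leadingByte" keys for each bucket whose hash differs from, or is missing in,
// the peer's set; that is, the buckets whose full account lists we would have to send back.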
List<String> mockReply = new ArrayList<>();
// Warning: no double-checking/fetching - we must be ConcurrentMap compatible!
// So no contains()-then-get() or multiple get()s on the same key/map.
for (var ourOuterMapEntry : ourConvertedHashes.entrySet()) {
Long timestamp = ourOuterMapEntry.getKey();
var ourInnerMap = ourOuterMapEntry.getValue();
var peersInnerMap = peersConvertedHashes.get(timestamp);
if (peersInnerMap == null) {
// Peer doesn't have this timestamp, so if it's valid (i.e. not too old) then we'd have to send all of ours
for (Byte leadingByte : ourInnerMap.keySet())
mockReply.add(timestamp + ":" + leadingByte);
} else {
// We have entries for this timestamp so compare against peer's entries
for (var ourInnerMapEntry : ourInnerMap.entrySet()) {
Byte leadingByte = ourInnerMapEntry.getKey();
byte[] peersHash = peersInnerMap.get(leadingByte);
if (!Arrays.equals(ourInnerMapEntry.getValue(), peersHash)) {
// We don't match peer, or peer doesn't have - send all online accounts for this timestamp and leading byte
mockReply.add(timestamp + ":" + leadingByte);
}
}
}
}
int numOurTimestamps = ourConvertedHashes.size();
System.out.printf("We have %d accounts split across %d timestamp%s%n",
ourOnlineAccounts.size(),
numOurTimestamps,
numOurTimestamps != 1 ? "s" : ""
);
int numPeerTimestamps = peersConvertedHashes.size();
System.out.printf("Peer sent %d accounts split across %d timestamp%s%n",
peersOnlineAccounts.size(),
numPeerTimestamps,
numPeerTimestamps != 1 ? "s" : ""
);
System.out.printf("We need to send: %d%n%s%n", mockReply.size(), String.join(", ", mockReply));
}
@Test
public void testSerialization() throws MessageException {
List<OnlineAccountData> onlineAccountsOut = generateOnlineAccounts(true);
Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByteOut = convertToHashMaps(onlineAccountsOut);
validateSerialization(hashesByTimestampThenByteOut);
}
@Test
public void testEmptySerialization() throws MessageException {
Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByteOut = Collections.emptyMap();
validateSerialization(hashesByTimestampThenByteOut);
hashesByTimestampThenByteOut = new HashMap<>();
validateSerialization(hashesByTimestampThenByteOut);
}
private void validateSerialization(Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByteOut) throws MessageException {
Message messageOut = new GetOnlineAccountsV3Message(hashesByTimestampThenByteOut);
byte[] messageBytes = messageOut.toBytes();
ByteBuffer byteBuffer = ByteBuffer.wrap(messageBytes).asReadOnlyBuffer();
GetOnlineAccountsV3Message messageIn = (GetOnlineAccountsV3Message) Message.fromByteBuffer(byteBuffer);
Map<Long, Map<Byte, byte[]>> hashesByTimestampThenByteIn = messageIn.getHashesByTimestampThenByte();
Set<Long> timestampsIn = hashesByTimestampThenByteIn.keySet();
Set<Long> timestampsOut = hashesByTimestampThenByteOut.keySet();
assertEquals("timestamp count mismatch", timestampsOut.size(), timestampsIn.size());
assertTrue("timestamps mismatch", timestampsIn.containsAll(timestampsOut));
for (Long timestamp : timestampsIn) {
Map<Byte, byte[]> hashesByByteIn = hashesByTimestampThenByteIn.get(timestamp);
Map<Byte, byte[]> hashesByByteOut = hashesByTimestampThenByteOut.get(timestamp);
assertNotNull("timestamp entry missing", hashesByByteOut);
Set<Byte> leadingBytesIn = hashesByByteIn.keySet();
Set<Byte> leadingBytesOut = hashesByByteOut.keySet();
assertEquals("leading byte entry count mismatch", leadingBytesOut.size(), leadingBytesIn.size());
assertTrue("leading byte entry mismatch", leadingBytesIn.containsAll(leadingBytesOut));
for (Byte leadingByte : leadingBytesOut) {
byte[] bytesIn = hashesByByteIn.get(leadingByte);
byte[] bytesOut = hashesByByteOut.get(leadingByte);
assertTrue("pubkey hash mismatch", Arrays.equals(bytesOut, bytesIn));
}
}
}
private List<OnlineAccountData> generateOnlineAccounts(boolean withSignatures) {
List<OnlineAccountData> onlineAccounts = new ArrayList<>();
int numTimestamps = RANDOM.nextInt(2) + 1; // 1 or 2
for (int t = 0; t < numTimestamps; ++t) {
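// Note: '+' binds tighter than '<<', so the expression below evaluates as (1 << (31 + t + 1)) << 12,
// which still produces a distinct timestamp per iteration (4096 for t = 0, 8192 for t = 1).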
long timestamp = 1 << 31 + (t + 1) << 12;
int numAccounts = RANDOM.nextInt(3000);
for (int a = 0; a < numAccounts; ++a) {
byte[] sig = null;
if (withSignatures) {
sig = new byte[Transformer.SIGNATURE_LENGTH];
RANDOM.nextBytes(sig);
}
byte[] pubkey = new byte[Transformer.PUBLIC_KEY_LENGTH];
RANDOM.nextBytes(pubkey);
onlineAccounts.add(new OnlineAccountData(timestamp, sig, pubkey));
}
}
return onlineAccounts;
}
}

View File

@@ -0,0 +1,89 @@
{
"isTestChain": true,
"blockTimestampMargin": 500,
"transactionExpiryPeriod": 86400000,
"maxBlockSize": 2097152,
"maxBytesPerUnitFee": 1024,
"unitFee": "0.1",
"nameRegistrationUnitFees": [
{ "timestamp": 1645372800000, "fee": "5" }
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "id": 1, "levels": [ 1, 2 ], "share": 0.05 },
{ "id": 2, "levels": [ 3, 4 ], "share": 0.10 },
{ "id": 3, "levels": [ 5, 6 ], "share": 0.15 },
{ "id": 4, "levels": [ 7, 8 ], "share": 0.20 },
{ "id": 5, "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"minAccountsToActivateShareBin": 30,
"shareBinActivationMinLevel": 7,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 },
{ "height": 2, "target": 70000, "deviation": 10000, "power": 0.8 }
],
"ciyamAtSettings": {
"feePerStep": "0.0001",
"maxStepsPerRound": 500,
"stepsPerFunctionCall": 10,
"minutesPerBlock": 1
},
"featureTriggers": {
"messageHeight": 0,
"atHeight": 0,
"assetsTimestamp": 0,
"votingTimestamp": 0,
"arbitraryTimestamp": 0,
"powfixTimestamp": 0,
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 9999999999999,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,
"timestamp": 0,
"transactions": [
{ "type": "ISSUE_ASSET", "assetName": "QORT", "description": "QORT native coin", "data": "", "quantity": 0, "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "assetName": "Legacy-QORA", "description": "Representative legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "ISSUE_ASSET", "assetName": "QORT-from-QORA", "description": "QORT gained from holding legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "GENESIS", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "amount": "1000000000" },
{ "type": "GENESIS", "recipient": "QixPbJUwsaHsVEofJdozU9zgVqkK6aYhrK", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "QaUpHNhT3Ygx6avRiKobuLdusppR5biXjL", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "amount": "1000000" },
{ "type": "CREATE_GROUP", "creatorPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "groupName": "dev-group", "description": "developer group", "isOpen": false, "approvalThreshold": "PCT100", "minimumBlockDelay": 0, "maximumBlockDelay": 1440 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "TEST", "description": "test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry", "assetName": "OTHER", "description": "other test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "GOLD", "description": "gold test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ACCOUNT_FLAGS", "target": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "andMask": -1, "orMask": 1, "xorMask": 0 },
{ "type": "REWARD_SHARE", "minterPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "rewardSharePublicKey": "7PpfnvLSG7y4HPh8hE7KoqAjLCkv7Ui6xw4mKAkbZtox", "sharePercent": "100" },
{ "type": "ACCOUNT_LEVEL", "target": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "level": 5 }
]
}
}

View File

@@ -0,0 +1,92 @@
{
"isTestChain": true,
"blockTimestampMargin": 500,
"transactionExpiryPeriod": 86400000,
"maxBlockSize": 2097152,
"maxBytesPerUnitFee": 1024,
"unitFee": "0.1",
"nameRegistrationUnitFees": [
{ "timestamp": 1645372800000, "fee": "5" }
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "id": 1, "levels": [ 1, 2 ], "share": 0.05 },
{ "id": 2, "levels": [ 3, 4 ], "share": 0.10 },
{ "id": 3, "levels": [ 5, 6 ], "share": 0.15 },
{ "id": 4, "levels": [ 7, 8 ], "share": 0.20 },
{ "id": 5, "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"minAccountsToActivateShareBin": 30,
"shareBinActivationMinLevel": 7,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
],
"ciyamAtSettings": {
"feePerStep": "0.0001",
"maxStepsPerRound": 500,
"stepsPerFunctionCall": 10,
"minutesPerBlock": 1
},
"featureTriggers": {
"messageHeight": 0,
"atHeight": 0,
"assetsTimestamp": 0,
"votingTimestamp": 0,
"arbitraryTimestamp": 0,
"powfixTimestamp": 0,
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 0,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,
"timestamp": 0,
"transactions": [
{ "type": "ISSUE_ASSET", "assetName": "QORT", "description": "QORT native coin", "data": "", "quantity": 0, "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "assetName": "Legacy-QORA", "description": "Representative legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "ISSUE_ASSET", "assetName": "QORT-from-QORA", "description": "QORT gained from holding legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "GENESIS", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "amount": "1000000000" },
{ "type": "GENESIS", "recipient": "QixPbJUwsaHsVEofJdozU9zgVqkK6aYhrK", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "QaUpHNhT3Ygx6avRiKobuLdusppR5biXjL", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "amount": "1000000" },
{ "type": "CREATE_GROUP", "creatorPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "groupName": "dev-group", "description": "developer group", "isOpen": false, "approvalThreshold": "PCT100", "minimumBlockDelay": 0, "maximumBlockDelay": 1440 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "TEST", "description": "test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry", "assetName": "OTHER", "description": "other test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "GOLD", "description": "gold test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ACCOUNT_FLAGS", "target": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "andMask": -1, "orMask": 1, "xorMask": 0 },
{ "type": "REWARD_SHARE", "minterPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "rewardSharePublicKey": "7PpfnvLSG7y4HPh8hE7KoqAjLCkv7Ui6xw4mKAkbZtox", "sharePercent": "100" },
{ "type": "ACCOUNT_LEVEL", "target": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "level": 5 }
]
}
}

View File

@@ -10,24 +10,31 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"onlineAccountsModulusV2Timestamp": 9999999999999,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
{ "id": 1, "levels": [ 1, 2 ], "share": 0.05 },
{ "id": 2, "levels": [ 3, 4 ], "share": 0.10 },
{ "id": 3, "levels": [ 5, 6 ], "share": 0.15 },
{ "id": 4, "levels": [ 7, 8 ], "share": 0.20 },
{ "id": 5, "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"minAccountsToActivateShareBin": 30,
"shareBinActivationMinLevel": 7,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
@@ -51,9 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -10,24 +10,31 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"onlineAccountsModulusV2Timestamp": 9999999999999,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
{ "id": 1, "levels": [ 1, 2 ], "share": 0.05 },
{ "id": 2, "levels": [ 3, 4 ], "share": 0.10 },
{ "id": 3, "levels": [ 5, 6 ], "share": 0.15 },
{ "id": 4, "levels": [ 7, 8 ], "share": 0.20 },
{ "id": 5, "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"minAccountsToActivateShareBin": 30,
"shareBinActivationMinLevel": 7,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
@@ -51,9 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -10,24 +10,31 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"onlineAccountsModulusV2Timestamp": 9999999999999,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
{ "id": 1, "levels": [ 1, 2 ], "share": 0.05 },
{ "id": 2, "levels": [ 3, 4 ], "share": 0.10 },
{ "id": 3, "levels": [ 5, 6 ], "share": 0.15 },
{ "id": 4, "levels": [ 7, 8 ], "share": 0.20 },
{ "id": 5, "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"minAccountsToActivateShareBin": 30,
"shareBinActivationMinLevel": 7,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
@@ -51,9 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -0,0 +1,87 @@
{
"isTestChain": true,
"blockTimestampMargin": 500,
"transactionExpiryPeriod": 86400000,
"maxBlockSize": 2097152,
"maxBytesPerUnitFee": 1024,
"unitFee": "0.1",
"nameRegistrationUnitFees": [
{ "timestamp": 1645372800000, "fee": "5" }
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"onlineAccountsModulusV2Timestamp": 9999999999999,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
],
"ciyamAtSettings": {
"feePerStep": "0.0001",
"maxStepsPerRound": 500,
"stepsPerFunctionCall": 10,
"minutesPerBlock": 1
},
"featureTriggers": {
"messageHeight": 0,
"atHeight": 0,
"assetsTimestamp": 0,
"votingTimestamp": 0,
"arbitraryTimestamp": 0,
"powfixTimestamp": 0,
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,
"timestamp": 0,
"transactions": [
{ "type": "ISSUE_ASSET", "assetName": "QORT", "description": "QORT native coin", "data": "", "quantity": 0, "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "assetName": "Legacy-QORA", "description": "Representative legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "ISSUE_ASSET", "assetName": "QORT-from-QORA", "description": "QORT gained from holding legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "GENESIS", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "amount": "1000000000" },
{ "type": "GENESIS", "recipient": "QixPbJUwsaHsVEofJdozU9zgVqkK6aYhrK", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "QaUpHNhT3Ygx6avRiKobuLdusppR5biXjL", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "amount": "1000000" },
{ "type": "CREATE_GROUP", "creatorPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "groupName": "dev-group", "description": "developer group", "isOpen": false, "approvalThreshold": "PCT100", "minimumBlockDelay": 0, "maximumBlockDelay": 1440 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "TEST", "description": "test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry", "assetName": "OTHER", "description": "other test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "GOLD", "description": "gold test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ACCOUNT_FLAGS", "target": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "andMask": -1, "orMask": 1, "xorMask": 0 },
{ "type": "REWARD_SHARE", "minterPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "rewardSharePublicKey": "7PpfnvLSG7y4HPh8hE7KoqAjLCkv7Ui6xw4mKAkbZtox", "sharePercent": "100" },
{ "type": "ACCOUNT_LEVEL", "target": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "level": 5 }
]
}
}

View File

@@ -10,24 +10,31 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"onlineAccountsModulusV2Timestamp": 9999999999999,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
{ "id": 1, "levels": [ 1, 2 ], "share": 0.05 },
{ "id": 2, "levels": [ 3, 4 ], "share": 0.10 },
{ "id": 3, "levels": [ 5, 6 ], "share": 0.15 },
{ "id": 4, "levels": [ 7, 8 ], "share": 0.20 },
{ "id": 5, "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"minAccountsToActivateShareBin": 30,
"shareBinActivationMinLevel": 7,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
@@ -51,9 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -10,24 +10,31 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"onlineAccountsModulusV2Timestamp": 9999999999999,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
{ "id": 1, "levels": [ 1, 2 ], "share": 0.05 },
{ "id": 2, "levels": [ 3, 4 ], "share": 0.10 },
{ "id": 3, "levels": [ 5, 6 ], "share": 0.15 },
{ "id": 4, "levels": [ 7, 8 ], "share": 0.20 },
{ "id": 5, "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"minAccountsToActivateShareBin": 30,
"shareBinActivationMinLevel": 7,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
@@ -51,9 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -10,24 +10,31 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 20 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"onlineAccountsModulusV2Timestamp": 9999999999999,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
{ "id": 1, "levels": [ 1, 2 ], "share": 0.05 },
{ "id": 2, "levels": [ 3, 4 ], "share": 0.10 },
{ "id": 3, "levels": [ 5, 6 ], "share": 0.15 },
{ "id": 4, "levels": [ 7, 8 ], "share": 0.20 },
{ "id": 5, "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"minAccountsToActivateShareBin": 1,
"shareBinActivationMinLevel": 7,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
@@ -51,9 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 6,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -10,24 +10,31 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 20 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"onlineAccountsModulusV2Timestamp": 9999999999999,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
{ "id": 1, "levels": [ 1, 2 ], "share": 0.05 },
{ "id": 2, "levels": [ 3, 4 ], "share": 0.10 },
{ "id": 3, "levels": [ 5, 6 ], "share": 0.15 },
{ "id": 4, "levels": [ 7, 8 ], "share": 0.20 },
{ "id": 5, "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"minAccountsToActivateShareBin": 30,
"shareBinActivationMinLevel": 7,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
@@ -51,9 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -0,0 +1,93 @@
{
"isTestChain": true,
"blockTimestampMargin": 500,
"transactionExpiryPeriod": 86400000,
"maxBlockSize": 2097152,
"maxBytesPerUnitFee": 1024,
"unitFee": "0.1",
"nameRegistrationUnitFees": [
{ "timestamp": 1645372800000, "fee": "5" }
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 1655460000000, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "id": 1, "levels": [ 1, 2 ], "share": 0.05 },
{ "id": 2, "levels": [ 3, 4 ], "share": 0.10 },
{ "id": 3, "levels": [ 5, 6 ], "share": 0.15 },
{ "id": 4, "levels": [ 7, 8 ], "share": 0.20 },
{ "id": 5, "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"minAccountsToActivateShareBin": 30,
"shareBinActivationMinLevel": 7,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
],
"ciyamAtSettings": {
"feePerStep": "0.0001",
"maxStepsPerRound": 500,
"stepsPerFunctionCall": 10,
"minutesPerBlock": 1
},
"featureTriggers": {
"messageHeight": 0,
"atHeight": 0,
"assetsTimestamp": 0,
"votingTimestamp": 0,
"arbitraryTimestamp": 0,
"powfixTimestamp": 0,
"qortalTimestamp": 0,
"newAssetPricingTimestamp": 0,
"groupApprovalTimestamp": 0,
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 0,
"calcChainWeightTimestamp": 0,
"newConsensusTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,
"timestamp": 0,
"transactions": [
{ "type": "ISSUE_ASSET", "assetName": "QORT", "description": "QORT native coin", "data": "", "quantity": 0, "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "assetName": "Legacy-QORA", "description": "Representative legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "ISSUE_ASSET", "assetName": "QORT-from-QORA", "description": "QORT gained from holding legacy QORA", "quantity": 0, "isDivisible": true, "data": "{}", "isUnspendable": true },
{ "type": "GENESIS", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "amount": "1000000000" },
{ "type": "GENESIS", "recipient": "QixPbJUwsaHsVEofJdozU9zgVqkK6aYhrK", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "QaUpHNhT3Ygx6avRiKobuLdusppR5biXjL", "amount": "1000000" },
{ "type": "GENESIS", "recipient": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "amount": "1000000" },
{ "type": "CREATE_GROUP", "creatorPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "groupName": "dev-group", "description": "developer group", "isOpen": false, "approvalThreshold": "PCT100", "minimumBlockDelay": 0, "maximumBlockDelay": 1440 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "TEST", "description": "test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry", "assetName": "OTHER", "description": "other test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "GOLD", "description": "gold test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ACCOUNT_FLAGS", "target": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "andMask": -1, "orMask": 1, "xorMask": 0 },
{ "type": "REWARD_SHARE", "minterPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "recipient": "QgV4s3xnzLhVBEJxcYui4u4q11yhUHsd9v", "rewardSharePublicKey": "7PpfnvLSG7y4HPh8hE7KoqAjLCkv7Ui6xw4mKAkbZtox", "sharePercent": "100" },
{ "type": "ACCOUNT_LEVEL", "target": "Qci5m9k4rcwe4ruKrZZQKka4FzUUMut3er", "level": 5 }
]
}
}

View File

@@ -10,24 +10,31 @@
],
"requireGroupForApproval": false,
"minAccountLevelToRewardShare": 5,
"maxRewardSharesPerMintingAccount": 20,
"maxRewardSharesPerFounderMintingAccount": 6,
"maxRewardSharesByTimestamp": [
{ "timestamp": 0, "maxShares": 6 },
{ "timestamp": 9999999999999, "maxShares": 3 }
],
"founderEffectiveMintingLevel": 10,
"onlineAccountSignaturesMinLifetime": 3600000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"onlineAccountsModulusV2Timestamp": 9999999999999,
"rewardsByHeight": [
{ "height": 1, "reward": 100 },
{ "height": 11, "reward": 10 },
{ "height": 21, "reward": 1 }
],
"sharesByLevel": [
{ "levels": [ 1, 2 ], "share": 0.05 },
{ "levels": [ 3, 4 ], "share": 0.10 },
{ "levels": [ 5, 6 ], "share": 0.15 },
{ "levels": [ 7, 8 ], "share": 0.20 },
{ "levels": [ 9, 10 ], "share": 0.25 }
{ "id": 1, "levels": [ 1, 2 ], "share": 0.05 },
{ "id": 2, "levels": [ 3, 4 ], "share": 0.10 },
{ "id": 3, "levels": [ 5, 6 ], "share": 0.15 },
{ "id": 4, "levels": [ 7, 8 ], "share": 0.20 },
{ "id": 5, "levels": [ 9, 10 ], "share": 0.25 }
],
"qoraHoldersShare": 0.20,
"qoraPerQortReward": 250,
"minAccountsToActivateShareBin": 30,
"shareBinActivationMinLevel": 7,
"blocksNeededByLevel": [ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100 ],
"blockTimingsByHeight": [
{ "height": 1, "target": 60000, "deviation": 30000, "power": 0.2 }
@@ -51,9 +58,12 @@
"atFindNextTransactionFix": 0,
"newBlockSigHeight": 999999,
"shareBinFix": 999999,
"rewardShareLimitTimestamp": 9999999999999,
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
},
"genesisInfo": {
"version": 4,

View File

@@ -0,0 +1,19 @@
{
"repositoryPath": "testdb",
"bitcoinNet": "TEST3",
"litecoinNet": "TEST3",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-block-timestamps.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0,
"pruneBlockLimit": 100,
"bootstrapFilenamePrefix": "test-",
"dataPath": "data-test",
"tempDataPath": "data-test/_temp",
"listsPath": "lists-test",
"storagePolicy": "FOLLOWED_OR_VIEWED",
"maxStorageCapacity": 104857600
}

View File

@@ -0,0 +1,11 @@
{
"repositoryPath": "testdb",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-disable-reference.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0,
"pruneBlockLimit": 100
}

View File

@@ -0,0 +1,19 @@
{
"repositoryPath": "testdb",
"bitcoinNet": "TEST3",
"litecoinNet": "TEST3",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-no-sig-agg.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0,
"pruneBlockLimit": 100,
"bootstrapFilenamePrefix": "test-",
"dataPath": "data-test",
"tempDataPath": "data-test/_temp",
"listsPath": "lists-test",
"storagePolicy": "FOLLOWED_OR_VIEWED",
"maxStorageCapacity": 104857600
}

View File

@@ -0,0 +1,19 @@
{
"repositoryPath": "testdb",
"bitcoinNet": "TEST3",
"litecoinNet": "TEST3",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-reward-shares.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0,
"pruneBlockLimit": 100,
"bootstrapFilenamePrefix": "test-",
"dataPath": "data-test",
"tempDataPath": "data-test/_temp",
"listsPath": "lists-test",
"storagePolicy": "FOLLOWED_OR_VIEWED",
"maxStorageCapacity": 104857600
}