Compare commits

...

75 Commits

Author SHA1 Message Date
CalDescent
9255df46cf Script updates to support add/remove dev group admins 2022-11-06 19:46:12 +00:00
CalDescent
818e037e75 Merge branch 'master' into null-owned-groups 2022-11-06 13:08:54 +00:00
CalDescent
9c68f1038a Bump AT version to 1.4.0 2022-11-05 14:02:04 +00:00
CalDescent
10ae383bb6 Merge pull request #102 from catbref/faster-qort-buy-PoC
Proof of concept: speed up QORT buying
2022-11-01 18:55:21 +00:00
catbref
aead9cfcbf Proof of concept: speed up QORT buying
When users buy QORT ("Alice"-side), most of the API time is spent computing mempow for the MESSAGE sent to Bob's AT.
This is the final stage of startResponse(), and happens after Alice's P2SH has already been broadcast.

To speed this up, the MESSAGE part is moved into its own thread allowing startResponse() to return sooner, improving the user experience.

Caveats:
If MESSAGE importAsUnconfirmed() somehow fails then the buy won't complete and Alice will have to wait for the P2SH refund.
If Alice shuts down her node while MESSAGE mempow is being computed then it's possible the shutdown will be blocked until mempow is complete.

Currently only implemented in LitecoinACCTv3TradeBot, as this is only a proof of concept.
Tested with multiple buys in the same block.
2022-11-01 08:55:57 +00:00
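A minimal sketch of the threading change described above, assuming hypothetical method names (the actual change is confined to LitecoinACCTv3TradeBot):
```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncBuyMessageSketch {
    // Hypothetical single-threaded executor so MESSAGE work stays ordered
    private static final ExecutorService MESSAGE_EXECUTOR = Executors.newSingleThreadExecutor();

    public static void startResponse(byte[] p2shTransaction, byte[] messagePayload) {
        broadcastP2sh(p2shTransaction);

        // Previously the mempow computation and MESSAGE import ran inline here,
        // blocking the API call. Offloading them lets startResponse() return sooner.
        MESSAGE_EXECUTOR.submit(() -> {
            int nonce = computeMemPoW(messagePayload);          // the slow part
            importMessageAsUnconfirmed(messagePayload, nonce);  // may fail, see caveats above
        });
    }

    private static void broadcastP2sh(byte[] tx) { /* broadcast Alice's P2SH */ }
    private static int computeMemPoW(byte[] data) { return 0; /* placeholder */ }
    private static void importMessageAsUnconfirmed(byte[] data, int nonce) { /* placeholder */ }
}
```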
CalDescent
985c195e9e Added GIF_REPOSITORY, with custom validation function and unit tests. 2022-10-30 17:33:21 +00:00
CalDescent
0628847d14 Removed QORTAL_METADATA service tests. 2022-10-30 17:25:11 +00:00
CalDescent
4043ae1928 Added QCHAT_IMAGE service (with 500KB file size limit). 2022-10-30 17:23:46 +00:00
CalDescent
fa80c83864 Remove QORTAL_METADATA service as this uses its own protocol instead. 2022-10-30 17:07:56 +00:00
CalDescent
f739d8f5c6 Added increaseOnlineAccountsDifficultyTimestamp feature trigger to unit tests. 2022-10-28 18:06:34 +01:00
CalDescent
166425bee9 Added feature trigger timestamp (TBC) to increase online accounts mempow difficulty (also TBC). 2022-10-28 17:20:39 +01:00
CalDescent
59a804c560 Include "blocks remaining" in systray when syncing from more than 60 minutes away from a peer's chain tip. 2022-10-28 16:57:52 +01:00
CalDescent
b64c053531 Reuse the work buffer when verifying online accounts from the OnlineAccountsManager import queue.
This is a hopeful fix for the extra memory usage seen since mempow activated, which was adding a lot of load to the garbage collector. It only applies to accounts verified from the import queue; the optimization hasn't been applied to block processing. However, verifying online accounts when processing blocks is rare and generally only lasts a short amount of time.
2022-10-28 16:54:53 +01:00
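A rough illustration of the buffer-reuse idea, with hypothetical names and sizes (the real verification lives in OnlineAccountsManager):
```
import java.util.Arrays;

public class WorkBufferSketch {
    // One long-lived buffer per verifying thread, instead of allocating a fresh
    // array for every account pulled from the import queue.
    private static final ThreadLocal<int[]> WORK_BUFFER =
            ThreadLocal.withInitial(() -> new int[2 * 1024 * 1024]); // hypothetical size

    public static boolean verifyNonce(byte[] data, int nonce, int difficulty) {
        int[] buffer = WORK_BUFFER.get();
        Arrays.fill(buffer, 0); // reset between verifications instead of reallocating

        // ... run the memory-hard verification against 'buffer' here ...
        return true; // placeholder
    }
}
```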
CalDescent
30cd56165a Speed up syncing blocks in the range of 1-12 hours ago by caching the valid online accounts. 2022-10-28 16:02:25 +01:00
CalDescent
510328db47 Removed unused timestamp value. 2022-10-28 15:50:43 +01:00
CalDescent
f83d4bac7b Reduced online accounts mempow difficulty to 5 on testnets.
This allows testnets to more easily coexist on machines that are also running a mainnet instance, while still testing the mempow computation and verification in a non-resource-intensive way.
2022-10-23 17:01:58 +01:00
CalDescent
b3273ff01a Removed all mempow feature trigger conditionals.
We no longer need all the code complexity, now that 24 hours have passed since activation. We don't validate online accounts beyond 12 hours, and the data is trimmed after 24 hours.
2022-10-23 16:47:42 +01:00
CalDescent
1d5497e484 Modifications to support a single node testnet:
- Added "singleNodeTestnet" setting, allowing for fast and consecutive block minting, and no requirement for a minimum number of peers.
- Added "recoveryModeTimeout" setting (previously hardcoded in Synchronizer).
- Updated testnets documentation to include new settings and a quick start guide.
- Added "generic" minting account that can be used in testnets (not functional on mainnet), to simplify the process for new devs.
2022-10-23 14:13:38 +01:00
CalDescent
b37aa749c6 Removed onlineAccountsMemPoWEnabled setting as it's no longer needed. 2022-10-22 19:34:24 +01:00
CalDescent
e45ad37eb5 Fixed bug which could prevent invalid accounts being removed from the queue until the next valid one is added. 2022-10-22 19:30:08 +01:00
CalDescent
72985b1fc6 Reduce log spam, especially around the time of node startup before online accounts have been retrieved.
We expect a "Couldn't build a to-be-minted block" log on every startup due to trying to mint before having any accounts. This log has been moved from error to info level because error logs can be quite intrusive when using an IDE.
2022-10-22 19:24:54 +01:00
CalDescent
6f27d3798c Improved online accounts processing, to avoid creating keys in the map before validation. 2022-10-22 19:18:41 +01:00
CalDescent
57125a91cf Bump version to 3.6.4 2022-10-15 18:59:42 +01:00
CalDescent
3c565638c1 onlineAccountsMemoryPoWTimestamp set to Sat Oct 22 2022 16:00:00 UTC 2022-10-15 18:58:13 +01:00
CalDescent
c2d02aead9 Default minPeerVersion set to 3.6.3 2022-10-14 18:44:25 +01:00
CalDescent
0d9aafaf4e Reduced log spam 2022-10-14 17:03:10 +01:00
CalDescent
3844358380 Mark a peer as misbehaved if it fails to respond with a usable block 3 times in a row.
This should help to work around deserialization and missing-response issues.
2022-10-14 16:38:05 +01:00
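A small sketch of the consecutive-failure threshold described here (hypothetical fields; the real tracking is attached to the peer/sync code):
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PeerBlockFailureSketch {
    private static final int MAX_CONSECUTIVE_FAILURES = 3;
    private final Map<String, Integer> failureCounts = new ConcurrentHashMap<>();

    public void onBlockResponse(String peerAddress, boolean usableBlock) {
        if (usableBlock) {
            failureCounts.remove(peerAddress); // any success resets the count
            return;
        }

        int failures = failureCounts.merge(peerAddress, 1, Integer::sum);
        if (failures >= MAX_CONSECUTIVE_FAILURES)
            markPeerMisbehaved(peerAddress);
    }

    private void markPeerMisbehaved(String peerAddress) { /* avoid this peer for a while */ }
}
```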
CalDescent
b4125d2bf1 Fix for NPE in verifyMemoryPoW() 2022-10-14 11:34:46 +01:00
CalDescent
5c223179ed Updated AdvancedInstaller project for v3.6.3 2022-10-13 23:37:21 +01:00
CalDescent
f3cb57417a Merge branch 'master' of github.com:Qortal/qortal 2022-10-13 23:36:27 +01:00
CalDescent
7c7f071eba Bump version to 3.6.3 2022-10-12 08:54:27 +01:00
CalDescent
7c15d88cbc Fix for issue in BLOCK_SUMMARIES_V2 when sending an empty array of summaries.
The BLOCK_SUMMARIES message type would differentiate between an empty response and a missing/invalid response. However, in V2, a response with empty summaries would throw a BufferUnderflowException and be treated by the caller as a null message.

This caused problems when trying to find a common block with peers that have diverged by more than 8 blocks. With V1 the caller would know to search back further (e.g. 16 blocks) but in V2 it was treated as "no response" and so the caller would give up instead of increasing the look-back threshold.

This fix will identify BLOCK_SUMMARIES_V2 messages with no content, and return an empty array of block summaries instead of a null message.

Should be enough to recover any stuck nodes, as long as they haven't diverged more than 240 blocks from the main chain.
2022-10-12 08:52:58 +01:00
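A sketch of the guard this fix implies when deserializing an empty BLOCK_SUMMARIES_V2 payload (hypothetical; not the actual message class):
```
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class EmptySummariesSketch {
    public static List<Integer> summaryHeightsFromByteBuffer(ByteBuffer bytes) {
        // Previously an empty payload threw BufferUnderflowException and the caller
        // saw a null message; now an empty payload simply means "no summaries".
        if (!bytes.hasRemaining())
            return Collections.emptyList();

        List<Integer> heights = new ArrayList<>();
        while (bytes.hasRemaining())
            heights.add(bytes.getInt()); // placeholder for decoding a full summary

        return heights;
    }
}
```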
CalDescent
d4aaba2293 Bump version to 3.6.2 2022-10-10 19:06:08 +01:00
CalDescent
10d3176e70 Revert "Always use BlockSummariesMessage V1 (instead of V2) when responding to GetBlockSummaries requests."
This reverts commit 2d58118d7c.
2022-10-10 10:28:44 +01:00
CalDescent
36fcd6792a Discard BLOCK_SUMMARIES_V2 messages with an ID (thanks to @catbref for the code)
This is a better fix for the "contaminated chain tip summaries" issue. Need to reduce the logging level to debug before release.
2022-10-10 10:28:36 +01:00
CalDescent
cb1eee8ff5 GenericUnknownMessage.MINIMUM_PEER_VERSION set to 3.6.1.
This should ideally have been set in the 3.6.1 release, but not setting it is unlikely to have caused any problems.
2022-10-09 20:37:39 +01:00
CalDescent
2d58118d7c Always use BlockSummariesMessage V1 (instead of V2) when responding to GetBlockSummaries requests.
This should hopefully fix a potential issue where a peer's chain tip data becomes contaminated with other summary data, causing incorrect sync decisions.
2022-10-09 20:11:01 +01:00
CalDescent
e6bb0b81cf Revert "Reduce INITIAL_BLOCK_STEP from 8 to 7."
This reverts commit 0088ba8485.
2022-10-09 19:11:20 +01:00
CalDescent
77d60fc33f Revert "Skip GET_BLOCK_SUMMARIES requests if it can already be fulfilled entirely from the peer's chain tip block summaries cache."
This reverts commit 8cedf618f4.
2022-10-09 14:11:28 +01:00
CalDescent
504f38b42a Merge pull request #97 from Nuc1eoN/patch-1
Mark start/stop scripts as executables
2022-10-08 19:49:10 +01:00
Nuc1eoN
3a18599d85 Mark start/stop scripts as executables
The `start.sh` & `stop.sh` scripts have already been marked as executable in the source folder... But since we have only piped their contents, we need to set the correct file permissions again.
2022-10-07 23:35:35 +02:00
CalDescent
0088ba8485 Reduce INITIAL_BLOCK_STEP from 8 to 7.
This allows the first pass to always be served from the peer's cache of 8 summaries. A maximum of 7 can be returned, because the 8th spot is needed for the parent block's signature.
2022-10-07 14:47:46 +01:00
CalDescent
8cedf618f4 Skip GET_BLOCK_SUMMARIES requests if it can already be fulfilled entirely from the peer's chain tip block summaries cache.
Loading from the cache should speed up sync decisions, particularly when choosing which peer to sync from. The greater the number of connected peers, the more significant this optimization will be. It should also reduce wasted network requests and data usage.

Adding this check prior to making a network request is a simple way to introduce the new cached summaries from BLOCK_SUMMARIES_V2 without having to rewrite a lot of the complex sync / peer comparison logic. Longer term we may want to rewrite that logic to read from the cache directly, but it doesn't make sense to introduce that level of risk at this point in time, especially as the Synchronizer may be rewritten soon to prefer longer chains.

Even so, this is still quite a high risk commit so lots of testing will be needed.
2022-10-07 14:46:09 +01:00
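A simplified sketch of the pre-request check, under the assumption that the peer's cached chain tip summaries are consecutive (hypothetical types; the real logic sits alongside the Synchronizer's peer comparison):
```
import java.util.List;

public class SummariesCacheSketch {
    static class BlockSummary { int height; }
    static class Peer { List<BlockSummary> getChainTipSummaries() { return List.of(); } }

    // Serve the request from the peer's cached summaries when they cover the whole
    // range; otherwise fall back to the existing network request.
    public static List<BlockSummary> getSummaries(Peer peer, int commonBlockHeight, int count) {
        List<BlockSummary> cached = peer.getChainTipSummaries();
        if (!cached.isEmpty()
                && cached.get(0).height <= commonBlockHeight + 1
                && cached.get(cached.size() - 1).height >= commonBlockHeight + count)
            return cached; // a real implementation would slice out just the requested range

        return requestSummariesFromPeer(peer, commonBlockHeight, count);
    }

    static List<BlockSummary> requestSummariesFromPeer(Peer peer, int height, int count) {
        return List.of(); // placeholder for the existing network round trip
    }
}
```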
CalDescent
fdd95eac56 Limit to 240 blocks in syncToPeerChain().
Should fix OutOfMemoryException often seen when syncing from 1000+ blocks behind the chain tip.
2022-10-07 11:05:24 +01:00
CalDescent
10b0f0a054 Catch JSON exceptions in PirateChainWalletController.
Previously, an uncaught exception after losing the connection while syncing an existing wallet could prevent additional wallets from being initialized.
2022-10-05 15:29:29 +01:00
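A minimal illustration of the guard described here, assuming org.json's JSONException (hypothetical structure; the real loop is in PirateChainWalletController):
```
public class WalletSyncSketch {
    interface Wallet {
        void sync() throws org.json.JSONException;
    }

    public static void syncWallets(Iterable<Wallet> wallets) {
        for (Wallet wallet : wallets) {
            try {
                wallet.sync();
            } catch (org.json.JSONException e) {
                // A malformed or interrupted response for one wallet no longer
                // blocks initialization of the remaining wallets.
            }
        }
    }
}
```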
CalDescent
1233ba6703 Bump version to 3.6.1 2022-10-04 20:08:30 +01:00
CalDescent
c35c7180d4 Return empty levels in GET /addresses/online/levels 2022-10-03 10:58:47 +01:00
CalDescent
7080b55aac Reintroduced initial sleep period in block archiver. 2022-09-25 19:43:56 +01:00
CalDescent
3890fa8490 Renamed constant for consistency 2022-09-25 18:46:33 +01:00
CalDescent
a9721bab3d Fixed issue causing startup of various components to be delayed by 30 seconds. 2022-09-25 18:39:56 +01:00
CalDescent
1bb8f1b6d2 Fixed bug in last commit.
We need to track items to remove separately from items to add, otherwise invalid accounts remain in the queue.
2022-09-25 12:36:00 +01:00
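A sketch combining this fix with the reordering in the commit below it: entries stay in the queue while being processed, and both valid and invalid entries are removed afterwards in a batch (hypothetical types):
```
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ImportQueueSketch {
    static class OnlineAccount { }

    private final ConcurrentLinkedQueue<OnlineAccount> importQueue = new ConcurrentLinkedQueue<>();

    public void processQueue() {
        List<OnlineAccount> accountsToAdd = new ArrayList<>();
        List<OnlineAccount> accountsToRemove = new ArrayList<>();

        // Entries remain in the queue while being validated, so duplicates arriving
        // mid-import are still recognisable as already queued.
        for (OnlineAccount account : importQueue) {
            if (validate(account))
                accountsToAdd.add(account);
            accountsToRemove.add(account); // invalid entries must be removed too
        }

        importAccounts(accountsToAdd);

        // Only now remove the processed batch, valid and invalid alike.
        importQueue.removeAll(accountsToRemove);
    }

    private boolean validate(OnlineAccount account) { return true; } // placeholder
    private void importAccounts(List<OnlineAccount> accounts) { }    // placeholder
}
```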
CalDescent
765416db71 Yet another attempt to optimize the online accounts import queue processing.
The main difference here is that we now remove items from the onlineAccountsImportQueue in a batch, _after_ they have been imported. This prevents duplicates from being added to the queue during the time gap that previously existed between items being removed and being imported.
2022-09-25 12:26:00 +01:00
CalDescent
5989473c8a Revert "Allow duplicate variations of each OnlineAccountData in the import queue, but don't allow two entries that match exactly."
This reverts commit 6d9e6e8d4c.
2022-09-25 12:06:14 +01:00
CalDescent
aa9da45c01 Added optional filtering by reference in GET /chat/messages 2022-09-25 11:38:17 +01:00
CalDescent
4681218416 Include total count in debug trade presence logging 2022-09-24 15:49:29 +01:00
CalDescent
5c746f0bd9 Fixed bug which required a node to hold local trade presences before it would request any.
This caused large gaps with no presence data. Presences are removed when they expire, causing the local count to drop to zero, and the node would only start requesting them again once a peer had pushed one or more entries proactively.
2022-09-24 15:48:45 +01:00
CalDescent
309f27a6b8 Moved error to debug, as we now get a burst of these soon after startup, due to commit 99858f3.
This also shows that commit 99858f3 now prevents a block candidate with a very small number of online accounts from being built immediately after startup.
2022-09-24 15:21:01 +01:00
CalDescent
d2ebb215e6 Fixed Synchronizer.getBlockSummaries() which was expecting BLOCK_SUMMARIES, but updated peers send BLOCK_SUMMARIES_V2 2022-09-24 14:36:49 +01:00
CalDescent
7a60f713ea Fixed error in rebase. 2022-09-24 14:35:02 +01:00
CalDescent
e80dd31fb4 BlockSummariesV2Message.MINIMUM_PEER_VERSION set to 3.6.1 2022-09-24 13:53:27 +01:00
catbref
94cdc10151 Initial work on BLOCK_SUMMARIES_V2, part of a bigger arc to improve synchronization.
Touches quite a few files because:

* Deprecate HEIGHT_V2 because it doesn't contain enough info to be fully useful during sync.
Newer peers will re-use BLOCK_SUMMARIES_V2.

* For newer peers, instead of sending / broadcasting HEIGHT_V2,
send top N block summaries instead, to avoid requests for minor reorgs.

* When responding to GET_BLOCK, and we don't actually have the requested block,
we currently send an empty BLOCK_SUMMARIES message instead of not responding,
which would cause a slow timeout in Synchronizer.

This pattern has spread to other network message response code,
so now we introduce a generic 'unknown' message type for all these cases.

* Remove PeerChainTipData class entirely and re-use BlockSummaryData instead.

* Each Peer instance used to hold PeerChainTipData - essentially a single latest block summary - but now holds a List of latest block summaries.

* PeerChainTipData getter/setter methods modified for compatibility at this point in time.

* Repository methods that return BlockSummaryData (or lists of) now try to fully populate them,
including newly added block reference field.

* Re-worked Peer.canUseCommonBlockData() to be more readable

* Cherry-picked patch to Message.fromByteBuffer() to pass an empty, read-only ByteBuffer to subclass fromByteBuffer() methods, instead of null.
This allows natural use of BufferUnderflowException if a subclass tries to use read(), or hasRemaining(), etc. from an empty data-payload message.
Previously this could have caused an NPE.
2022-09-24 13:48:01 +01:00
CalDescent
863a5eff97 Moved various online accounts logs to TRACE level, to make it easier to monitor the queue processing when in DEBUG. 2022-09-24 13:11:28 +01:00
CalDescent
5b81b30974 Modified online accounts request interval, and introduced bursting.
It will now request online accounts every 1 minute instead of every 5 seconds, except for the first 5 minutes following a new online accounts timestamp, in which it will request every 5 seconds (referred to as the "burst" interval). It will also use the burst interval for the first 5 minutes after the node starts.

This is based on the idea that most online accounts arrive soon after a new timestamp begins, and so there is no need to request accounts so frequently after that. This should reduce data usage by a significant amount.

Once mempow is fully rolled out, the "burst" feature can be reduced or removed, since online accounts will be sent ahead of time, generally 15-30 mins prior to the new online accounts timestamp becoming active.
2022-09-24 13:02:27 +01:00
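A rough sketch of the interval selection described above, with the constants from the commit message treated as assumptions:
```
public class RequestIntervalSketch {
    private static final long BURST_INTERVAL_MS = 5 * 1000L;      // every 5 seconds
    private static final long NORMAL_INTERVAL_MS = 60 * 1000L;    // every 1 minute
    private static final long BURST_WINDOW_MS = 5 * 60 * 1000L;   // first 5 minutes

    public static long nextRequestInterval(long now, long onlineTimestampStart, long nodeStartTime) {
        boolean newTimestampBurst = now - onlineTimestampStart < BURST_WINDOW_MS;
        boolean startupBurst = now - nodeStartTime < BURST_WINDOW_MS;
        return (newTimestampBurst || startupBurst) ? BURST_INTERVAL_MS : NORMAL_INTERVAL_MS;
    }
}
```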
CalDescent
174a779e4c Add accounts from the import queue individually, and then skip future duplicates before unnecessarily validating them again.
This closes a gap where accounts would be moved from onlineAccountsImportQueue to onlineAccountsToAdd, but not yet imported. During this time, there was nothing to stop them from being added to the import queue again, causing duplicate validations.
2022-09-24 10:56:52 +01:00
CalDescent
c7cf33ef78 Set hasOurOnlineAccounts to true if one of our accounts is found before signing. 2022-09-24 10:23:55 +01:00
CalDescent
ea4f4d949b When validating online accounts, enforce mempow if the online account's timestamp is after the feature trigger. 2022-09-23 19:45:59 +01:00
CalDescent
6d9e6e8d4c Allow duplicate variations of each OnlineAccountData in the import queue, but don't allow two entries that match exactly. 2022-09-23 18:46:01 +01:00
CalDescent
99858f3781 Wait 30 seconds after the node starts before computing our online accounts.
This allows some time for initial online account lists to be retrieved, and reduces the chances of the same nonce being computed twice.
2022-09-23 18:28:41 +01:00
CalDescent
84a16157d1 Don't add online accounts to the import queue if they are already validated 2022-09-23 18:02:46 +01:00
CalDescent
49d83650f4 Removed online accounts V2 and V1 messaging, as the V3 format will soon be required due to the nonce values. 2022-09-23 15:25:44 +01:00
CalDescent
951c85faf1 Fixed bug causing error 500 in some cases. 2022-09-20 22:26:30 +01:00
CalDescent
84d42b93e1 Reordered code in Block.mint() to fix potential issue after mempow activates. 2022-09-20 08:50:37 +01:00
CalDescent
93fd80e289 Require that add/remove admin transactions can only be created by group members.
For regular groups, we require that the owner adds/removes the admins, so group membership is adequately checked. However, for null-owned groups this check is skipped, so we need an additional condition to prevent non-group members from issuing a transaction for approval by the group admins.
2022-09-19 16:34:31 +01:00
CalDescent
5581b83c57 Added initial admin approval features for groups owned by the null account.
The dev group (ID 1) is owned by the null account with public key 11111111111111111111111111111111. To regain access to otherwise blocked owner-based rules, it has different validation logic which applies to groups with this same null owner.

The main difference is that approval is required for certain transaction types relating to null-owned groups. This allows existing admins to approve updates to the group (using the group's approval threshold) instead of these actions being performed by the owner.

Since these apply to all null-owned groups, this allows anyone to update their group to the null owner if they want to take advantage of this decentralized approval system.

Currently, the affected transaction types are:
- AddGroupAdminTransaction
- RemoveGroupAdminTransaction

This same approach could ultimately be applied to other group transactions too.
2022-09-19 11:03:06 +01:00
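A hedged sketch condensing the conditions described in this commit and the one above it (hypothetical names; the real checks live in the group transaction validators):
```
import java.util.HashSet;
import java.util.Set;

public class NullOwnedGroupSketch {
    static final String NULL_OWNER_PUBLIC_KEY = "11111111111111111111111111111111";

    static class Group {
        String ownerPublicKey;
        String ownerAddress;
        Set<String> members = new HashSet<>();
    }

    // Hypothetical condensation of the rules for ADD/REMOVE_GROUP_ADMIN transactions.
    public static boolean isValid(Group group, String creatorAddress, boolean hasGroupApproval) {
        if (!NULL_OWNER_PUBLIC_KEY.equals(group.ownerPublicKey)) {
            // Regular groups: only the owner may add or remove admins.
            return creatorAddress.equals(group.ownerAddress);
        }

        // Null-owned groups: the creator must at least be a group member,
        // and the transaction needs approval at the group's threshold.
        return group.members.contains(creatorAddress) && hasGroupApproval;
    }
}
```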
CalDescent
73396490ba Set walletsPath and listsPath to AppData folder for new Windows installs. 2022-08-27 19:44:31 +01:00
77 changed files with 1474 additions and 610 deletions

View File

@@ -52,14 +52,13 @@
## Single-node testnet
A single-node testnet is possible with code modifications, for basic testing, or to more easily start a new testnet.
To do so, follow these steps:
- Comment out the `if (mintedLastBlock) { }` conditional in BlockMinter.java
- Comment out the `minBlockchainPeers` validation in Settings.validate()
- Set `minBlockchainPeers` to 0 in settings.json
- Set `Synchronizer.RECOVERY_MODE_TIMEOUT` to `0`
- All other steps should remain the same. Only a single reward share key is needed.
- Remember to put these values back after introducing other nodes
A single-node testnet is possible with an additional setting, or to more easily start a new testnet.
Just add this setting:
```
"singleNodeTestnet": true
```
This will automatically allow multiple consecutive blocks to be minted, as well as setting minBlockchainPeers to 0.
Remember to put these values back after introducing other nodes
## Fixed network
@@ -93,3 +92,32 @@ Your options are:
- `qort` tool, but prepend with one-time shell variable: `BASE_URL=some-node-hostname-or-ip:port qort ......`
- `peer-heights`, but use `-t` option, or `BASE_URL` shell variable as above
## Example settings-test.json
```
{
"isTestNet": true,
"bitcoinNet": "TEST3",
"repositoryPath": "db-testnet",
"blockchainConfig": "testchain.json",
"minBlockchainPeers": 1,
"apiDocumentationEnabled": true,
"apiRestricted": false,
"bootstrap": false,
"maxPeerConnectionTime": 999999999,
"localAuthBypassEnabled": true,
"singleNodeTestnet": true,
"recoveryModeTimeout": 0
}
```
## Quick start
Here are some steps to quickly get a single node testnet up and running with a generic minting account:
1. Start with template `settings-test.json`, and create a `testchain.json` based on mainnet's blockchain.json (or obtain one from Qortal developers). These should be in the same directory as the jar.
2. Make sure feature triggers and other timestamp/height activations are correctly set. Generally these would be `0` so that they are enabled from the start.
3. Set a recent genesis `timestamp` in testchain.json, and add this reward share entry:
`{ "type": "REWARD_SHARE", "minterPublicKey": "DwcUnhxjamqppgfXCLgbYRx8H9XFPUc2qYRy3CEvQWEw", "recipient": "QbTDMss7NtRxxQaSqBZtSLSNdSYgvGaqFf", "rewardSharePublicKey": "CRvQXxFfUMfr4q3o1PcUZPA4aPCiubBsXkk47GzRo754", "sharePercent": 0 },`
4. Start the node, passing in settings-test.json, e.g: `java -jar qortal.jar settings-test.json`
5. Once started, add the corresponding minting key to the node:
`curl -X POST "http://localhost:62391/admin/mintingaccounts" -d "F48mYJycFgRdqtc58kiovwbcJgVukjzRE4qRRtRsK9ix"`
6. Alternatively you can use your own minting account instead of the generic one above.
7. After a short while, blocks should be minted from the genesis timestamp until the current time.

View File

@@ -17,10 +17,10 @@
<ROW Property="Manufacturer" Value="Qortal"/>
<ROW Property="MsiLogging" MultiBuildValue="DefaultBuild:vp"/>
<ROW Property="NTP_GOOD" Value="false"/>
<ROW Property="ProductCode" Value="1033:{E5597539-098E-4BA6-99DF-4D22018BC0D3} 1049:{2B5E55A2-142A-4BED-B3B9-5657162282B7} 2052:{6F19171F-4743-4127-B191-AAFA3FA885D2} 2057:{A1B3108D-EC5D-47A1-AEE4-DBD956E682FB} " Type="16"/>
<ROW Property="ProductCode" Value="1033:{ADE0C9E9-F7D9-4829-8626-8571C735C4D7} 1049:{F5230C0A-9D8C-4C70-AC72-17CECC8273B8} 2052:{D5A0760C-E5B3-4C4C-97B0-81CC445F07B9} 2057:{EF5EF0BE-0B00-4F5C-A2A0-DF2CB82FF20D} " Type="16"/>
<ROW Property="ProductLanguage" Value="2057"/>
<ROW Property="ProductName" Value="Qortal"/>
<ROW Property="ProductVersion" Value="3.4.3" Type="32"/>
<ROW Property="ProductVersion" Value="3.6.3" Type="32"/>
<ROW Property="RECONFIG_NTP" Value="true"/>
<ROW Property="REMOVE_BLOCKCHAIN" Value="YES" Type="4"/>
<ROW Property="REPAIR_BLOCKCHAIN" Value="YES" Type="4"/>
@@ -212,7 +212,7 @@
<ROW Component="ADDITIONAL_LICENSE_INFO_71" ComponentId="{12A3ADBE-BB7A-496C-8869-410681E6232F}" Directory_="jdk.zipfs_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_71" Type="0"/>
<ROW Component="ADDITIONAL_LICENSE_INFO_8" ComponentId="{D53AD95E-CF96-4999-80FC-5812277A7456}" Directory_="java.naming_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_8" Type="0"/>
<ROW Component="ADDITIONAL_LICENSE_INFO_9" ComponentId="{6B7EA9B0-5D17-47A8-B78C-FACE86D15E01}" Directory_="java.net.http_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_9" Type="0"/>
<ROW Component="AI_CustomARPName" ComponentId="{F17029E8-CCC4-456D-B4AC-1854C81C46B6}" Directory_="APPDIR" Attributes="260" KeyPath="DisplayName" Options="1"/>
<ROW Component="AI_CustomARPName" ComponentId="{F4F774B9-18DC-4740-9552-EA16B98801C9}" Directory_="APPDIR" Attributes="260" KeyPath="DisplayName" Options="1"/>
<ROW Component="AI_ExePath" ComponentId="{3644948D-AE0B-41BB-9FAF-A79E70490A08}" Directory_="APPDIR" Attributes="260" KeyPath="AI_ExePath"/>
<ROW Component="APPDIR" ComponentId="{680DFDDE-3FB4-47A5-8FF5-934F576C6F91}" Directory_="APPDIR" Attributes="0"/>
<ROW Component="AccessBridgeCallbacks.h" ComponentId="{288055D1-1062-47A3-AA44-5601B4E38AED}" Directory_="bridge_Dir" Attributes="0" KeyPath="AccessBridgeCallbacks.h" Type="0"/>
@@ -1173,7 +1173,7 @@
<ROW Action="AI_STORE_LOCATION" Type="51" Source="ARPINSTALLLOCATION" Target="[APPDIR]"/>
<ROW Action="AI_SetPermissions" Type="11265" Source="userAccounts.dll" Target="OnSetPermissions" WithoutSeq="true"/>
<ROW Action="CustomizeLog4j2PropertiesScript" Type="3109" Target="Script Text" TargetUnformatted="var actionData = Session.Property(&quot;CustomActionData&quot;);&#13;&#10;var actionDataArray = actionData.split(&quot;|&quot;);&#13;&#10;var appDir = actionDataArray[0];&#13;&#10;var dataFolder = actionDataArray[1] + actionDataArray[2] + &quot;\\&quot;;&#13;&#10;&#13;&#10;var ForReading = 1, ForWriting = 2, ForAppending = 8;&#13;&#10;var fso = new ActiveXObject(&quot;Scripting.FileSystemObject&quot;);&#13;&#10;&#13;&#10;// Make copy&#13;&#10;fso.CopyFile(appDir + &quot;log4j2.properties&quot;, appDir + &quot;log4j2-orig.properties&quot;, true); // overwrite&#13;&#10;&#13;&#10;// Rewrite %AppDir%\log4j2.properties to update logfile storage path&#13;&#10;var fin = fso.OpenTextFile(appDir + &quot;log4j2-orig.properties&quot;, ForReading, false); // no create&#13;&#10;var fout = fso.OpenTextFile(appDir + &quot;log4j2.properties&quot;, ForWriting, true); // can create&#13;&#10;&#13;&#10;// Copy lines with rewriting where necessary&#13;&#10;while( !fin.AtEndOfStream ) {&#13;&#10;&#9;var line = fin.ReadLine();&#13;&#10;&#13;&#10;&#9;var start = line.indexOf(&quot;property.dirname&quot;);&#13;&#10;&#9;if (start &gt; 0) {&#13;&#10;&#9;&#9;// line: # property.dirname = ...appdata...&#13;&#10;&#9;&#9;// uncomment/replace this line for Windows&#13;&#10;&#9;&#9;fout.WriteLine( &quot;property.dirname = &quot; + dataFolder.split(&apos;\\&apos;).join(&apos;\\\\&apos;) );&#13;&#10;&#9;} else {&#13;&#10;&#9;&#9;// not found - output verbatim&#13;&#10;&#9;&#9;fout.WriteLine( line );&#13;&#10;&#9;}&#13;&#10;}&#13;&#10;&#13;&#10;fin.Close();&#13;&#10;fout.Close();&#13;&#10;" AdditionalSeq="AI_DATA_SETTER_4"/>
<ROW Action="CustomizeSettingsJsonScript" Type="3109" Target="Script Text" TargetUnformatted="var actionData = Session.Property(&quot;CustomActionData&quot;);&#13;&#10;var actionDataArray = actionData.split(&quot;|&quot;);&#13;&#10;var appDir = actionDataArray[0];&#13;&#10;var dataFolder = actionDataArray[1] + actionDataArray[2] + &quot;\\&quot;;&#13;&#10;&#13;&#10;var ForReading = 1, ForWriting = 2, ForAppending = 8;&#13;&#10;var fso = new ActiveXObject(&quot;Scripting.FileSystemObject&quot;);&#13;&#10;&#13;&#10;// Create basic %APPDIR%\settings.json with path to real settings.json in dataFolder&#13;&#10;var fts = fso.OpenTextFile(appDir + &quot;settings.json&quot;, ForWriting, true);&#13;&#10;&#13;&#10;fts.WriteLine( &quot;{&quot; );&#13;&#10;// We need to escape Windows path backslashes to keep JSON valid&#13;&#10;fts.WriteLine( &quot; \&quot;userPath\&quot;: \&quot;&quot; + dataFolder.split(&apos;\\&apos;).join(&apos;\\\\&apos;) + &quot;\&quot;&quot; );&#13;&#10;fts.WriteLine( &quot;}&quot; );&#13;&#10;&#13;&#10;fts.Close();&#13;&#10;&#13;&#10;// Make copy&#13;&#10;fso.CopyFile(dataFolder + &quot;settings.json&quot;, dataFolder + &quot;settings-orig.json&quot;, true); // overwrite&#13;&#10;&#13;&#10;// Rewrite settings.json to update repository path&#13;&#10;var fin = fso.OpenTextFile(dataFolder + &quot;settings-orig.json&quot;, ForReading, false);&#13;&#10;var fout = fso.OpenTextFile(dataFolder + &quot;settings.json&quot;, ForWriting, true);&#13;&#10;&#13;&#10;// First line should contain opening brace&#13;&#10;fout.WriteLine( fin.ReadLine() );&#13;&#10;&#13;&#10;// Append our entries&#13;&#10;fout.WriteLine( &quot; \&quot;repositoryPath\&quot;: \&quot;&quot; + dataFolder.split(&apos;\\&apos;).join(&apos;\\\\&apos;) + &quot;db\&quot;,&quot; );&#13;&#10;fout.WriteLine( &quot; \&quot;dataPath\&quot;: \&quot;&quot; + dataFolder.split(&apos;\\&apos;).join(&apos;\\\\&apos;) + &quot;data\&quot;,&quot; );&#13;&#10;&#13;&#10;// copy rest of settings&#13;&#10;while( !fin.AtEndOfStream ) {&#13;&#10;&#9;fout.WriteLine( fin.ReadLine() );&#13;&#10;}&#13;&#10;&#13;&#10;fin.Close();&#13;&#10;fout.Close();&#13;&#10;" AdditionalSeq="AI_DATA_SETTER_3"/>
<ROW Action="CustomizeSettingsJsonScript" Type="3109" Target="Script Text" TargetUnformatted="var actionData = Session.Property(&quot;CustomActionData&quot;);&#13;&#10;var actionDataArray = actionData.split(&quot;|&quot;);&#13;&#10;var appDir = actionDataArray[0];&#13;&#10;var dataFolder = actionDataArray[1] + actionDataArray[2] + &quot;\\&quot;;&#13;&#10;&#13;&#10;var ForReading = 1, ForWriting = 2, ForAppending = 8;&#13;&#10;var fso = new ActiveXObject(&quot;Scripting.FileSystemObject&quot;);&#13;&#10;&#13;&#10;// Create basic %APPDIR%\settings.json with path to real settings.json in dataFolder&#13;&#10;var fts = fso.OpenTextFile(appDir + &quot;settings.json&quot;, ForWriting, true);&#13;&#10;&#13;&#10;fts.WriteLine( &quot;{&quot; );&#13;&#10;// We need to escape Windows path backslashes to keep JSON valid&#13;&#10;fts.WriteLine( &quot; \&quot;userPath\&quot;: \&quot;&quot; + dataFolder.split(&apos;\\&apos;).join(&apos;\\\\&apos;) + &quot;\&quot;&quot; );&#13;&#10;fts.WriteLine( &quot;}&quot; );&#13;&#10;&#13;&#10;fts.Close();&#13;&#10;&#13;&#10;// Make copy&#13;&#10;fso.CopyFile(dataFolder + &quot;settings.json&quot;, dataFolder + &quot;settings-orig.json&quot;, true); // overwrite&#13;&#10;&#13;&#10;// Rewrite settings.json to update repository path&#13;&#10;var fin = fso.OpenTextFile(dataFolder + &quot;settings-orig.json&quot;, ForReading, false);&#13;&#10;var fout = fso.OpenTextFile(dataFolder + &quot;settings.json&quot;, ForWriting, true);&#13;&#10;&#13;&#10;// First line should contain opening brace&#13;&#10;fout.WriteLine( fin.ReadLine() );&#13;&#10;&#13;&#10;// Append our entries&#13;&#10;fout.WriteLine( &quot; \&quot;repositoryPath\&quot;: \&quot;&quot; + dataFolder.split(&apos;\\&apos;).join(&apos;\\\\&apos;) + &quot;db\&quot;,&quot; );&#13;&#10;fout.WriteLine( &quot; \&quot;dataPath\&quot;: \&quot;&quot; + dataFolder.split(&apos;\\&apos;).join(&apos;\\\\&apos;) + &quot;data\&quot;,&quot; );&#13;&#10;fout.WriteLine( &quot; \&quot;walletsPath\&quot;: \&quot;&quot; + dataFolder.split(&apos;\\&apos;).join(&apos;\\\\&apos;) + &quot;wallets\&quot;,&quot; );&#13;&#10;fout.WriteLine( &quot; \&quot;listsPath\&quot;: \&quot;&quot; + dataFolder.split(&apos;\\&apos;).join(&apos;\\\\&apos;) + &quot;lists\&quot;,&quot; );&#13;&#10;&#13;&#10;// copy rest of settings&#13;&#10;while( !fin.AtEndOfStream ) {&#13;&#10;&#9;fout.WriteLine( fin.ReadLine() );&#13;&#10;}&#13;&#10;&#13;&#10;fin.Close();&#13;&#10;fout.Close();&#13;&#10;" AdditionalSeq="AI_DATA_SETTER_3"/>
<ROW Action="DetectRunningProcess" Type="1" Source="aicustact.dll" Target="DetectProcess" Options="3" AdditionalSeq="AI_DATA_SETTER_8"/>
<ROW Action="DetectW32Time" Type="1" Source="aicustact.dll" Target="DetectService" Options="3" AdditionalSeq="AI_DATA_SETTER_11"/>
<ROW Action="NTP_config" Type="3090" Source="ntpcfg.bat"/>

Binary file not shown.

View File

@@ -0,0 +1,9 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<modelVersion>4.0.0</modelVersion>
<groupId>org.ciyam</groupId>
<artifactId>AT</artifactId>
<version>1.4.0</version>
<description>POM was created from install:install-file</description>
</project>

View File

@@ -3,14 +3,15 @@
<groupId>org.ciyam</groupId>
<artifactId>AT</artifactId>
<versioning>
<release>1.3.8</release>
<release>1.4.0</release>
<versions>
<version>1.3.4</version>
<version>1.3.5</version>
<version>1.3.6</version>
<version>1.3.7</version>
<version>1.3.8</version>
<version>1.4.0</version>
</versions>
<lastUpdated>20200925114415</lastUpdated>
<lastUpdated>20221105114346</lastUpdated>
</versioning>
</metadata>

View File

@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.qortal</groupId>
<artifactId>qortal</artifactId>
<version>3.6.0</version>
<version>3.6.4</version>
<packaging>jar</packaging>
<properties>
<skipTests>true</skipTests>
@@ -11,7 +11,7 @@
<bitcoinj.version>0.15.10</bitcoinj.version>
<bouncycastle.version>1.69</bouncycastle.version>
<build.timestamp>${maven.build.timestamp}</build.timestamp>
<ciyam-at.version>1.3.8</ciyam-at.version>
<ciyam-at.version>1.4.0</ciyam-at.version>
<commons-net.version>3.6</commons-net.version>
<commons-text.version>1.8</commons-text.version>
<commons-io.version>2.6</commons-io.version>

View File

@@ -1,7 +1,7 @@
package org.qortal.api.model;
import io.swagger.v3.oas.annotations.media.Schema;
import org.qortal.data.network.PeerChainTipData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.network.PeerData;
import org.qortal.network.Handshake;
import org.qortal.network.Peer;
@@ -63,11 +63,11 @@ public class ConnectedPeer {
this.age = "connecting...";
}
PeerChainTipData peerChainTipData = peer.getChainTipData();
BlockSummaryData peerChainTipData = peer.getChainTipData();
if (peerChainTipData != null) {
this.lastHeight = peerChainTipData.getLastHeight();
this.lastBlockSignature = peerChainTipData.getLastBlockSignature();
this.lastBlockTimestamp = peerChainTipData.getLastBlockTimestamp();
this.lastHeight = peerChainTipData.getHeight();
this.lastBlockSignature = peerChainTipData.getSignature();
this.lastBlockTimestamp = peerChainTipData.getTimestamp();
}
}

View File

@@ -205,6 +205,10 @@ public class AddressesResource {
try (final Repository repository = RepositoryManager.getRepository()) {
List<OnlineAccountLevel> onlineAccountLevels = new ArrayList<>();
// Prepopulate all levels
for (int i=0; i<=10; i++)
onlineAccountLevels.add(new OnlineAccountLevel(i, 0));
for (OnlineAccountData onlineAccountData : onlineAccounts) {
try {
final int minterLevel = Account.getRewardShareEffectiveMintingLevelIncludingLevelZero(repository, onlineAccountData.getPublicKey());

View File

@@ -69,6 +69,7 @@ public class ChatResource {
public List<ChatMessage> searchChat(@QueryParam("before") Long before, @QueryParam("after") Long after,
@QueryParam("txGroupId") Integer txGroupId,
@QueryParam("involving") List<String> involvingAddresses,
@QueryParam("reference") String reference,
@Parameter(ref = "limit") @QueryParam("limit") Integer limit,
@Parameter(ref = "offset") @QueryParam("offset") Integer offset,
@Parameter(ref = "reverse") @QueryParam("reverse") Boolean reverse) {
@@ -87,11 +88,16 @@ public class ChatResource {
if (after != null && after < 1500000000000L)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
byte[] referenceBytes = null;
if (reference != null)
referenceBytes = Base58.decode(reference);
try (final Repository repository = RepositoryManager.getRepository()) {
return repository.getChatRepository().getMessagesMatchingCriteria(
before,
after,
txGroupId,
referenceBytes,
involvingAddresses,
limit, offset, reverse);
} catch (DataException e) {

View File

@@ -46,6 +46,7 @@ public class ChatMessagesWebSocket extends ApiWebSocket {
null,
txGroupId,
null,
null,
null, null, null);
sendMessages(session, chatMessages);
@@ -72,6 +73,7 @@ public class ChatMessagesWebSocket extends ApiWebSocket {
null,
null,
null,
null,
involvingAddresses,
null, null, null);

View File

@@ -1,16 +1,18 @@
package org.qortal.arbitrary.misc;
import org.apache.commons.io.FilenameUtils;
import org.json.JSONObject;
import org.qortal.arbitrary.ArbitraryDataRenderer;
import org.qortal.transaction.Transaction;
import org.qortal.utils.FilesystemUtils;
import java.io.File;
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import static java.util.Arrays.stream;
import static java.util.stream.Collectors.toMap;
@@ -38,6 +40,7 @@ public enum Service {
GIT_REPOSITORY(300, false, null, null),
IMAGE(400, true, 10*1024*1024L, null),
THUMBNAIL(410, true, 500*1024L, null),
QCHAT_IMAGE(420, true, 500*1024L, null),
VIDEO(500, false, null, null),
AUDIO(600, false, null, null),
BLOG(700, false, null, null),
@@ -48,7 +51,30 @@ public enum Service {
PLAYLIST(910, true, null, null),
APP(1000, false, null, null),
METADATA(1100, false, null, null),
QORTAL_METADATA(1111, true, 10*1024L, Arrays.asList("title", "description", "tags"));
GIF_REPOSITORY(1200, true, 25*1024*1024L, null) {
@Override
public ValidationResult validate(Path path) {
// Custom validation function to require .gif files only, and at least 1
int gifCount = 0;
File[] files = path.toFile().listFiles();
if (files != null) {
for (File file : files) {
if (file.isDirectory()) {
return ValidationResult.DIRECTORIES_NOT_ALLOWED;
}
String extension = FilenameUtils.getExtension(file.getName()).toLowerCase();
if (!Objects.equals(extension, "gif")) {
return ValidationResult.INVALID_FILE_EXTENSION;
}
gifCount++;
}
}
if (gifCount == 0) {
return ValidationResult.MISSING_DATA;
}
return ValidationResult.OK;
}
};
public final int value;
private final boolean requiresValidation;
@@ -114,7 +140,10 @@ public enum Service {
OK(1),
MISSING_KEYS(2),
EXCEEDS_SIZE_LIMIT(3),
MISSING_INDEX_FILE(4);
MISSING_INDEX_FILE(4),
DIRECTORIES_NOT_ALLOWED(5),
INVALID_FILE_EXTENSION(6),
MISSING_DATA(7);
public final int value;

View File

@@ -366,18 +366,14 @@ public class Block {
long timestamp = calcTimestamp(parentBlockData, minter.getPublicKey(), minterLevel);
long onlineAccountsTimestamp = OnlineAccountsManager.getCurrentOnlineAccountTimestamp();
// Fetch our list of online accounts
// Fetch our list of online accounts, removing any that are missing a nonce
List<OnlineAccountData> onlineAccounts = OnlineAccountsManager.getInstance().getOnlineAccounts(onlineAccountsTimestamp);
onlineAccounts.removeIf(a -> a.getNonce() == null || a.getNonce() < 0);
if (onlineAccounts.isEmpty()) {
LOGGER.error("No online accounts - not even our own?");
LOGGER.debug("No online accounts - not even our own?");
return null;
}
// If mempow is active, remove any legacy accounts that are missing a nonce
if (timestamp >= BlockChain.getInstance().getOnlineAccountsMemoryPoWTimestamp()) {
onlineAccounts.removeIf(a -> a.getNonce() == null || a.getNonce() < 0);
}
// Load sorted list of reward share public keys into memory, so that the indexes can be obtained.
// This is up to 100x faster than querying each index separately. For 4150 reward share keys, it
// was taking around 5000ms to query individually, vs 50ms using this approach.
@@ -411,29 +407,27 @@ public class Block {
// Aggregated, single signature
byte[] onlineAccountsSignatures = Qortal25519Extras.aggregateSignatures(signaturesToAggregate);
// Add nonces to the end of the online accounts signatures if mempow is active
if (timestamp >= BlockChain.getInstance().getOnlineAccountsMemoryPoWTimestamp()) {
try {
// Create ordered list of nonce values
List<Integer> nonces = new ArrayList<>();
for (int i = 0; i < onlineAccountsCount; ++i) {
Integer accountIndex = accountIndexes.get(i);
OnlineAccountData onlineAccountData = indexedOnlineAccounts.get(accountIndex);
nonces.add(onlineAccountData.getNonce());
}
// Encode the nonces to a byte array
byte[] encodedNonces = BlockTransformer.encodeOnlineAccountNonces(nonces);
// Append the encoded nonces to the encoded online account signatures
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
outputStream.write(onlineAccountsSignatures);
outputStream.write(encodedNonces);
onlineAccountsSignatures = outputStream.toByteArray();
}
catch (TransformationException | IOException e) {
return null;
// Add nonces to the end of the online accounts signatures
try {
// Create ordered list of nonce values
List<Integer> nonces = new ArrayList<>();
for (int i = 0; i < onlineAccountsCount; ++i) {
Integer accountIndex = accountIndexes.get(i);
OnlineAccountData onlineAccountData = indexedOnlineAccounts.get(accountIndex);
nonces.add(onlineAccountData.getNonce());
}
// Encode the nonces to a byte array
byte[] encodedNonces = BlockTransformer.encodeOnlineAccountNonces(nonces);
// Append the encoded nonces to the encoded online account signatures
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
outputStream.write(onlineAccountsSignatures);
outputStream.write(encodedNonces);
onlineAccountsSignatures = outputStream.toByteArray();
}
catch (TransformationException | IOException e) {
return null;
}
byte[] minterSignature = minter.sign(BlockTransformer.getBytesForMinterSignature(parentBlockData,
@@ -1046,14 +1040,9 @@ public class Block {
final int signaturesLength = Transformer.SIGNATURE_LENGTH;
final int noncesLength = onlineRewardShares.size() * Transformer.INT_LENGTH;
if (this.blockData.getTimestamp() >= BlockChain.getInstance().getOnlineAccountsMemoryPoWTimestamp()) {
// We expect nonces to be appended to the online accounts signatures
if (this.blockData.getOnlineAccountsSignatures().length != signaturesLength + noncesLength)
return ValidationResult.ONLINE_ACCOUNT_SIGNATURES_MALFORMED;
} else {
if (this.blockData.getOnlineAccountsSignatures().length != signaturesLength)
return ValidationResult.ONLINE_ACCOUNT_SIGNATURES_MALFORMED;
}
// We expect nonces to be appended to the online accounts signatures
if (this.blockData.getOnlineAccountsSignatures().length != signaturesLength + noncesLength)
return ValidationResult.ONLINE_ACCOUNT_SIGNATURES_MALFORMED;
// Check signatures
long onlineTimestamp = this.blockData.getOnlineAccountsTimestamp();
@@ -1062,32 +1051,33 @@ public class Block {
byte[] encodedOnlineAccountSignatures = this.blockData.getOnlineAccountsSignatures();
// Split online account signatures into signature(s) + nonces, then validate the nonces
if (this.blockData.getTimestamp() >= BlockChain.getInstance().getOnlineAccountsMemoryPoWTimestamp()) {
byte[] extractedSignatures = BlockTransformer.extract(encodedOnlineAccountSignatures, 0, signaturesLength);
byte[] extractedNonces = BlockTransformer.extract(encodedOnlineAccountSignatures, signaturesLength, onlineRewardShares.size() * Transformer.INT_LENGTH);
encodedOnlineAccountSignatures = extractedSignatures;
byte[] extractedSignatures = BlockTransformer.extract(encodedOnlineAccountSignatures, 0, signaturesLength);
byte[] extractedNonces = BlockTransformer.extract(encodedOnlineAccountSignatures, signaturesLength, onlineRewardShares.size() * Transformer.INT_LENGTH);
encodedOnlineAccountSignatures = extractedSignatures;
List<Integer> nonces = BlockTransformer.decodeOnlineAccountNonces(extractedNonces);
List<Integer> nonces = BlockTransformer.decodeOnlineAccountNonces(extractedNonces);
// Build block's view of online accounts (without signatures, as we don't need them here)
Set<OnlineAccountData> onlineAccounts = new HashSet<>();
for (int i = 0; i < onlineRewardShares.size(); ++i) {
Integer nonce = nonces.get(i);
byte[] publicKey = onlineRewardShares.get(i).getRewardSharePublicKey();
// Build block's view of online accounts (without signatures, as we don't need them here)
Set<OnlineAccountData> onlineAccounts = new HashSet<>();
for (int i = 0; i < onlineRewardShares.size(); ++i) {
Integer nonce = nonces.get(i);
byte[] publicKey = onlineRewardShares.get(i).getRewardSharePublicKey();
OnlineAccountData onlineAccountData = new OnlineAccountData(onlineTimestamp, null, publicKey, nonce);
onlineAccounts.add(onlineAccountData);
}
// Remove those already validated & cached by online accounts manager - no need to re-validate them
OnlineAccountsManager.getInstance().removeKnown(onlineAccounts, onlineTimestamp);
// Validate the rest
for (OnlineAccountData onlineAccount : onlineAccounts)
if (!OnlineAccountsManager.getInstance().verifyMemoryPoW(onlineAccount, this.blockData.getTimestamp()))
return ValidationResult.ONLINE_ACCOUNT_NONCE_INCORRECT;
OnlineAccountData onlineAccountData = new OnlineAccountData(onlineTimestamp, null, publicKey, nonce);
onlineAccounts.add(onlineAccountData);
}
// Remove those already validated & cached by online accounts manager - no need to re-validate them
OnlineAccountsManager.getInstance().removeKnown(onlineAccounts, onlineTimestamp);
// Validate the rest
for (OnlineAccountData onlineAccount : onlineAccounts)
if (!OnlineAccountsManager.getInstance().verifyMemoryPoW(onlineAccount, null))
return ValidationResult.ONLINE_ACCOUNT_NONCE_INCORRECT;
// Cache the valid online accounts as they will likely be needed for the next block
OnlineAccountsManager.getInstance().addBlocksOnlineAccounts(onlineAccounts, onlineTimestamp);
// Extract online accounts' timestamp signatures from block data. Only one signature if aggregated.
List<byte[]> onlineAccountsSignatures = BlockTransformer.decodeTimestampSignatures(encodedOnlineAccountSignatures);

View File

@@ -73,7 +73,8 @@ public class BlockChain {
calcChainWeightTimestamp,
transactionV5Timestamp,
transactionV6Timestamp,
disableReferenceTimestamp;
disableReferenceTimestamp,
increaseOnlineAccountsDifficultyTimestamp;
}
// Custom transaction fees
@@ -195,10 +196,6 @@ public class BlockChain {
* featureTriggers because unit tests need to set this value via Reflection. */
private long onlineAccountsModulusV2Timestamp;
/** Feature trigger timestamp for online accounts mempow verification. Can't use featureTriggers
* because unit tests need to set this value via Reflection. */
private long onlineAccountsMemoryPoWTimestamp;
/** Max reward shares by block height */
public static class MaxRewardSharesByTimestamp {
public long timestamp;
@@ -359,10 +356,6 @@ public class BlockChain {
return this.onlineAccountsModulusV2Timestamp;
}
public long getOnlineAccountsMemoryPoWTimestamp() {
return this.onlineAccountsMemoryPoWTimestamp;
}
/** Returns true if approval-needing transaction types require a txGroupId other than NO_GROUP. */
public boolean getRequireGroupForApproval() {
return this.requireGroupForApproval;
@@ -486,6 +479,10 @@ public class BlockChain {
return this.featureTriggers.get(FeatureTrigger.disableReferenceTimestamp.name()).longValue();
}
public long getIncreaseOnlineAccountsDifficultyTimestamp() {
return this.featureTriggers.get(FeatureTrigger.increaseOnlineAccountsDifficultyTimestamp.name()).longValue();
}
// More complex getters for aspects that change by height or timestamp

View File

@@ -26,6 +26,9 @@ import org.qortal.data.block.CommonBlockData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.network.message.BlockSummariesV2Message;
import org.qortal.network.message.HeightV2Message;
import org.qortal.network.message.Message;
import org.qortal.repository.BlockRepository;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
@@ -90,6 +93,8 @@ public class BlockMinter extends Thread {
List<Block> newBlocks = new ArrayList<>();
final boolean isSingleNodeTestnet = Settings.getInstance().isSingleNodeTestnet();
try (final Repository repository = RepositoryManager.getRepository()) {
// Going to need this a lot...
BlockRepository blockRepository = repository.getBlockRepository();
@@ -108,8 +113,9 @@ public class BlockMinter extends Thread {
// Free up any repository locks
repository.discardChanges();
// Sleep for a while
Thread.sleep(1000);
// Sleep for a while.
// It's faster on single node testnets, to allow lots of blocks to be minted quickly.
Thread.sleep(isSingleNodeTestnet ? 50 : 1000);
isMintingPossible = false;
@@ -220,9 +226,10 @@ public class BlockMinter extends Thread {
List<PrivateKeyAccount> newBlocksMintingAccounts = mintingAccountsData.stream().map(accountData -> new PrivateKeyAccount(repository, accountData.getPrivateKey())).collect(Collectors.toList());
// We might need to sit the next block out, if one of our minting accounts signed the previous one
// Skip this check for single node testnets, since they definitely need to mint every block
byte[] previousBlockMinter = previousBlockData.getMinterPublicKey();
boolean mintedLastBlock = mintingAccountsData.stream().anyMatch(mintingAccount -> Arrays.equals(mintingAccount.getPublicKey(), previousBlockMinter));
if (mintedLastBlock) {
if (mintedLastBlock && !isSingleNodeTestnet) {
LOGGER.trace(String.format("One of our keys signed the last block, so we won't sign the next one"));
continue;
}
@@ -241,7 +248,7 @@ public class BlockMinter extends Thread {
Block newBlock = Block.mint(repository, previousBlockData, mintingAccount);
if (newBlock == null) {
// For some reason we can't mint right now
moderatedLog(() -> LOGGER.error("Couldn't build a to-be-minted block"));
moderatedLog(() -> LOGGER.info("Couldn't build a to-be-minted block"));
continue;
}
@@ -433,11 +440,9 @@ public class BlockMinter extends Thread {
if (newBlockMinted) {
// Broadcast our new chain to network
BlockData newBlockData = newBlock.getBlockData();
Network network = Network.getInstance();
network.broadcast(broadcastPeer -> network.buildHeightMessage(broadcastPeer, newBlockData));
Network.getInstance().broadcastOurChain();
}
} catch (InterruptedException e) {
// We've been interrupted - time to exit
return;

View File

@@ -45,7 +45,6 @@ import org.qortal.data.account.AccountData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.naming.NameData;
import org.qortal.data.network.PeerChainTipData;
import org.qortal.data.network.PeerData;
import org.qortal.data.transaction.ChatTransactionData;
import org.qortal.data.transaction.TransactionData;
@@ -317,6 +316,10 @@ public class Controller extends Thread {
}
}
public static long uptime() {
return System.currentTimeMillis() - Controller.startTime;
}
/** Returns highest block, or null if it's not available. */
public BlockData getChainTip() {
synchronized (this.latestBlocks) {
@@ -727,25 +730,25 @@ public class Controller extends Thread {
public static final Predicate<Peer> hasNoRecentBlock = peer -> {
final Long minLatestBlockTimestamp = getMinimumLatestBlockTimestamp();
final PeerChainTipData peerChainTipData = peer.getChainTipData();
return peerChainTipData == null || peerChainTipData.getLastBlockTimestamp() == null || peerChainTipData.getLastBlockTimestamp() < minLatestBlockTimestamp;
final BlockSummaryData peerChainTipData = peer.getChainTipData();
return peerChainTipData == null || peerChainTipData.getTimestamp() == null || peerChainTipData.getTimestamp() < minLatestBlockTimestamp;
};
public static final Predicate<Peer> hasNoOrSameBlock = peer -> {
final BlockData latestBlockData = getInstance().getChainTip();
final PeerChainTipData peerChainTipData = peer.getChainTipData();
return peerChainTipData == null || peerChainTipData.getLastBlockSignature() == null || Arrays.equals(latestBlockData.getSignature(), peerChainTipData.getLastBlockSignature());
final BlockSummaryData peerChainTipData = peer.getChainTipData();
return peerChainTipData == null || peerChainTipData.getSignature() == null || Arrays.equals(latestBlockData.getSignature(), peerChainTipData.getSignature());
};
public static final Predicate<Peer> hasOnlyGenesisBlock = peer -> {
final PeerChainTipData peerChainTipData = peer.getChainTipData();
return peerChainTipData == null || peerChainTipData.getLastHeight() == null || peerChainTipData.getLastHeight() == 1;
final BlockSummaryData peerChainTipData = peer.getChainTipData();
return peerChainTipData == null || peerChainTipData.getHeight() == 1;
};
public static final Predicate<Peer> hasInferiorChainTip = peer -> {
final PeerChainTipData peerChainTipData = peer.getChainTipData();
final BlockSummaryData peerChainTipData = peer.getChainTipData();
final List<ByteArray> inferiorChainTips = Synchronizer.getInstance().inferiorChainSignatures;
return peerChainTipData == null || peerChainTipData.getLastBlockSignature() == null || inferiorChainTips.contains(ByteArray.wrap(peerChainTipData.getLastBlockSignature()));
return peerChainTipData == null || peerChainTipData.getSignature() == null || inferiorChainTips.contains(ByteArray.wrap(peerChainTipData.getSignature()));
};
public static final Predicate<Peer> hasOldVersion = peer -> {
@@ -835,6 +838,12 @@ public class Controller extends Thread {
String tooltip = String.format("%s - %d %s", actionText, numberOfPeers, connectionsText);
if (!Settings.getInstance().isLite()) {
tooltip = tooltip.concat(String.format(" - %s %d", heightText, height));
final Integer blocksRemaining = Synchronizer.getInstance().getBlocksRemaining();
if (blocksRemaining != null && blocksRemaining > 0) {
String blocksRemainingText = Translator.INSTANCE.translate("SysTray", "BLOCKS_REMAINING");
tooltip = tooltip.concat(String.format(" - %d %s", blocksRemaining, blocksRemainingText));
}
}
tooltip = tooltip.concat(String.format("\n%s: %s", Translator.INSTANCE.translate("SysTray", "BUILD_VERSION"), this.buildVersion));
SysTray.getInstance().setToolTipText(tooltip);
@@ -1007,8 +1016,7 @@ public class Controller extends Thread {
network.broadcast(peer -> peer.isOutbound() ? network.buildPeersMessage(peer) : new GetPeersMessage());
// Send our current height
BlockData latestBlockData = getChainTip();
network.broadcast(peer -> network.buildHeightMessage(peer, latestBlockData));
network.broadcastOurChain();
// Request unconfirmed transaction signatures, but only if we're up-to-date.
// If we're NOT up-to-date then priority is synchronizing first
@@ -1215,6 +1223,10 @@ public class Controller extends Thread {
onNetworkHeightV2Message(peer, message);
break;
case BLOCK_SUMMARIES_V2:
onNetworkBlockSummariesV2Message(peer, message);
break;
case GET_TRANSACTION:
TransactionImporter.getInstance().onNetworkGetTransactionMessage(peer, message);
break;
@@ -1232,19 +1244,10 @@ public class Controller extends Thread {
break;
case GET_ONLINE_ACCOUNTS:
OnlineAccountsManager.getInstance().onNetworkGetOnlineAccountsMessage(peer, message);
break;
case ONLINE_ACCOUNTS:
OnlineAccountsManager.getInstance().onNetworkOnlineAccountsMessage(peer, message);
break;
case GET_ONLINE_ACCOUNTS_V2:
OnlineAccountsManager.getInstance().onNetworkGetOnlineAccountsV2Message(peer, message);
break;
case ONLINE_ACCOUNTS_V2:
OnlineAccountsManager.getInstance().onNetworkOnlineAccountsV2Message(peer, message);
// No longer supported - to be eventually removed
break;
case GET_ONLINE_ACCOUNTS_V3:
@@ -1378,8 +1381,10 @@ public class Controller extends Thread {
// Send valid, yet unexpected message type in response, so peer's synchronizer doesn't have to wait for timeout
LOGGER.debug(() -> String.format("Sending 'block unknown' response to peer %s for GET_BLOCK request for unknown block %s", peer, Base58.encode(signature)));
// We'll send empty block summaries message as it's very short
Message blockUnknownMessage = new BlockSummariesMessage(Collections.emptyList());
// Send generic 'unknown' message as it's very short
Message blockUnknownMessage = peer.getPeersVersion() >= GenericUnknownMessage.MINIMUM_PEER_VERSION
? new GenericUnknownMessage()
: new BlockSummariesMessage(Collections.emptyList());
blockUnknownMessage.setId(message.getId());
if (!peer.sendMessage(blockUnknownMessage))
peer.disconnect("failed to send block-unknown response");
@@ -1428,11 +1433,15 @@ public class Controller extends Thread {
this.stats.getBlockSummariesStats.requests.incrementAndGet();
// If peer's parent signature matches our latest block signature
// then we can short-circuit with an empty response
// then we have no blocks after that and can short-circuit with an empty response
BlockData chainTip = getChainTip();
if (chainTip != null && Arrays.equals(parentSignature, chainTip.getSignature())) {
Message blockSummariesMessage = new BlockSummariesMessage(Collections.emptyList());
Message blockSummariesMessage = peer.getPeersVersion() >= BlockSummariesV2Message.MINIMUM_PEER_VERSION
? new BlockSummariesV2Message(Collections.emptyList())
: new BlockSummariesMessage(Collections.emptyList());
blockSummariesMessage.setId(message.getId());
if (!peer.sendMessage(blockSummariesMessage))
peer.disconnect("failed to send block summaries");
@@ -1488,7 +1497,9 @@ public class Controller extends Thread {
this.stats.getBlockSummariesStats.fullyFromCache.incrementAndGet();
}
Message blockSummariesMessage = new BlockSummariesMessage(blockSummaries);
Message blockSummariesMessage = peer.getPeersVersion() >= BlockSummariesV2Message.MINIMUM_PEER_VERSION
? new BlockSummariesV2Message(blockSummaries)
: new BlockSummariesMessage(blockSummaries);
blockSummariesMessage.setId(message.getId());
if (!peer.sendMessage(blockSummariesMessage))
peer.disconnect("failed to send block summaries");
@@ -1563,18 +1574,59 @@ public class Controller extends Thread {
// If peer is inbound and we've not updated their height
// then this is probably their initial HEIGHT_V2 message
// so they need a corresponding HEIGHT_V2 message from us
if (!peer.isOutbound() && (peer.getChainTipData() == null || peer.getChainTipData().getLastHeight() == null))
peer.sendMessage(Network.getInstance().buildHeightMessage(peer, getChainTip()));
if (!peer.isOutbound() && peer.getChainTipData() == null) {
Message responseMessage = Network.getInstance().buildHeightOrChainTipInfo(peer);
if (responseMessage == null || !peer.sendMessage(responseMessage)) {
peer.disconnect("failed to send our chain tip info");
return;
}
}
}
// Update peer chain tip data
PeerChainTipData newChainTipData = new PeerChainTipData(heightV2Message.getHeight(), heightV2Message.getSignature(), heightV2Message.getTimestamp(), heightV2Message.getMinterPublicKey());
BlockSummaryData newChainTipData = new BlockSummaryData(heightV2Message.getHeight(), heightV2Message.getSignature(), heightV2Message.getMinterPublicKey(), heightV2Message.getTimestamp());
peer.setChainTipData(newChainTipData);
// Potentially synchronize
Synchronizer.getInstance().requestSync();
}
private void onNetworkBlockSummariesV2Message(Peer peer, Message message) {
BlockSummariesV2Message blockSummariesV2Message = (BlockSummariesV2Message) message;
if (!Settings.getInstance().isLite()) {
// If peer is inbound and we've not updated their height
// then this is probably their initial BLOCK_SUMMARIES_V2 message
// so they need a corresponding BLOCK_SUMMARIES_V2 message from us
if (!peer.isOutbound() && peer.getChainTipData() == null) {
Message responseMessage = Network.getInstance().buildHeightOrChainTipInfo(peer);
if (responseMessage == null || !peer.sendMessage(responseMessage)) {
peer.disconnect("failed to send our chain tip info");
return;
}
}
}
if (message.hasId()) {
/*
* Experimental proof-of-concept: discard messages with ID
* These are 'late' reply messages received after timeout has expired,
* having been passed upwards from Peer to Network to Controller.
* Hence, these are NOT simple "here's my chain tip" broadcasts from other peers.
*/
LOGGER.debug("Discarding late {} message with ID {} from {}", message.getType().name(), message.getId(), peer);
return;
}
// Update peer chain tip data
peer.setChainTipSummaries(blockSummariesV2Message.getBlockSummaries());
// Potentially synchronize
Synchronizer.getInstance().requestSync();
}
private void onNetworkGetAccountMessage(Peer peer, Message message) {
GetAccountMessage getAccountMessage = (GetAccountMessage) message;
String address = getAccountMessage.getAddress();
@@ -1590,8 +1642,8 @@ public class Controller extends Thread {
// Send valid, yet unexpected message type in response, so peer doesn't have to wait for timeout
LOGGER.debug(() -> String.format("Sending 'account unknown' response to peer %s for GET_ACCOUNT request for unknown account %s", peer, address));
// We'll send empty block summaries message as it's very short
Message accountUnknownMessage = new BlockSummariesMessage(Collections.emptyList());
// Send generic 'unknown' message as it's very short
Message accountUnknownMessage = new GenericUnknownMessage();
accountUnknownMessage.setId(message.getId());
if (!peer.sendMessage(accountUnknownMessage))
peer.disconnect("failed to send account-unknown response");
@@ -1626,8 +1678,8 @@ public class Controller extends Thread {
// Send valid, yet unexpected message type in response, so peer doesn't have to wait for timeout
LOGGER.debug(() -> String.format("Sending 'account unknown' response to peer %s for GET_ACCOUNT_BALANCE request for unknown account %s and asset ID %d", peer, address, assetId));
// We'll send empty block summaries message as it's very short
Message accountUnknownMessage = new BlockSummariesMessage(Collections.emptyList());
// Send generic 'unknown' message as it's very short
Message accountUnknownMessage = new GenericUnknownMessage();
accountUnknownMessage.setId(message.getId());
if (!peer.sendMessage(accountUnknownMessage))
peer.disconnect("failed to send account-unknown response");
@@ -1670,8 +1722,8 @@ public class Controller extends Thread {
// Send valid, yet unexpected message type in response, so peer doesn't have to wait for timeout
LOGGER.debug(() -> String.format("Sending 'account unknown' response to peer %s for GET_ACCOUNT_TRANSACTIONS request for unknown account %s", peer, address));
// We'll send empty block summaries message as it's very short
Message accountUnknownMessage = new BlockSummariesMessage(Collections.emptyList());
// Send generic 'unknown' message as it's very short
Message accountUnknownMessage = new GenericUnknownMessage();
accountUnknownMessage.setId(message.getId());
if (!peer.sendMessage(accountUnknownMessage))
peer.disconnect("failed to send account-unknown response");
@@ -1707,8 +1759,8 @@ public class Controller extends Thread {
// Send valid, yet unexpected message type in response, so peer doesn't have to wait for timeout
LOGGER.debug(() -> String.format("Sending 'account unknown' response to peer %s for GET_ACCOUNT_NAMES request for unknown account %s", peer, address));
// We'll send empty block summaries message as it's very short
Message accountUnknownMessage = new BlockSummariesMessage(Collections.emptyList());
// Send generic 'unknown' message as it's very short
Message accountUnknownMessage = new GenericUnknownMessage();
accountUnknownMessage.setId(message.getId());
if (!peer.sendMessage(accountUnknownMessage))
peer.disconnect("failed to send account-unknown response");
@@ -1742,8 +1794,8 @@ public class Controller extends Thread {
// Send valid, yet unexpected message type in response, so peer doesn't have to wait for timeout
LOGGER.debug(() -> String.format("Sending 'name unknown' response to peer %s for GET_NAME request for unknown name %s", peer, name));
// We'll send empty block summaries message as it's very short
Message nameUnknownMessage = new BlockSummariesMessage(Collections.emptyList());
// Send generic 'unknown' message as it's very short
Message nameUnknownMessage = new GenericUnknownMessage();
nameUnknownMessage.setId(message.getId());
if (!peer.sendMessage(nameUnknownMessage))
peer.disconnect("failed to send name-unknown response");
@@ -1791,14 +1843,14 @@ public class Controller extends Thread {
continue;
}
final PeerChainTipData peerChainTipData = peer.getChainTipData();
BlockSummaryData peerChainTipData = peer.getChainTipData();
if (peerChainTipData == null) {
iterator.remove();
continue;
}
// Disregard peers that don't have a recent block
if (peerChainTipData.getLastBlockTimestamp() == null || peerChainTipData.getLastBlockTimestamp() < minLatestBlockTimestamp) {
if (peerChainTipData.getTimestamp() == null || peerChainTipData.getTimestamp() < minLatestBlockTimestamp) {
iterator.remove();
continue;
}
@@ -1826,6 +1878,10 @@ public class Controller extends Thread {
if (latestBlockData == null || latestBlockData.getTimestamp() < minLatestBlockTimestamp)
return false;
if (Settings.getInstance().isSingleNodeTestnet())
// Single node testnets won't have peers, so we can assume up to date from this point
return true;
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
if (peers == null)


@@ -53,17 +53,30 @@ public class OnlineAccountsManager {
*/
private static final int MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS = 3;
private static final long ONLINE_ACCOUNTS_QUEUE_INTERVAL = 100L; //ms
private static final long ONLINE_ACCOUNTS_QUEUE_INTERVAL = 100L; // ms
private static final long ONLINE_ACCOUNTS_TASKS_INTERVAL = 10 * 1000L; // ms
private static final long ONLINE_ACCOUNTS_LEGACY_BROADCAST_INTERVAL = 60 * 1000L; // ms
private static final long ONLINE_ACCOUNTS_BROADCAST_INTERVAL = 5 * 1000L; // ms
private static final long ONLINE_ACCOUNTS_COMPUTE_INTERVAL = 5 * 1000L; // ms
private static final long ONLINE_ACCOUNTS_BROADCAST_INTERVAL = 60 * 1000L; // ms
// After switching to a new online timestamp, we "burst" the online accounts requests
// at an increased interval for a specified amount of time
private static final long ONLINE_ACCOUNTS_BROADCAST_BURST_INTERVAL = 5 * 1000L; // ms
private static final long ONLINE_ACCOUNTS_BROADCAST_BURST_LENGTH = 5 * 60 * 1000L; // ms
private static final long ONLINE_ACCOUNTS_V2_PEER_VERSION = 0x0300020000L; // v3.2.0
private static final long ONLINE_ACCOUNTS_V3_PEER_VERSION = 0x0300040000L; // v3.4.0
private static final long ONLINE_ACCOUNTS_COMPUTE_INITIAL_SLEEP_INTERVAL = 30 * 1000L; // ms
// MemoryPoW
public final int POW_BUFFER_SIZE = 1 * 1024 * 1024; // bytes
public int POW_DIFFICULTY = 18; // leading zero bits
// MemoryPoW - mainnet
public static final int POW_BUFFER_SIZE = 1 * 1024 * 1024; // bytes
public static final int POW_DIFFICULTY_V1 = 18; // leading zero bits
public static final int POW_DIFFICULTY_V2 = 19; // leading zero bits
// MemoryPoW - testnet
public static final int POW_BUFFER_SIZE_TESTNET = 1 * 1024 * 1024; // bytes
public static final int POW_DIFFICULTY_TESTNET = 5; // leading zero bits
// IMPORTANT: if we ever need to dynamically modify the buffer size using a feature trigger, the
// pre-allocated buffer below will NOT work, and we should instead use a dynamically allocated
// one for the transition period.
private static long[] POW_VERIFY_WORK_BUFFER = new long[getPoWBufferSize() / 8];
private final ScheduledExecutorService executor = Executors.newScheduledThreadPool(4, new NamedThreadFactory("OnlineAccounts"));
private volatile boolean isStopping = false;
@@ -85,6 +98,8 @@ public class OnlineAccountsManager {
*/
private final SortedMap<Long, Set<OnlineAccountData>> latestBlocksOnlineAccounts = new ConcurrentSkipListMap<>();
private long lastOnlineAccountsRequest = 0;
private boolean hasOurOnlineAccounts = false;
public static long getOnlineTimestampModulus() {
@@ -107,6 +122,23 @@ public class OnlineAccountsManager {
return (timestamp / getOnlineTimestampModulus()) * getOnlineTimestampModulus();
}
private static int getPoWBufferSize() {
if (Settings.getInstance().isTestNet())
return POW_BUFFER_SIZE_TESTNET;
return POW_BUFFER_SIZE;
}
private static int getPoWDifficulty(long timestamp) {
if (Settings.getInstance().isTestNet())
return POW_DIFFICULTY_TESTNET;
if (timestamp >= BlockChain.getInstance().getIncreaseOnlineAccountsDifficultyTimestamp())
return POW_DIFFICULTY_V2;
return POW_DIFFICULTY_V1;
}
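A minimal, self-contained restatement of the difficulty selection above, with the testnet flag and feature-trigger timestamp passed in as plain parameters instead of being read from the Settings and BlockChain singletons (values mirror the constants defined earlier in this file):

class PoWDifficultySelection {
    // Toy illustration only: the testnet flag takes precedence, then the feature trigger picks V1 vs V2.
    static int powDifficulty(boolean isTestNet, long timestamp, long increaseDifficultyTimestamp) {
        if (isTestNet)
            return 5;   // POW_DIFFICULTY_TESTNET
        return timestamp >= increaseDifficultyTimestamp
                ? 19    // POW_DIFFICULTY_V2
                : 18;   // POW_DIFFICULTY_V1
    }
}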
private OnlineAccountsManager() {
}
@@ -122,16 +154,16 @@ public class OnlineAccountsManager {
// Expire old online accounts signatures
executor.scheduleAtFixedRate(this::expireOldOnlineAccounts, ONLINE_ACCOUNTS_TASKS_INTERVAL, ONLINE_ACCOUNTS_TASKS_INTERVAL, TimeUnit.MILLISECONDS);
// Send our online accounts
executor.scheduleAtFixedRate(this::sendOurOnlineAccountsInfo, ONLINE_ACCOUNTS_BROADCAST_INTERVAL, ONLINE_ACCOUNTS_BROADCAST_INTERVAL, TimeUnit.MILLISECONDS);
// Request online accounts from peers (legacy)
executor.scheduleAtFixedRate(this::requestLegacyRemoteOnlineAccounts, ONLINE_ACCOUNTS_LEGACY_BROADCAST_INTERVAL, ONLINE_ACCOUNTS_LEGACY_BROADCAST_INTERVAL, TimeUnit.MILLISECONDS);
// Request online accounts from peers (V3+)
executor.scheduleAtFixedRate(this::requestRemoteOnlineAccounts, ONLINE_ACCOUNTS_BROADCAST_INTERVAL, ONLINE_ACCOUNTS_BROADCAST_INTERVAL, TimeUnit.MILLISECONDS);
// Request online accounts from peers
executor.scheduleAtFixedRate(this::requestRemoteOnlineAccounts, ONLINE_ACCOUNTS_BROADCAST_BURST_INTERVAL, ONLINE_ACCOUNTS_BROADCAST_BURST_INTERVAL, TimeUnit.MILLISECONDS);
// Process import queue
executor.scheduleWithFixedDelay(this::processOnlineAccountsImportQueue, ONLINE_ACCOUNTS_QUEUE_INTERVAL, ONLINE_ACCOUNTS_QUEUE_INTERVAL, TimeUnit.MILLISECONDS);
// Send our online accounts (using increased initial delay)
// This allows some time for initial online account lists to be retrieved, and
// reduces the chances of the same nonce being computed twice
executor.scheduleAtFixedRate(this::sendOurOnlineAccountsInfo, ONLINE_ACCOUNTS_COMPUTE_INITIAL_SLEEP_INTERVAL, ONLINE_ACCOUNTS_COMPUTE_INTERVAL, TimeUnit.MILLISECONDS);
}
public void shutdown() {
@@ -151,7 +183,6 @@ public class OnlineAccountsManager {
return;
byte[] timestampBytes = Longs.toByteArray(onlineAccountsTimestamp);
final boolean mempowActive = onlineAccountsTimestamp >= BlockChain.getInstance().getOnlineAccountsMemoryPoWTimestamp();
Set<OnlineAccountData> replacementAccounts = new HashSet<>();
for (PrivateKeyAccount onlineAccount : onlineAccounts) {
@@ -160,7 +191,7 @@ public class OnlineAccountsManager {
byte[] signature = Qortal25519Extras.signForAggregation(onlineAccount.getPrivateKey(), timestampBytes);
byte[] publicKey = onlineAccount.getPublicKey();
Integer nonce = mempowActive ? new Random().nextInt(500000) : null;
Integer nonce = new Random().nextInt(500000);
OnlineAccountData ourOnlineAccountData = new OnlineAccountData(onlineAccountsTimestamp, signature, publicKey, nonce);
replacementAccounts.add(ourOnlineAccountData);
@@ -180,25 +211,37 @@ public class OnlineAccountsManager {
LOGGER.debug("Processing online accounts import queue (size: {})", this.onlineAccountsImportQueue.size());
Set<OnlineAccountData> onlineAccountsToAdd = new HashSet<>();
Set<OnlineAccountData> onlineAccountsToRemove = new HashSet<>();
try (final Repository repository = RepositoryManager.getRepository()) {
for (OnlineAccountData onlineAccountData : this.onlineAccountsImportQueue) {
if (isStopping)
return;
// Skip this account if it's already validated
Set<OnlineAccountData> onlineAccounts = this.currentOnlineAccounts.get(onlineAccountData.getTimestamp());
if (onlineAccounts != null && onlineAccounts.contains(onlineAccountData)) {
// We have already validated this online account
onlineAccountsImportQueue.remove(onlineAccountData);
continue;
}
boolean isValid = this.isValidCurrentAccount(repository, onlineAccountData);
if (isValid)
onlineAccountsToAdd.add(onlineAccountData);
// Remove from queue
onlineAccountsImportQueue.remove(onlineAccountData);
// Don't remove from the queue yet - we'll do this at the end of the process
// This prevents duplicates being added to the queue whilst it's being processed
onlineAccountsToRemove.add(onlineAccountData);
}
} catch (DataException e) {
LOGGER.error("Repository issue while verifying online accounts", e);
}
if (!onlineAccountsToAdd.isEmpty()) {
LOGGER.debug("Merging {} validated online accounts from import queue", onlineAccountsToAdd.size());
addAccounts(onlineAccountsToAdd);
} finally {
if (!onlineAccountsToAdd.isEmpty()) {
LOGGER.debug("Merging {} validated online accounts from import queue", onlineAccountsToAdd.size());
addAccounts(onlineAccountsToAdd);
}
onlineAccountsImportQueue.removeAll(onlineAccountsToRemove);
}
}
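The reworked queue handling collects processed entries and only removes them in the finally block, so entries still being validated continue to block duplicate re-adds from the network handler. A stand-alone sketch of that deferred-removal pattern with a generic element type (class and method names here are illustrative, not the project's):

import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;
import java.util.function.Predicate;

class DeferredRemovalQueue<T> {
    // Producers call queue.add(item); add() returning false means "already queued or in flight".
    final Set<T> queue = ConcurrentHashMap.newKeySet();

    void process(Predicate<T> isValid, Consumer<Set<T>> mergeValidated) {
        Set<T> validated = new HashSet<>();
        Set<T> processed = new HashSet<>();
        try {
            for (T item : queue) {
                if (isValid.test(item))
                    validated.add(item);
                // Deliberately NOT removed here: leaving it in the queue prevents a concurrent
                // producer from re-adding the same item while this pass is still running.
                processed.add(item);
            }
        } finally {
            if (!validated.isEmpty())
                mergeValidated.accept(validated);
            queue.removeAll(processed); // drop everything handled in this pass, in one go
        }
    }
}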
@@ -304,12 +347,10 @@ public class OnlineAccountsManager {
return false;
}
// Validate mempow if feature trigger is active
if (now >= BlockChain.getInstance().getOnlineAccountsMemoryPoWTimestamp()) {
if (!getInstance().verifyMemoryPoW(onlineAccountData, now)) {
LOGGER.trace(() -> String.format("Rejecting online reward-share for account %s due to invalid PoW nonce", mintingAccount.getAddress()));
return false;
}
// Validate mempow
if (!getInstance().verifyMemoryPoW(onlineAccountData, POW_VERIFY_WORK_BUFFER)) {
LOGGER.trace(() -> String.format("Rejecting online reward-share for account %s due to invalid PoW nonce", mintingAccount.getAddress()));
return false;
}
return true;
@@ -333,7 +374,7 @@ public class OnlineAccountsManager {
for (var entry : hashesToRebuild.entrySet()) {
Long timestamp = entry.getKey();
LOGGER.debug(() -> String.format("Rehashing for timestamp %d and leading bytes %s",
LOGGER.trace(() -> String.format("Rehashing for timestamp %d and leading bytes %s",
timestamp,
entry.getValue().stream().sorted(Byte::compareUnsigned).map(leadingByte -> String.format("%02x", leadingByte)).collect(Collectors.joining(", "))
)
@@ -359,7 +400,7 @@ public class OnlineAccountsManager {
}
}
LOGGER.debug(String.format("we have online accounts for timestamps: %s", String.join(", ", this.currentOnlineAccounts.keySet().stream().map(l -> Long.toString(l)).collect(Collectors.joining(", ")))));
LOGGER.trace(String.format("we have online accounts for timestamps: %s", String.join(", ", this.currentOnlineAccounts.keySet().stream().map(l -> Long.toString(l)).collect(Collectors.joining(", ")))));
return true;
}
@@ -399,30 +440,7 @@ public class OnlineAccountsManager {
}
/**
* Request data from other peers. (Pre-V3)
*/
private void requestLegacyRemoteOnlineAccounts() {
final Long now = NTP.getTime();
if (now == null)
return;
// Don't bother if we're not up to date
if (!Controller.getInstance().isUpToDate())
return;
List<OnlineAccountData> mergedOnlineAccounts = Set.copyOf(this.currentOnlineAccounts.values()).stream().flatMap(Set::stream).collect(Collectors.toList());
Message messageV2 = new GetOnlineAccountsV2Message(mergedOnlineAccounts);
Network.getInstance().broadcast(peer ->
peer.getPeersVersion() < ONLINE_ACCOUNTS_V3_PEER_VERSION
? messageV2
: null
);
}
/**
* Request data from other peers. V3+
* Request data from other peers
*/
private void requestRemoteOnlineAccounts() {
final Long now = NTP.getTime();
@@ -433,13 +451,25 @@ public class OnlineAccountsManager {
if (!Controller.getInstance().isUpToDate())
return;
Message messageV3 = new GetOnlineAccountsV3Message(currentOnlineAccountsHashes);
long onlineAccountsTimestamp = getCurrentOnlineAccountTimestamp();
if (now - onlineAccountsTimestamp >= ONLINE_ACCOUNTS_BROADCAST_BURST_LENGTH) {
// New online timestamp started more than 5 mins ago - we probably don't need to request so frequently
Network.getInstance().broadcast(peer ->
peer.getPeersVersion() >= ONLINE_ACCOUNTS_V3_PEER_VERSION
? messageV3
: null
);
if (Controller.uptime() < ONLINE_ACCOUNTS_BROADCAST_BURST_LENGTH) {
// The node recently started up, so we should request at the burst interval
// This could allow accounts to move around the network more easily when an auto update is occurring
}
else if (now - lastOnlineAccountsRequest < ONLINE_ACCOUNTS_BROADCAST_INTERVAL) {
// We already requested online accounts in the last minute, so no need to request again
return;
}
}
LOGGER.debug("Requesting online accounts via broadcast...");
lastOnlineAccountsRequest = now;
Message messageV3 = new GetOnlineAccountsV3Message(currentOnlineAccountsHashes);
Network.getInstance().broadcast(peer -> messageV3);
}
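The throttling logic above can be summarised as a single predicate; the sketch below takes every timestamp as a parameter purely for illustration (the real method reads them from NTP, Controller.uptime() and the constants defined near the top of this file):

class OnlineAccountsRequestThrottle {
    /** Returns true if it's time to broadcast another GET_ONLINE_ACCOUNTS_V3 request. */
    static boolean shouldRequest(long now, long onlineAccountsTimestamp, long uptime,
                                 long lastRequest, long burstLength, long normalInterval) {
        if (now - onlineAccountsTimestamp < burstLength)
            return true;    // a new online timestamp started recently, so stay in the burst window
        if (uptime < burstLength)
            return true;    // node started recently (e.g. after an auto-update), keep bursting
        return now - lastRequest >= normalInterval; // otherwise throttle to the normal interval
    }
}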
/**
@@ -464,12 +494,10 @@ public class OnlineAccountsManager {
// 'next' timestamp (prioritize this as it's the most important, if mempow active)
final long nextOnlineAccountsTimestamp = toOnlineAccountTimestamp(now) + getOnlineTimestampModulus();
if (isMemoryPoWActive(now)) {
boolean success = computeOurAccountsForTimestamp(nextOnlineAccountsTimestamp);
if (!success) {
// We didn't compute the required nonce value(s), and so can't proceed until they have been retried
return;
}
boolean success = computeOurAccountsForTimestamp(nextOnlineAccountsTimestamp);
if (!success) {
// We didn't compute the required nonce value(s), and so can't proceed until they have been retried
return;
}
// 'current' timestamp
@@ -522,6 +550,8 @@ public class OnlineAccountsManager {
Set<OnlineAccountData> onlineAccounts = this.currentOnlineAccounts.computeIfAbsent(onlineAccountsTimestamp, k -> ConcurrentHashMap.newKeySet());
boolean alreadyExists = onlineAccounts.stream().anyMatch(a -> Arrays.equals(a.getPublicKey(), publicKey));
if (alreadyExists) {
this.hasOurOnlineAccounts = true;
if (remaining > 0) {
// Move on to next account
continue;
@@ -544,21 +574,15 @@ public class OnlineAccountsManager {
// Compute nonce
Integer nonce;
if (isMemoryPoWActive(NTP.getTime())) {
try {
nonce = this.computeMemoryPoW(mempowBytes, publicKey, onlineAccountsTimestamp);
if (nonce == null) {
// A nonce is required
return false;
}
} catch (TimeoutException e) {
LOGGER.info(String.format("Timed out computing nonce for account %.8s", Base58.encode(publicKey)));
try {
nonce = this.computeMemoryPoW(mempowBytes, publicKey, onlineAccountsTimestamp);
if (nonce == null) {
// A nonce is required
return false;
}
}
else {
// Send -1 if we haven't computed a nonce due to feature trigger timestamp
nonce = -1;
} catch (TimeoutException e) {
LOGGER.info(String.format("Timed out computing nonce for account %.8s", Base58.encode(publicKey)));
return false;
}
byte[] signature = Qortal25519Extras.signForAggregation(privateKey, timestampBytes);
@@ -567,7 +591,7 @@ public class OnlineAccountsManager {
OnlineAccountData ourOnlineAccountData = new OnlineAccountData(onlineAccountsTimestamp, signature, publicKey, nonce);
// Make sure to verify before adding
if (verifyMemoryPoW(ourOnlineAccountData, NTP.getTime())) {
if (verifyMemoryPoW(ourOnlineAccountData, null)) {
ourOnlineAccounts.add(ourOnlineAccountData);
}
}
@@ -579,17 +603,7 @@ public class OnlineAccountsManager {
if (!hasInfoChanged)
return false;
Message messageV1 = new OnlineAccountsMessage(ourOnlineAccounts);
Message messageV2 = new OnlineAccountsV2Message(ourOnlineAccounts);
Message messageV3 = new OnlineAccountsV3Message(ourOnlineAccounts);
Network.getInstance().broadcast(peer ->
peer.getPeersVersion() >= OnlineAccountsV3Message.MIN_PEER_VERSION
? messageV3
: peer.getPeersVersion() >= ONLINE_ACCOUNTS_V2_PEER_VERSION
? messageV2
: messageV1
);
Network.getInstance().broadcast(peer -> new OnlineAccountsV3Message(ourOnlineAccounts));
LOGGER.debug("Broadcasted {} online account{} with timestamp {}", ourOnlineAccounts.size(), (ourOnlineAccounts.size() != 1 ? "s" : ""), onlineAccountsTimestamp);
@@ -600,12 +614,6 @@ public class OnlineAccountsManager {
// MemoryPoW
private boolean isMemoryPoWActive(Long timestamp) {
if (timestamp >= BlockChain.getInstance().getOnlineAccountsMemoryPoWTimestamp() || Settings.getInstance().isOnlineAccountsMemPoWEnabled()) {
return true;
}
return false;
}
private byte[] getMemoryPoWBytes(byte[] publicKey, long onlineAccountsTimestamp) throws IOException {
byte[] timestampBytes = Longs.toByteArray(onlineAccountsTimestamp);
@@ -617,11 +625,6 @@ public class OnlineAccountsManager {
}
private Integer computeMemoryPoW(byte[] bytes, byte[] publicKey, long onlineAccountsTimestamp) throws TimeoutException {
if (!isMemoryPoWActive(NTP.getTime())) {
LOGGER.info("Mempow start timestamp not yet reached, and onlineAccountsMemPoWEnabled not enabled in settings");
return null;
}
LOGGER.info(String.format("Computing nonce for account %.8s and timestamp %d...", Base58.encode(publicKey), onlineAccountsTimestamp));
// Calculate the time until the next online timestamp and use it as a timeout when computing the nonce
@@ -629,7 +632,8 @@ public class OnlineAccountsManager {
final long nextOnlineAccountsTimestamp = toOnlineAccountTimestamp(startTime) + getOnlineTimestampModulus();
long timeUntilNextTimestamp = nextOnlineAccountsTimestamp - startTime;
Integer nonce = MemoryPoW.compute2(bytes, POW_BUFFER_SIZE, POW_DIFFICULTY, timeUntilNextTimestamp);
int difficulty = getPoWDifficulty(onlineAccountsTimestamp);
Integer nonce = MemoryPoW.compute2(bytes, getPoWBufferSize(), difficulty, timeUntilNextTimestamp);
double totalSeconds = (NTP.getTime() - startTime) / 1000.0f;
int minutes = (int) ((totalSeconds % 3600) / 60);
@@ -638,15 +642,15 @@ public class OnlineAccountsManager {
LOGGER.info(String.format("Computed nonce for timestamp %d and account %.8s: %d. Buffer size: %d. Difficulty: %d. " +
"Time taken: %02d:%02d. Hashrate: %f", onlineAccountsTimestamp, Base58.encode(publicKey),
nonce, POW_BUFFER_SIZE, POW_DIFFICULTY, minutes, seconds, hashRate));
nonce, getPoWBufferSize(), difficulty, minutes, seconds, hashRate));
return nonce;
}
public boolean verifyMemoryPoW(OnlineAccountData onlineAccountData, Long timestamp) {
if (!isMemoryPoWActive(timestamp)) {
// Not active yet, so treat it as valid
return true;
public boolean verifyMemoryPoW(OnlineAccountData onlineAccountData, long[] workBuffer) {
// Require a valid nonce value
if (onlineAccountData.getNonce() == null || onlineAccountData.getNonce() < 0) {
return false;
}
int nonce = onlineAccountData.getNonce();
@@ -659,7 +663,7 @@ public class OnlineAccountsManager {
}
// Verify the nonce
return MemoryPoW.verify2(mempowBytes, POW_BUFFER_SIZE, POW_DIFFICULTY, nonce);
return MemoryPoW.verify2(mempowBytes, workBuffer, getPoWBufferSize(), getPoWDifficulty(onlineAccountData.getTimestamp()), nonce);
}
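The shared POW_VERIFY_WORK_BUFFER introduced near the top of this file is sized as bytes / 8 because MemoryPoW operates on 64-bit words, and the change relies on verify2 re-initialising the buffer on every call. A sketch of the reuse idea, assuming the verify2 signature shown later in this diff and a single caller thread (the package of MemoryPoW is assumed; the shared buffer has no locking of its own):

import org.qortal.crypto.MemoryPoW; // package assumed

class SharedVerifyBuffer {
    // Illustrative only: one pre-allocated scratch buffer, reused across many verifications,
    // avoids allocating ~1 MiB per verified account and takes pressure off the garbage collector.
    private static final int VERIFY_BUFFER_BYTES = 1 * 1024 * 1024;
    private static final long[] VERIFY_WORK_BUFFER = new long[VERIFY_BUFFER_BYTES / 8]; // 8 bytes per long

    static boolean verifyWithSharedBuffer(byte[] mempowBytes, long difficulty, int nonce) {
        // Assumes a single caller thread (here: the import-queue executor); concurrent callers
        // would need external synchronization around the shared buffer.
        return MemoryPoW.verify2(mempowBytes, VERIFY_WORK_BUFFER, VERIFY_BUFFER_BYTES, difficulty, nonce);
    }
}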
@@ -697,7 +701,7 @@ public class OnlineAccountsManager {
*/
// Block::mint() - only wants online accounts with (online) timestamp that matches block's (online) timestamp so they can be added to new block
public List<OnlineAccountData> getOnlineAccounts(long onlineTimestamp) {
LOGGER.info(String.format("caller's timestamp: %d, our timestamps: %s", onlineTimestamp, String.join(", ", this.currentOnlineAccounts.keySet().stream().map(l -> Long.toString(l)).collect(Collectors.joining(", ")))));
LOGGER.debug(String.format("caller's timestamp: %d, our timestamps: %s", onlineTimestamp, String.join(", ", this.currentOnlineAccounts.keySet().stream().map(l -> Long.toString(l)).collect(Collectors.joining(", ")))));
return new ArrayList<>(Set.copyOf(this.currentOnlineAccounts.getOrDefault(onlineTimestamp, Collections.emptySet())));
}
@@ -743,11 +747,12 @@ public class OnlineAccountsManager {
* Typically called by {@link Block#areOnlineAccountsValid()}
*/
public void addBlocksOnlineAccounts(Set<OnlineAccountData> blocksOnlineAccounts, Long timestamp) {
// We want to add to 'current' in preference if possible
if (this.currentOnlineAccounts.containsKey(timestamp)) {
addAccounts(blocksOnlineAccounts);
// If these are current accounts then there is no need to cache them; we can instead rely
// on the more complete entries we already have in this.currentOnlineAccounts.
// Note: since sig-agg, we no longer have individual signatures included in blocks, so we
// mustn't add anything to currentOnlineAccounts from here.
if (this.currentOnlineAccounts.containsKey(timestamp))
return;
}
// Add to block cache instead
this.latestBlocksOnlineAccounts.computeIfAbsent(timestamp, k -> ConcurrentHashMap.newKeySet())
@@ -767,106 +772,6 @@ public class OnlineAccountsManager {
// Network handlers
public void onNetworkGetOnlineAccountsMessage(Peer peer, Message message) {
GetOnlineAccountsMessage getOnlineAccountsMessage = (GetOnlineAccountsMessage) message;
List<OnlineAccountData> excludeAccounts = getOnlineAccountsMessage.getOnlineAccounts();
// Send online accounts info, excluding entries with matching timestamp & public key from excludeAccounts
List<OnlineAccountData> accountsToSend = Set.copyOf(this.currentOnlineAccounts.values()).stream().flatMap(Set::stream).collect(Collectors.toList());
int prefilterSize = accountsToSend.size();
Iterator<OnlineAccountData> iterator = accountsToSend.iterator();
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
for (OnlineAccountData excludeAccountData : excludeAccounts) {
if (onlineAccountData.getTimestamp() == excludeAccountData.getTimestamp() && Arrays.equals(onlineAccountData.getPublicKey(), excludeAccountData.getPublicKey())) {
iterator.remove();
break;
}
}
}
if (accountsToSend.isEmpty())
return;
Message onlineAccountsMessage = new OnlineAccountsMessage(accountsToSend);
peer.sendMessage(onlineAccountsMessage);
LOGGER.debug("Sent {} of our {} online accounts to {}", accountsToSend.size(), prefilterSize, peer);
}
public void onNetworkOnlineAccountsMessage(Peer peer, Message message) {
OnlineAccountsMessage onlineAccountsMessage = (OnlineAccountsMessage) message;
List<OnlineAccountData> peersOnlineAccounts = onlineAccountsMessage.getOnlineAccounts();
LOGGER.debug("Received {} online accounts from {}", peersOnlineAccounts.size(), peer);
int importCount = 0;
// Add any online accounts to the queue that aren't already present
for (OnlineAccountData onlineAccountData : peersOnlineAccounts) {
boolean isNewEntry = onlineAccountsImportQueue.add(onlineAccountData);
if (isNewEntry)
importCount++;
}
if (importCount > 0)
LOGGER.debug("Added {} online accounts to queue", importCount);
}
public void onNetworkGetOnlineAccountsV2Message(Peer peer, Message message) {
GetOnlineAccountsV2Message getOnlineAccountsMessage = (GetOnlineAccountsV2Message) message;
List<OnlineAccountData> excludeAccounts = getOnlineAccountsMessage.getOnlineAccounts();
// Send online accounts info, excluding entries with matching timestamp & public key from excludeAccounts
List<OnlineAccountData> accountsToSend = Set.copyOf(this.currentOnlineAccounts.values()).stream().flatMap(Set::stream).collect(Collectors.toList());
int prefilterSize = accountsToSend.size();
Iterator<OnlineAccountData> iterator = accountsToSend.iterator();
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
for (OnlineAccountData excludeAccountData : excludeAccounts) {
if (onlineAccountData.getTimestamp() == excludeAccountData.getTimestamp() && Arrays.equals(onlineAccountData.getPublicKey(), excludeAccountData.getPublicKey())) {
iterator.remove();
break;
}
}
}
if (accountsToSend.isEmpty())
return;
Message onlineAccountsMessage = new OnlineAccountsV2Message(accountsToSend);
peer.sendMessage(onlineAccountsMessage);
LOGGER.debug("Sent {} of our {} online accounts to {}", accountsToSend.size(), prefilterSize, peer);
}
public void onNetworkOnlineAccountsV2Message(Peer peer, Message message) {
OnlineAccountsV2Message onlineAccountsMessage = (OnlineAccountsV2Message) message;
List<OnlineAccountData> peersOnlineAccounts = onlineAccountsMessage.getOnlineAccounts();
LOGGER.debug("Received {} online accounts from {}", peersOnlineAccounts.size(), peer);
int importCount = 0;
// Add any online accounts to the queue that aren't already present
for (OnlineAccountData onlineAccountData : peersOnlineAccounts) {
boolean isNewEntry = onlineAccountsImportQueue.add(onlineAccountData);
if (isNewEntry)
importCount++;
}
if (importCount > 0)
LOGGER.debug("Added {} online accounts to queue", importCount);
}
public void onNetworkGetOnlineAccountsV3Message(Peer peer, Message message) {
GetOnlineAccountsV3Message getOnlineAccountsMessage = (GetOnlineAccountsV3Message) message;
@@ -887,7 +792,7 @@ public class OnlineAccountsManager {
Set<OnlineAccountData> timestampsOnlineAccounts = this.currentOnlineAccounts.getOrDefault(timestamp, Collections.emptySet());
outgoingOnlineAccounts.addAll(timestampsOnlineAccounts);
LOGGER.debug(() -> String.format("Going to send all %d online accounts for timestamp %d", timestampsOnlineAccounts.size(), timestamp));
LOGGER.trace(() -> String.format("Going to send all %d online accounts for timestamp %d", timestampsOnlineAccounts.size(), timestamp));
} else {
// Quick cache of which leading bytes to send so we only have to filter once
Set<Byte> outgoingLeadingBytes = new HashSet<>();
@@ -911,7 +816,7 @@ public class OnlineAccountsManager {
.forEach(outgoingOnlineAccounts::add);
if (outgoingOnlineAccounts.size() > beforeAddSize)
LOGGER.debug(String.format("Going to send %d online accounts for timestamp %d and leading bytes %s",
LOGGER.trace(String.format("Going to send %d online accounts for timestamp %d and leading bytes %s",
outgoingOnlineAccounts.size() - beforeAddSize,
timestamp,
outgoingLeadingBytes.stream().sorted(Byte::compareUnsigned).map(leadingByte -> String.format("%02x", leadingByte)).collect(Collectors.joining(", "))
@@ -920,25 +825,27 @@ public class OnlineAccountsManager {
}
}
peer.sendMessage(
peer.getPeersVersion() >= OnlineAccountsV3Message.MIN_PEER_VERSION ?
new OnlineAccountsV3Message(outgoingOnlineAccounts) :
new OnlineAccountsV2Message(outgoingOnlineAccounts)
);
peer.sendMessage(new OnlineAccountsV3Message(outgoingOnlineAccounts));
LOGGER.debug("Sent {} online accounts to {}", outgoingOnlineAccounts.size(), peer);
LOGGER.trace("Sent {} online accounts to {}", outgoingOnlineAccounts.size(), peer);
}
public void onNetworkOnlineAccountsV3Message(Peer peer, Message message) {
OnlineAccountsV3Message onlineAccountsMessage = (OnlineAccountsV3Message) message;
List<OnlineAccountData> peersOnlineAccounts = onlineAccountsMessage.getOnlineAccounts();
LOGGER.debug("Received {} online accounts from {}", peersOnlineAccounts.size(), peer);
LOGGER.trace("Received {} online accounts from {}", peersOnlineAccounts.size(), peer);
int importCount = 0;
// Add any online accounts to the queue that aren't already present
for (OnlineAccountData onlineAccountData : peersOnlineAccounts) {
Set<OnlineAccountData> onlineAccounts = this.currentOnlineAccounts.computeIfAbsent(onlineAccountData.getTimestamp(), k -> ConcurrentHashMap.newKeySet());
if (onlineAccounts.contains(onlineAccountData))
// We have already validated this online account
continue;
boolean isNewEntry = onlineAccountsImportQueue.add(onlineAccountData);
if (isNewEntry)


@@ -4,6 +4,7 @@ import com.rust.litewalletjni.LiteWalletJni;
import org.apache.commons.io.FileUtils;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.json.JSONException;
import org.json.JSONObject;
import org.qortal.arbitrary.ArbitraryDataFile;
import org.qortal.arbitrary.ArbitraryDataReader;
@@ -99,14 +100,19 @@ public class PirateChainWalletController extends Thread {
LOGGER.debug("Syncing Pirate Chain wallet...");
String response = LiteWalletJni.execute("sync", "");
LOGGER.debug("sync response: {}", response);
JSONObject json = new JSONObject(response);
if (json.has("result")) {
String result = json.getString("result");
// We may have to set wallet to ready if this is the first ever successful sync
if (Objects.equals(result, "success")) {
this.currentWallet.setReady(true);
try {
JSONObject json = new JSONObject(response);
if (json.has("result")) {
String result = json.getString("result");
// We may have to set wallet to ready if this is the first ever successful sync
if (Objects.equals(result, "success")) {
this.currentWallet.setReady(true);
}
}
} catch (JSONException e) {
LOGGER.info("Unable to interpret JSON", e);
}
// Rate limit sync attempts
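The added try/catch protects against the native wallet returning output that isn't valid JSON. A stand-alone sketch of the same defensive check using org.json (the response strings below are made up):

import org.json.JSONException;
import org.json.JSONObject;

public class SyncResponseParser {
    /** Returns true only when the response is valid JSON containing "result": "success". */
    static boolean isSuccessfulSync(String response) {
        try {
            JSONObject json = new JSONObject(response);
            return json.has("result") && "success".equals(json.getString("result"));
        } catch (JSONException e) {
            // Malformed or non-JSON output from the wallet library - treat as not yet synced
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isSuccessfulSync("{\"result\":\"success\"}")); // true
        System.out.println(isSuccessfulSync("not json at all"));          // false
    }
}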


@@ -19,7 +19,6 @@ import org.qortal.block.BlockChain;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.block.CommonBlockData;
import org.qortal.data.network.PeerChainTipData;
import org.qortal.data.transaction.RewardShareTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.event.Event;
@@ -54,7 +53,8 @@ public class Synchronizer extends Thread {
/** Maximum number of block signatures we ask from peer in one go */
private static final int MAXIMUM_REQUEST_SIZE = 200; // XXX move to Settings?
private static final long RECOVERY_MODE_TIMEOUT = 10 * 60 * 1000L; // ms
/** Maximum number of consecutive failed sync attempts before marking peer as misbehaved */
private static final int MAX_CONSECUTIVE_FAILED_SYNC_ATTEMPTS = 3;
private boolean running;
@@ -76,6 +76,8 @@ public class Synchronizer extends Thread {
private volatile boolean isSynchronizing = false;
/** Temporary estimate of synchronization progress for SysTray use. */
private volatile int syncPercent = 0;
/** Temporary estimate of blocks remaining for SysTray use. */
private volatile int blocksRemaining = 0;
private static volatile boolean requestSync = false;
private boolean syncRequestPending = false;
@@ -181,6 +183,18 @@ public class Synchronizer extends Thread {
}
}
public Integer getBlocksRemaining() {
synchronized (this.syncLock) {
// Report as 0 blocks remaining if the latest block is within the last 60 mins
final Long minLatestBlockTimestamp = NTP.getTime() - (60 * 60 * 1000L);
if (Controller.getInstance().isUpToDate(minLatestBlockTimestamp)) {
return 0;
}
return this.isSynchronizing ? this.blocksRemaining : null;
}
}
public void requestSync() {
requestSync = true;
}
@@ -282,7 +296,7 @@ public class Synchronizer extends Thread {
BlockData priorChainTip = Controller.getInstance().getChainTip();
synchronized (this.syncLock) {
this.syncPercent = (priorChainTip.getHeight() * 100) / peer.getChainTipData().getLastHeight();
this.syncPercent = (priorChainTip.getHeight() * 100) / peer.getChainTipData().getHeight();
// Only update SysTray if we're potentially changing height
if (this.syncPercent < 100) {
@@ -312,7 +326,7 @@ public class Synchronizer extends Thread {
case INFERIOR_CHAIN: {
// Update our list of inferior chain tips
ByteArray inferiorChainSignature = ByteArray.wrap(peer.getChainTipData().getLastBlockSignature());
ByteArray inferiorChainSignature = ByteArray.wrap(peer.getChainTipData().getSignature());
if (!inferiorChainSignatures.contains(inferiorChainSignature))
inferiorChainSignatures.add(inferiorChainSignature);
@@ -320,7 +334,8 @@ public class Synchronizer extends Thread {
LOGGER.debug(() -> String.format("Refused to synchronize with peer %s (%s)", peer, syncResult.name()));
// Notify peer of our superior chain
if (!peer.sendMessage(Network.getInstance().buildHeightMessage(peer, priorChainTip)))
Message message = Network.getInstance().buildHeightOrChainTipInfo(peer);
if (message == null || !peer.sendMessage(message))
peer.disconnect("failed to notify peer of our superior chain");
break;
}
@@ -341,7 +356,7 @@ public class Synchronizer extends Thread {
// fall-through...
case NOTHING_TO_DO: {
// Update our list of inferior chain tips
ByteArray inferiorChainSignature = ByteArray.wrap(peer.getChainTipData().getLastBlockSignature());
ByteArray inferiorChainSignature = ByteArray.wrap(peer.getChainTipData().getSignature());
if (!inferiorChainSignatures.contains(inferiorChainSignature))
inferiorChainSignatures.add(inferiorChainSignature);
@@ -369,8 +384,7 @@ public class Synchronizer extends Thread {
// Reset our cache of inferior chains
inferiorChainSignatures.clear();
Network network = Network.getInstance();
network.broadcast(broadcastPeer -> network.buildHeightMessage(broadcastPeer, newChainTip));
Network.getInstance().broadcastOurChain();
EventBus.INSTANCE.notify(new NewChainTipEvent(priorChainTip, newChainTip));
}
@@ -397,9 +411,10 @@ public class Synchronizer extends Thread {
timePeersLastAvailable = NTP.getTime();
// If enough time has passed, enter recovery mode, which lifts some restrictions on who we can sync with and when we can mint
if (NTP.getTime() - timePeersLastAvailable > RECOVERY_MODE_TIMEOUT) {
long recoveryModeTimeout = Settings.getInstance().getRecoveryModeTimeout();
if (NTP.getTime() - timePeersLastAvailable > recoveryModeTimeout) {
if (recoveryMode == false) {
LOGGER.info(String.format("Peers have been unavailable for %d minutes. Entering recovery mode...", RECOVERY_MODE_TIMEOUT/60/1000));
LOGGER.info(String.format("Peers have been unavailable for %d minutes. Entering recovery mode...", recoveryModeTimeout/60/1000));
recoveryMode = true;
}
}
@@ -513,13 +528,13 @@ public class Synchronizer extends Thread {
final BlockData ourLatestBlockData = repository.getBlockRepository().getLastBlock();
final int ourInitialHeight = ourLatestBlockData.getHeight();
PeerChainTipData peerChainTipData = peer.getChainTipData();
int peerHeight = peerChainTipData.getLastHeight();
byte[] peersLastBlockSignature = peerChainTipData.getLastBlockSignature();
BlockSummaryData peerChainTipData = peer.getChainTipData();
int peerHeight = peerChainTipData.getHeight();
byte[] peersLastBlockSignature = peerChainTipData.getSignature();
byte[] ourLastBlockSignature = ourLatestBlockData.getSignature();
LOGGER.debug(String.format("Fetching summaries from peer %s at height %d, sig %.8s, ts %d; our height %d, sig %.8s, ts %d", peer,
peerHeight, Base58.encode(peersLastBlockSignature), peer.getChainTipData().getLastBlockTimestamp(),
peerHeight, Base58.encode(peersLastBlockSignature), peerChainTipData.getTimestamp(),
ourInitialHeight, Base58.encode(ourLastBlockSignature), ourLatestBlockData.getTimestamp()));
List<BlockSummaryData> peerBlockSummaries = new ArrayList<>();
@@ -637,9 +652,9 @@ public class Synchronizer extends Thread {
return peers;
// Count the number of blocks this peer has beyond our common block
final PeerChainTipData peerChainTipData = peer.getChainTipData();
final int peerHeight = peerChainTipData.getLastHeight();
final byte[] peerLastBlockSignature = peerChainTipData.getLastBlockSignature();
final BlockSummaryData peerChainTipData = peer.getChainTipData();
final int peerHeight = peerChainTipData.getHeight();
final byte[] peerLastBlockSignature = peerChainTipData.getSignature();
final int peerAdditionalBlocksAfterCommonBlock = peerHeight - commonBlockSummary.getHeight();
// Limit the number of blocks we are comparing. FUTURE: we could request more in batches, but there may not be a case when this is needed
int summariesRequired = Math.min(peerAdditionalBlocksAfterCommonBlock, MAXIMUM_REQUEST_SIZE);
@@ -727,8 +742,9 @@ public class Synchronizer extends Thread {
LOGGER.debug(String.format("Listing peers with common block %.8s...", Base58.encode(commonBlockSummary.getSignature())));
for (Peer peer : peersSharingCommonBlock) {
final int peerHeight = peer.getChainTipData().getLastHeight();
final Long peerLastBlockTimestamp = peer.getChainTipData().getLastBlockTimestamp();
BlockSummaryData peerChainTipData = peer.getChainTipData();
final int peerHeight = peerChainTipData.getHeight();
final Long peerLastBlockTimestamp = peerChainTipData.getTimestamp();
final int peerAdditionalBlocksAfterCommonBlock = peerHeight - commonBlockSummary.getHeight();
final CommonBlockData peerCommonBlockData = peer.getCommonBlockData();
@@ -825,7 +841,7 @@ public class Synchronizer extends Thread {
// Calculate the length of the shortest peer chain sharing this common block
int minChainLength = 0;
for (Peer peer : peersSharingCommonBlock) {
final int peerHeight = peer.getChainTipData().getLastHeight();
final int peerHeight = peer.getChainTipData().getHeight();
final int peerAdditionalBlocksAfterCommonBlock = peerHeight - commonBlockSummary.getHeight();
if (peerAdditionalBlocksAfterCommonBlock < minChainLength || minChainLength == 0)
@@ -933,13 +949,13 @@ public class Synchronizer extends Thread {
final BlockData ourLatestBlockData = repository.getBlockRepository().getLastBlock();
final int ourInitialHeight = ourLatestBlockData.getHeight();
PeerChainTipData peerChainTipData = peer.getChainTipData();
int peerHeight = peerChainTipData.getLastHeight();
byte[] peersLastBlockSignature = peerChainTipData.getLastBlockSignature();
BlockSummaryData peerChainTipData = peer.getChainTipData();
int peerHeight = peerChainTipData.getHeight();
byte[] peersLastBlockSignature = peerChainTipData.getSignature();
byte[] ourLastBlockSignature = ourLatestBlockData.getSignature();
String syncString = String.format("Synchronizing with peer %s at height %d, sig %.8s, ts %d; our height %d, sig %.8s, ts %d", peer,
peerHeight, Base58.encode(peersLastBlockSignature), peer.getChainTipData().getLastBlockTimestamp(),
peerHeight, Base58.encode(peersLastBlockSignature), peerChainTipData.getTimestamp(),
ourInitialHeight, Base58.encode(ourLastBlockSignature), ourLatestBlockData.getTimestamp());
LOGGER.info(syncString);
@@ -1246,7 +1262,14 @@ public class Synchronizer extends Thread {
int numberSignaturesRequired = additionalPeerBlocksAfterCommonBlock - peerBlockSignatures.size();
int retryCount = 0;
while (height < peerHeight) {
// Keep fetching blocks from peer until we reach their tip, or reach a count of MAXIMUM_COMMON_DELTA blocks.
// We need to limit the total number, otherwise too much can be loaded into memory, causing an
// OutOfMemoryError. This is common when syncing from 1000+ blocks behind the chain tip, after starting
// from a small fork that didn't become part of the main chain. This causes the entire sync process to
// use syncToPeerChain(), resulting in potentially thousands of blocks being held in memory if the limit
// below isn't applied.
while (height < peerHeight && peerBlocks.size() <= MAXIMUM_COMMON_DELTA) {
if (Controller.isStopping())
return SynchronizationResult.SHUTTING_DOWN;
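The new loop bound caps how many blocks are held in memory before they are processed. A self-contained sketch of that bounded-fetch idea (the fetch callback, generic block type and the cap are stand-ins, not the project's actual code):

import java.util.ArrayList;
import java.util.List;
import java.util.function.IntFunction;

class BoundedBlockFetch {
    /** Fetch blocks above startHeight until reaching peerHeight or holding maxHeld blocks. */
    static <T> List<T> fetchBounded(int startHeight, int peerHeight, int maxHeld, IntFunction<T> fetchBlockAt) {
        List<T> blocks = new ArrayList<>();
        int height = startHeight;
        // Stop early once the in-memory cap is hit, so a very long delta (1000+ blocks)
        // never tries to hold the whole remaining chain in memory at once.
        while (height < peerHeight && blocks.size() < maxHeld) {
            blocks.add(fetchBlockAt.apply(++height));
        }
        return blocks;
    }
}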
@@ -1313,7 +1336,7 @@ public class Synchronizer extends Thread {
// Final check to make sure the peer isn't out of date (except for when we're in recovery mode)
if (!recoveryMode && peer.getChainTipData() != null) {
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
final Long peerLastBlockTimestamp = peer.getChainTipData().getLastBlockTimestamp();
final Long peerLastBlockTimestamp = peer.getChainTipData().getTimestamp();
if (peerLastBlockTimestamp == null || peerLastBlockTimestamp < minLatestBlockTimestamp) {
LOGGER.info(String.format("Peer %s is out of date, so abandoning sync attempt", peer));
return SynchronizationResult.CHAIN_TIP_TOO_OLD;
@@ -1448,6 +1471,12 @@ public class Synchronizer extends Thread {
repository.saveChanges();
synchronized (this.syncLock) {
if (peer.getChainTipData() != null) {
this.blocksRemaining = peer.getChainTipData().getHeight() - newBlock.getBlockData().getHeight();
}
}
Controller.getInstance().onNewBlock(newBlock.getBlockData());
}
@@ -1543,6 +1572,12 @@ public class Synchronizer extends Thread {
repository.saveChanges();
synchronized (this.syncLock) {
if (peer.getChainTipData() != null) {
this.blocksRemaining = peer.getChainTipData().getHeight() - newBlock.getBlockData().getHeight();
}
}
Controller.getInstance().onNewBlock(newBlock.getBlockData());
}
@@ -1553,12 +1588,19 @@ public class Synchronizer extends Thread {
Message getBlockSummariesMessage = new GetBlockSummariesMessage(parentSignature, numberRequested);
Message message = peer.getResponse(getBlockSummariesMessage);
if (message == null || message.getType() != MessageType.BLOCK_SUMMARIES)
if (message == null)
return null;
BlockSummariesMessage blockSummariesMessage = (BlockSummariesMessage) message;
if (message.getType() == MessageType.BLOCK_SUMMARIES) {
BlockSummariesMessage blockSummariesMessage = (BlockSummariesMessage) message;
return blockSummariesMessage.getBlockSummaries();
}
else if (message.getType() == MessageType.BLOCK_SUMMARIES_V2) {
BlockSummariesV2Message blockSummariesMessage = (BlockSummariesV2Message) message;
return blockSummariesMessage.getBlockSummaries();
}
return blockSummariesMessage.getBlockSummaries();
return null;
}
private List<byte[]> getBlockSignatures(Peer peer, byte[] parentSignature, int numberRequested) throws InterruptedException {
@@ -1577,8 +1619,20 @@ public class Synchronizer extends Thread {
Message getBlockMessage = new GetBlockMessage(signature);
Message message = peer.getResponse(getBlockMessage);
if (message == null)
if (message == null) {
peer.getPeerData().incrementFailedSyncCount();
if (peer.getPeerData().getFailedSyncCount() >= MAX_CONSECUTIVE_FAILED_SYNC_ATTEMPTS) {
// Several failed attempts, so mark peer as misbehaved
LOGGER.info("Marking peer {} as misbehaved due to {} failed sync attempts", peer, peer.getPeerData().getFailedSyncCount());
Network.getInstance().peerMisbehaved(peer);
}
return null;
}
// Reset failed sync count now that we have a block response
// FUTURE: we could move this to the end of the sync process, but to reduce risk this can be done
// at a later stage. For now we are only defending against serialization errors or no responses.
peer.getPeerData().setFailedSyncCount(0);
switch (message.getType()) {
case BLOCK: {
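A minimal stand-alone sketch of the consecutive-failure tracking added here (the threshold mirrors MAX_CONSECUTIVE_FAILED_SYNC_ATTEMPTS; class and method names are illustrative):

class SyncFailureTracker {
    private static final int MAX_CONSECUTIVE_FAILURES = 3;
    private int failedSyncCount = 0;

    /** Record a missing/invalid block response; returns true when the peer should be marked misbehaved. */
    boolean recordFailure() {
        failedSyncCount++;
        return failedSyncCount >= MAX_CONSECUTIVE_FAILURES;
    }

    /** Record a good block response, so only consecutive failures count towards the threshold. */
    void recordSuccess() {
        failedSyncCount = 0;
    }
}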


@@ -595,9 +595,10 @@ public class ArbitraryDataFileManager extends Thread {
// Send valid, yet unexpected message type in response, so peer's synchronizer doesn't have to wait for timeout
LOGGER.debug(String.format("Sending 'file unknown' response to peer %s for GET_FILE request for unknown file %s", peer, arbitraryDataFile));
// We'll send empty block summaries message as it's very short
// TODO: use a different message type here
Message fileUnknownMessage = new BlockSummariesMessage(Collections.emptyList());
// Send generic 'unknown' message as it's very short
Message fileUnknownMessage = peer.getPeersVersion() >= GenericUnknownMessage.MINIMUM_PEER_VERSION
? new GenericUnknownMessage()
: new BlockSummariesMessage(Collections.emptyList());
fileUnknownMessage.setId(message.getId());
if (!peer.sendMessage(fileUnknownMessage)) {
LOGGER.debug("Couldn't sent file-unknown response");


@@ -16,7 +16,7 @@ public class BlockArchiver implements Runnable {
private static final Logger LOGGER = LogManager.getLogger(BlockArchiver.class);
private static final long INITIAL_SLEEP_PERIOD = 0L; // TODO: 5 * 60 * 1000L + 1234L; // ms
private static final long INITIAL_SLEEP_PERIOD = 5 * 60 * 1000L + 1234L; // ms
public void run() {
Thread.currentThread().setName("Block archiver");


@@ -19,6 +19,7 @@ import org.qortal.data.transaction.MessageTransactionData;
import org.qortal.group.Group;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.transaction.DeployAtTransaction;
import org.qortal.transaction.MessageTransaction;
import org.qortal.transaction.Transaction.ValidationResult;
@@ -317,20 +318,27 @@ public class LitecoinACCTv3TradeBot implements AcctTradeBot {
boolean isMessageAlreadySent = repository.getMessageRepository().exists(tradeBotData.getTradeNativePublicKey(), messageRecipient, messageData);
if (!isMessageAlreadySent) {
PrivateKeyAccount sender = new PrivateKeyAccount(repository, tradeBotData.getTradePrivateKey());
MessageTransaction messageTransaction = MessageTransaction.build(repository, sender, Group.NO_GROUP, messageRecipient, messageData, false, false);
// Do this in a new thread so caller doesn't have to wait for computeNonce()
// In the unlikely event that the transaction doesn't validate then the buy won't happen and eventually Alice's AT will be refunded
new Thread(() -> {
try (final Repository threadsRepository = RepositoryManager.getRepository()) {
PrivateKeyAccount sender = new PrivateKeyAccount(threadsRepository, tradeBotData.getTradePrivateKey());
MessageTransaction messageTransaction = MessageTransaction.build(threadsRepository, sender, Group.NO_GROUP, messageRecipient, messageData, false, false);
messageTransaction.computeNonce();
messageTransaction.sign(sender);
messageTransaction.computeNonce();
messageTransaction.sign(sender);
// reset repository state to prevent deadlock
repository.discardChanges();
ValidationResult result = messageTransaction.importAsUnconfirmed();
// reset repository state to prevent deadlock
threadsRepository.discardChanges();
ValidationResult result = messageTransaction.importAsUnconfirmed();
if (result != ValidationResult.OK) {
LOGGER.warn(() -> String.format("Unable to send MESSAGE to Bob's trade-bot %s: %s", messageRecipient, result.name()));
return ResponseResult.NETWORK_ISSUE;
}
if (result != ValidationResult.OK) {
LOGGER.warn(() -> String.format("Unable to send MESSAGE to Bob's trade-bot %s: %s", messageRecipient, result.name()));
}
} catch (DataException e) {
LOGGER.warn(() -> String.format("Unable to send MESSAGE to Bob's trade-bot %s: %s", messageRecipient, e.getMessage()));
}
}, "TradeBot response").start();
}
TradeBot.updateTradeBotState(repository, tradeBotData, () -> String.format("Funding P2SH-A %s. Messaged Bob. Waiting for AT-lock", p2shAddress));
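As described in the commit message, the expensive computeNonce() call now runs on its own thread with its own repository session so startResponse() can return sooner. A hypothetical alternative shape for the same hand-off, using a small single-thread executor instead of spawning a raw Thread per response (this is not what the diff does, just a bounded variant of the same idea):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class TradeBotResponder {
    // One worker is enough: nonce computation is memory/CPU heavy and buy responses are infrequent.
    private static final ExecutorService RESPONSE_EXECUTOR =
            Executors.newSingleThreadExecutor(runnable -> new Thread(runnable, "TradeBot response"));

    /** Submit the build/sign/import work; the caller returns without waiting for the nonce. */
    static void submitResponse(Runnable buildSignAndImportMessage) {
        // If the task fails, the buy simply doesn't complete and Alice waits for the P2SH refund,
        // matching the caveat in the original commit message.
        RESPONSE_EXECUTOR.submit(buildSignAndImportMessage);
    }
}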


@@ -468,9 +468,6 @@ public class TradeBot implements Listener {
List<TradePresenceData> safeTradePresences = List.copyOf(this.safeAllTradePresencesByPubkey.values());
if (safeTradePresences.isEmpty())
return;
LOGGER.debug("Broadcasting all {} known trade presences. Next broadcast timestamp: {}",
safeTradePresences.size(), nextTradePresenceBroadcastTimestamp
);
@@ -637,7 +634,7 @@ public class TradeBot implements Listener {
}
if (newCount > 0) {
LOGGER.debug("New trade presences: {}", newCount);
LOGGER.debug("New trade presences: {}, all trade presences: {}", newCount, allTradePresencesByPubkey.size());
rebuildSafeAllTradePresences();
}
}


@@ -99,6 +99,10 @@ public class MemoryPoW {
}
public static boolean verify2(byte[] data, int workBufferLength, long difficulty, int nonce) {
return verify2(data, null, workBufferLength, difficulty, nonce);
}
public static boolean verify2(byte[] data, long[] workBuffer, int workBufferLength, long difficulty, int nonce) {
// Hash data with SHA256
byte[] hash = Crypto.digest(data);
@@ -111,7 +115,10 @@ public class MemoryPoW {
byteBuffer = null;
int longBufferLength = workBufferLength / 8;
long[] workBuffer = new long[longBufferLength];
if (workBuffer == null)
workBuffer = new long[longBufferLength];
long[] state = new long[4];
long seed = 8682522807148012L;
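The overload pair added above lets callers either supply a reusable scratch buffer or have one allocated on demand. A generic sketch of that "optional work buffer" pattern (class and method names, and the placeholder result, are illustrative, not MemoryPoW's internals):

class OptionalWorkBufferExample {
    // Convenience overload: allocate a fresh buffer when the caller doesn't provide one.
    static boolean verify(byte[] data, int workBufferLengthBytes, long difficulty, int nonce) {
        return verify(data, null, workBufferLengthBytes, difficulty, nonce);
    }

    static boolean verify(byte[] data, long[] workBuffer, int workBufferLengthBytes, long difficulty, int nonce) {
        int longs = workBufferLengthBytes / 8;
        if (workBuffer == null)
            workBuffer = new long[longs]; // per-call allocation, as before this change
        // ... the real method would run the memory-hard check using workBuffer as scratch space ...
        return workBuffer.length >= longs && nonce >= 0 && difficulty > 0; // placeholder result
    }
}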


@@ -24,7 +24,10 @@ public class ArbitraryResourceMetadata {
this.description = description;
this.tags = tags;
this.category = category;
this.categoryName = category.getName();
if (category != null) {
this.categoryName = category.getName();
}
}
public static ArbitraryResourceMetadata fromTransactionMetadata(ArbitraryDataTransactionMetadata transactionMetadata) {


@@ -11,11 +11,12 @@ public class BlockSummaryData {
private int height;
private byte[] signature;
private byte[] minterPublicKey;
private int onlineAccountsCount;
// Optional, set during construction
private Integer onlineAccountsCount;
private Long timestamp;
private Integer transactionCount;
private byte[] reference;
// Optional, set after construction
private Integer minterLevel;
@@ -25,6 +26,15 @@ public class BlockSummaryData {
protected BlockSummaryData() {
}
/** Constructor typically populated with fields from HeightV2Message */
public BlockSummaryData(int height, byte[] signature, byte[] minterPublicKey, long timestamp) {
this.height = height;
this.signature = signature;
this.minterPublicKey = minterPublicKey;
this.timestamp = timestamp;
}
/** Constructor typically populated with fields from BlockSummariesMessage */
public BlockSummaryData(int height, byte[] signature, byte[] minterPublicKey, int onlineAccountsCount) {
this.height = height;
this.signature = signature;
@@ -32,13 +42,16 @@ public class BlockSummaryData {
this.onlineAccountsCount = onlineAccountsCount;
}
public BlockSummaryData(int height, byte[] signature, byte[] minterPublicKey, int onlineAccountsCount, long timestamp, int transactionCount) {
/** Constructor typically populated with fields from BlockSummariesV2Message */
public BlockSummaryData(int height, byte[] signature, byte[] minterPublicKey, Integer onlineAccountsCount,
Long timestamp, Integer transactionCount, byte[] reference) {
this.height = height;
this.signature = signature;
this.minterPublicKey = minterPublicKey;
this.onlineAccountsCount = onlineAccountsCount;
this.timestamp = timestamp;
this.transactionCount = transactionCount;
this.reference = reference;
}
public BlockSummaryData(BlockData blockData) {
@@ -49,6 +62,7 @@ public class BlockSummaryData {
this.timestamp = blockData.getTimestamp();
this.transactionCount = blockData.getTransactionCount();
this.reference = blockData.getReference();
}
// Getters / setters
@@ -65,7 +79,7 @@ public class BlockSummaryData {
return this.minterPublicKey;
}
public int getOnlineAccountsCount() {
public Integer getOnlineAccountsCount() {
return this.onlineAccountsCount;
}
@@ -77,6 +91,10 @@ public class BlockSummaryData {
return this.transactionCount;
}
public byte[] getReference() {
return this.reference;
}
public Integer getMinterLevel() {
return this.minterLevel;
}
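A short usage sketch of the new constructors and getters, assuming a Peer, a HeightV2Message and a BlockData instance are in scope with the accessors used elsewhere in this diff:

static void chainTipExamples(Peer peer, HeightV2Message heightV2Message, BlockData blockData) {
    // Lightweight summary built from a peer's HEIGHT_V2 broadcast; the optional fields stay null
    BlockSummaryData fromHeight = new BlockSummaryData(
            heightV2Message.getHeight(),
            heightV2Message.getSignature(),
            heightV2Message.getMinterPublicKey(),
            heightV2Message.getTimestamp());
    peer.setChainTipData(fromHeight);

    // Full summary built from our own block data; also carries timestamp, tx count and reference
    BlockSummaryData fromBlock = new BlockSummaryData(blockData);

    Integer fromMessageCount = fromHeight.getOnlineAccountsCount(); // null for message-derived summaries
    Integer fromBlockCount = fromBlock.getOnlineAccountsCount();    // typically populated from BlockData
}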


@@ -1,7 +1,5 @@
package org.qortal.data.block;
import org.qortal.data.network.PeerChainTipData;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.math.BigInteger;
@@ -14,14 +12,14 @@ public class CommonBlockData {
private BlockSummaryData commonBlockSummary = null;
private List<BlockSummaryData> blockSummariesAfterCommonBlock = null;
private BigInteger chainWeight = null;
private PeerChainTipData chainTipData = null;
private BlockSummaryData chainTipData = null;
// Constructors
protected CommonBlockData() {
}
public CommonBlockData(BlockSummaryData commonBlockSummary, PeerChainTipData chainTipData) {
public CommonBlockData(BlockSummaryData commonBlockSummary, BlockSummaryData chainTipData) {
this.commonBlockSummary = commonBlockSummary;
this.chainTipData = chainTipData;
}
@@ -49,7 +47,7 @@ public class CommonBlockData {
this.chainWeight = chainWeight;
}
public PeerChainTipData getChainTipData() {
public BlockSummaryData getChainTipData() {
return this.chainTipData;
}


@@ -1,37 +0,0 @@
package org.qortal.data.network;
public class PeerChainTipData {
/** Latest block height as reported by peer. */
private Integer lastHeight;
/** Latest block signature as reported by peer. */
private byte[] lastBlockSignature;
/** Latest block timestamp as reported by peer. */
private Long lastBlockTimestamp;
/** Latest block minter public key as reported by peer. */
private byte[] lastBlockMinter;
public PeerChainTipData(Integer lastHeight, byte[] lastBlockSignature, Long lastBlockTimestamp, byte[] lastBlockMinter) {
this.lastHeight = lastHeight;
this.lastBlockSignature = lastBlockSignature;
this.lastBlockTimestamp = lastBlockTimestamp;
this.lastBlockMinter = lastBlockMinter;
}
public Integer getLastHeight() {
return this.lastHeight;
}
public byte[] getLastBlockSignature() {
return this.lastBlockSignature;
}
public Long getLastBlockTimestamp() {
return this.lastBlockTimestamp;
}
public byte[] getLastBlockMinter() {
return this.lastBlockMinter;
}
}


@@ -28,6 +28,9 @@ public class PeerData {
private Long addedWhen;
private String addedBy;
/** The number of consecutive times we failed to sync with this peer */
private int failedSyncCount = 0;
// Constructors
// necessary for JAXB serialization
@@ -92,6 +95,18 @@ public class PeerData {
return this.addedBy;
}
public int getFailedSyncCount() {
return this.failedSyncCount;
}
public void setFailedSyncCount(int failedSyncCount) {
this.failedSyncCount = failedSyncCount;
}
public void incrementFailedSyncCount() {
this.failedSyncCount++;
}
// Pretty peerAddress getter for JAXB
@XmlElement(name = "address")
protected String getPrettyAddress() {

View File

@@ -128,6 +128,10 @@ public abstract class TransactionData {
return this.txGroupId;
}
public void setTxGroupId(int txGroupId) {
this.txGroupId = txGroupId;
}
public byte[] getReference() {
return this.reference;
}

View File

@@ -80,6 +80,9 @@ public class Group {
// Useful constants
public static final int NO_GROUP = 0;
// Null owner address corresponds with public key "11111111111111111111111111111111"
public static String NULL_OWNER_ADDRESS = "QdSnUy6sUiEnaN87dWmE92g1uQjrvPgrWG";
public static final int MIN_NAME_SIZE = 3;
public static final int MAX_NAME_SIZE = 32;
public static final int MAX_DESCRIPTION_SIZE = 128;
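The constant can be sanity-checked against the comment above it: a Base58 string of 32 '1' characters decodes to 32 zero bytes, and converting that public key to an address should yield NULL_OWNER_ADDRESS. A small sketch, assuming Crypto.toAddress(byte[]) and Base58.decode(String) behave as elsewhere in the codebase:

byte[] nullOwnerPublicKey = Base58.decode("11111111111111111111111111111111"); // 32 zero bytes
String derivedAddress = Crypto.toAddress(nullOwnerPublicKey);
// Expected to equal Group.NULL_OWNER_ADDRESS ("QdSnUy6sUiEnaN87dWmE92g1uQjrvPgrWG")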

View File

@@ -11,6 +11,7 @@ import org.qortal.controller.arbitrary.ArbitraryDataFileListManager;
import org.qortal.controller.arbitrary.ArbitraryDataManager;
import org.qortal.crypto.Crypto;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.network.PeerData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.network.message.*;
@@ -90,6 +91,8 @@ public class Network {
private static final long DISCONNECTION_CHECK_INTERVAL = 10 * 1000L; // milliseconds
private static final int BROADCAST_CHAIN_TIP_DEPTH = 7; // Just enough to fill a SINGLE TCP packet (~1440 bytes)
// Generate our node keys / ID
private final Ed25519PrivateKeyParameters edPrivateKeyParams = new Ed25519PrivateKeyParameters(new SecureRandom());
private final Ed25519PublicKeyParameters edPublicKeyParams = edPrivateKeyParams.generatePublicKey();
@@ -1087,10 +1090,16 @@ public class Network {
if (peer.isOutbound()) {
if (!Settings.getInstance().isLite()) {
// Send our height
Message heightMessage = buildHeightMessage(peer, Controller.getInstance().getChainTip());
if (!peer.sendMessage(heightMessage)) {
peer.disconnect("failed to send height/info");
// Send our height / chain tip info
Message message = this.buildHeightOrChainTipInfo(peer);
if (message == null) {
peer.disconnect("Couldn't build our chain tip info");
return;
}
if (!peer.sendMessage(message)) {
peer.disconnect("failed to send height / chain tip info");
return;
}
}
@@ -1164,10 +1173,47 @@ public class Network {
return new PeersV2Message(peerAddresses);
}
public Message buildHeightMessage(Peer peer, BlockData blockData) {
// HEIGHT_V2 contains way more useful info
return new HeightV2Message(blockData.getHeight(), blockData.getSignature(),
blockData.getTimestamp(), blockData.getMinterPublicKey());
/** Builds either (legacy) HeightV2Message or (newer) BlockSummariesV2Message, depending on peer version.
*
* @return Message, or null if DataException was thrown.
*/
public Message buildHeightOrChainTipInfo(Peer peer) {
if (peer.getPeersVersion() >= BlockSummariesV2Message.MINIMUM_PEER_VERSION) {
int latestHeight = Controller.getInstance().getChainHeight();
try (final Repository repository = RepositoryManager.getRepository()) {
List<BlockSummaryData> latestBlockSummaries = repository.getBlockRepository().getBlockSummaries(latestHeight - BROADCAST_CHAIN_TIP_DEPTH, latestHeight);
return new BlockSummariesV2Message(latestBlockSummaries);
} catch (DataException e) {
return null;
}
} else {
// For older peers
BlockData latestBlockData = Controller.getInstance().getChainTip();
return new HeightV2Message(latestBlockData.getHeight(), latestBlockData.getSignature(),
latestBlockData.getTimestamp(), latestBlockData.getMinterPublicKey());
}
}
public void broadcastOurChain() {
BlockData latestBlockData = Controller.getInstance().getChainTip();
int latestHeight = latestBlockData.getHeight();
try (final Repository repository = RepositoryManager.getRepository()) {
List<BlockSummaryData> latestBlockSummaries = repository.getBlockRepository().getBlockSummaries(latestHeight - BROADCAST_CHAIN_TIP_DEPTH, latestHeight);
Message latestBlockSummariesMessage = new BlockSummariesV2Message(latestBlockSummaries);
// For older peers
Message heightMessage = new HeightV2Message(latestBlockData.getHeight(), latestBlockData.getSignature(),
latestBlockData.getTimestamp(), latestBlockData.getMinterPublicKey());
Network.getInstance().broadcast(broadcastPeer -> broadcastPeer.getPeersVersion() >= BlockSummariesV2Message.MINIMUM_PEER_VERSION
? latestBlockSummariesMessage
: heightMessage
);
} catch (DataException e) {
LOGGER.warn("Couldn't broadcast our chain tip info", e);
}
}
public Message buildNewTransactionMessage(Peer peer, TransactionData transactionData) {

View File

@@ -6,8 +6,8 @@ import com.google.common.net.InetAddresses;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.block.CommonBlockData;
import org.qortal.data.network.PeerChainTipData;
import org.qortal.data.network.PeerData;
import org.qortal.network.message.ChallengeMessage;
import org.qortal.network.message.Message;
@@ -148,7 +148,7 @@ public class Peer {
/**
* Latest block info as reported by peer.
*/
private PeerChainTipData peersChainTipData;
private List<BlockSummaryData> peersChainTipData = Collections.emptyList();
/**
* Our common block with this peer
@@ -353,28 +353,34 @@ public class Peer {
}
}
public PeerChainTipData getChainTipData() {
synchronized (this.peerInfoLock) {
return this.peersChainTipData;
}
public BlockSummaryData getChainTipData() {
List<BlockSummaryData> chainTipSummaries = this.peersChainTipData;
if (chainTipSummaries.isEmpty())
return null;
// Return last entry, which should have greatest height
return chainTipSummaries.get(chainTipSummaries.size() - 1);
}
public void setChainTipData(PeerChainTipData chainTipData) {
synchronized (this.peerInfoLock) {
this.peersChainTipData = chainTipData;
}
public void setChainTipData(BlockSummaryData chainTipData) {
this.peersChainTipData = Collections.singletonList(chainTipData);
}
public List<BlockSummaryData> getChainTipSummaries() {
return this.peersChainTipData;
}
public void setChainTipSummaries(List<BlockSummaryData> chainTipSummaries) {
this.peersChainTipData = List.copyOf(chainTipSummaries);
}
public CommonBlockData getCommonBlockData() {
synchronized (this.peerInfoLock) {
return this.commonBlockData;
}
return this.commonBlockData;
}
public void setCommonBlockData(CommonBlockData commonBlockData) {
synchronized (this.peerInfoLock) {
this.commonBlockData = commonBlockData;
}
this.commonBlockData = commonBlockData;
}
public boolean isSyncInProgress() {
@@ -904,20 +910,22 @@ public class Peer {
// Common block data
public boolean canUseCachedCommonBlockData() {
PeerChainTipData peerChainTipData = this.getChainTipData();
CommonBlockData commonBlockData = this.getCommonBlockData();
BlockSummaryData peerChainTipData = this.getChainTipData();
if (peerChainTipData == null || peerChainTipData.getSignature() == null)
return false;
if (peerChainTipData != null && commonBlockData != null) {
PeerChainTipData commonBlockChainTipData = commonBlockData.getChainTipData();
if (peerChainTipData.getLastBlockSignature() != null && commonBlockChainTipData != null
&& commonBlockChainTipData.getLastBlockSignature() != null) {
if (Arrays.equals(peerChainTipData.getLastBlockSignature(),
commonBlockChainTipData.getLastBlockSignature())) {
return true;
}
}
}
return false;
CommonBlockData commonBlockData = this.getCommonBlockData();
if (commonBlockData == null)
return false;
BlockSummaryData commonBlockChainTipData = commonBlockData.getChainTipData();
if (commonBlockChainTipData == null || commonBlockChainTipData.getSignature() == null)
return false;
if (!Arrays.equals(peerChainTipData.getSignature(), commonBlockChainTipData.getSignature()))
return false;
return true;
}

View File

@@ -0,0 +1,109 @@
package org.qortal.network.message;
import com.google.common.primitives.Ints;
import com.google.common.primitives.Longs;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.transform.Transformer;
import org.qortal.transform.block.BlockTransformer;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;
public class BlockSummariesV2Message extends Message {
public static final long MINIMUM_PEER_VERSION = 0x0300060001L;
private static final int BLOCK_SUMMARY_V2_LENGTH = BlockTransformer.BLOCK_SIGNATURE_LENGTH /* block signature */
+ Transformer.PUBLIC_KEY_LENGTH /* minter public key */
+ Transformer.INT_LENGTH /* online accounts count */
+ Transformer.LONG_LENGTH /* block timestamp */
+ Transformer.INT_LENGTH /* transactions count */
+ BlockTransformer.BLOCK_SIGNATURE_LENGTH; /* block reference */
private List<BlockSummaryData> blockSummaries;
public BlockSummariesV2Message(List<BlockSummaryData> blockSummaries) {
super(MessageType.BLOCK_SUMMARIES_V2);
// Shortcut for when there are no summaries
if (blockSummaries.isEmpty()) {
this.dataBytes = Message.EMPTY_DATA_BYTES;
return;
}
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
try {
// First summary's height
bytes.write(Ints.toByteArray(blockSummaries.get(0).getHeight()));
for (BlockSummaryData blockSummary : blockSummaries) {
bytes.write(blockSummary.getSignature());
bytes.write(blockSummary.getMinterPublicKey());
bytes.write(Ints.toByteArray(blockSummary.getOnlineAccountsCount()));
bytes.write(Longs.toByteArray(blockSummary.getTimestamp()));
bytes.write(Ints.toByteArray(blockSummary.getTransactionCount()));
bytes.write(blockSummary.getReference());
}
} catch (IOException e) {
throw new AssertionError("IOException shouldn't occur with ByteArrayOutputStream");
}
this.dataBytes = bytes.toByteArray();
this.checksumBytes = Message.generateChecksum(this.dataBytes);
}
private BlockSummariesV2Message(int id, List<BlockSummaryData> blockSummaries) {
super(id, MessageType.BLOCK_SUMMARIES_V2);
this.blockSummaries = blockSummaries;
}
public List<BlockSummaryData> getBlockSummaries() {
return this.blockSummaries;
}
public static Message fromByteBuffer(int id, ByteBuffer bytes) {
List<BlockSummaryData> blockSummaries = new ArrayList<>();
// If there are no bytes remaining then we can treat this as an empty array of summaries
if (bytes.remaining() == 0)
return new BlockSummariesV2Message(id, blockSummaries);
int height = bytes.getInt();
// Expecting bytes remaining to be exact multiples of BLOCK_SUMMARY_V2_LENGTH
if (bytes.remaining() % BLOCK_SUMMARY_V2_LENGTH != 0)
throw new BufferUnderflowException();
while (bytes.hasRemaining()) {
byte[] signature = new byte[BlockTransformer.BLOCK_SIGNATURE_LENGTH];
bytes.get(signature);
byte[] minterPublicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
bytes.get(minterPublicKey);
int onlineAccountsCount = bytes.getInt();
long timestamp = bytes.getLong();
int transactionsCount = bytes.getInt();
byte[] reference = new byte[BlockTransformer.BLOCK_SIGNATURE_LENGTH];
bytes.get(reference);
BlockSummaryData blockSummary = new BlockSummaryData(height, signature, minterPublicKey,
onlineAccountsCount, timestamp, transactionsCount, reference);
blockSummaries.add(blockSummary);
height++;
}
return new BlockSummariesV2Message(id, blockSummaries);
}
}
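For clarity, the V2 payload sends the first summary's height once, followed by fixed-size records; the parser re-derives each subsequent height by incrementing. (MINIMUM_PEER_VERSION = 0x0300060001L appears to pack version 3.6.1 as major<<32 | minor<<16 | patch, which is why older peers still receive HEIGHT_V2.) A self-contained sketch of that layout using plain ByteBuffer; the 64-byte signature and 32-byte public key widths are assumptions for illustration, the real widths come from BlockTransformer and Transformer:

import java.nio.ByteBuffer;

public class BlockSummariesV2LayoutSketch {
    static final int SIG_LEN = 64;  // assumed signature width, illustration only
    static final int PUB_LEN = 32;  // assumed public key width, illustration only

    public static void main(String[] args) {
        int firstHeight = 1000;
        int count = 3;
        int recordLen = SIG_LEN + PUB_LEN + 4 + 8 + 4 + SIG_LEN;

        ByteBuffer out = ByteBuffer.allocate(4 + count * recordLen);
        out.putInt(firstHeight);                            // height sent once, for the first summary only
        for (int i = 0; i < count; i++) {
            out.put(new byte[SIG_LEN]);                     // block signature
            out.put(new byte[PUB_LEN]);                     // minter public key
            out.putInt(100);                                // online accounts count
            out.putLong(1_660_000_000_000L + i * 60_000L);  // block timestamp
            out.putInt(5);                                  // transaction count
            out.put(new byte[SIG_LEN]);                     // block reference
        }

        // Parsing mirrors fromByteBuffer(): read the first height, then fixed-size records,
        // incrementing the height for each successive summary.
        ByteBuffer in = ByteBuffer.wrap(out.array());
        int height = in.getInt();
        while (in.hasRemaining()) {
            in.position(in.position() + SIG_LEN + PUB_LEN); // skip signature + minter key
            int onlineAccountsCount = in.getInt();
            long timestamp = in.getLong();
            int transactionCount = in.getInt();
            in.position(in.position() + SIG_LEN);           // skip reference
            System.out.printf("height %d: %d online accounts, %d txs at %d%n",
                    height, onlineAccountsCount, transactionCount, timestamp);
            height++;
        }
    }
}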

View File

@@ -0,0 +1,23 @@
package org.qortal.network.message;
import java.nio.ByteBuffer;
public class GenericUnknownMessage extends Message {
public static final long MINIMUM_PEER_VERSION = 0x0300060001L;
public GenericUnknownMessage() {
super(MessageType.GENERIC_UNKNOWN);
this.dataBytes = EMPTY_DATA_BYTES;
}
private GenericUnknownMessage(int id) {
super(id, MessageType.GENERIC_UNKNOWN);
}
public static Message fromByteBuffer(int id, ByteBuffer bytes) {
return new GenericUnknownMessage(id);
}
}

View File

@@ -21,6 +21,7 @@ public enum MessageType {
HEIGHT_V2(10, HeightV2Message::fromByteBuffer),
PING(11, PingMessage::fromByteBuffer),
PONG(12, PongMessage::fromByteBuffer),
GENERIC_UNKNOWN(13, GenericUnknownMessage::fromByteBuffer),
// Requesting data
PEERS_V2(20, PeersV2Message::fromByteBuffer),
@@ -41,6 +42,7 @@ public enum MessageType {
BLOCK_SUMMARIES(70, BlockSummariesMessage::fromByteBuffer),
GET_BLOCK_SUMMARIES(71, GetBlockSummariesMessage::fromByteBuffer),
BLOCK_SUMMARIES_V2(72, BlockSummariesV2Message::fromByteBuffer),
ONLINE_ACCOUNTS(80, OnlineAccountsMessage::fromByteBuffer),
GET_ONLINE_ACCOUNTS(81, GetOnlineAccountsMessage::fromByteBuffer),

View File

@@ -14,7 +14,7 @@ public interface ChatRepository {
* Expects EITHER non-null txGroupID OR non-null sender and recipient addresses.
*/
public List<ChatMessage> getMessagesMatchingCriteria(Long before, Long after,
Integer txGroupId, List<String> involving,
Integer txGroupId, byte[] reference, List<String> involving,
Integer limit, Integer offset, Boolean reverse) throws DataException;
public ChatMessage toChatMessage(ChatTransactionData chatTransactionData) throws DataException;
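The extra reference parameter lets callers fetch the messages that reply to (or otherwise reference) a specific chat transaction. A minimal sketch of a call site, assuming the repository is obtained via RepositoryManager as elsewhere in this diff and that Repository exposes getChatRepository():

List<ChatMessage> fetchReplies(byte[] referencedSignature) throws DataException {
    try (final Repository repository = RepositoryManager.getRepository()) {
        return repository.getChatRepository().getMessagesMatchingCriteria(
                null, null,             // no before/after timestamp filters
                0,                      // txGroupId: group chat (the interface expects either this or "involving")
                referencedSignature,    // only messages whose reference matches this transaction signature
                null,                   // "involving" addresses not used here
                100, 0, true);          // limit, offset, reverse (newest first)
    }
}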

View File

@@ -143,13 +143,17 @@ public class HSQLDBBlockArchiveRepository implements BlockArchiveRepository {
byte[] blockMinterPublicKey = resultSet.getBytes(3);
// Fetch additional info from the archive itself
int onlineAccountsCount = 0;
Integer onlineAccountsCount = null;
Long timestamp = null;
Integer transactionCount = null;
byte[] reference = null;
BlockData blockData = this.fromSignature(signature);
if (blockData != null) {
onlineAccountsCount = blockData.getOnlineAccountsCount();
}
BlockSummaryData blockSummary = new BlockSummaryData(height, signature, blockMinterPublicKey, onlineAccountsCount);
BlockSummaryData blockSummary = new BlockSummaryData(height, signature, blockMinterPublicKey, onlineAccountsCount, timestamp, transactionCount, reference);
blockSummaries.add(blockSummary);
} while (resultSet.next());

View File

@@ -297,7 +297,7 @@ public class HSQLDBBlockRepository implements BlockRepository {
@Override
public List<BlockSummaryData> getBlockSummariesBySigner(byte[] signerPublicKey, Integer limit, Integer offset, Boolean reverse) throws DataException {
StringBuilder sql = new StringBuilder(512);
sql.append("SELECT signature, height, Blocks.minter, online_accounts_count FROM ");
sql.append("SELECT signature, height, Blocks.minter, online_accounts_count, minted_when, transaction_count, Blocks.reference FROM ");
// List of minter account's public key and reward-share public keys with minter's public key
sql.append("(SELECT * FROM (VALUES (CAST(? AS QortalPublicKey))) UNION (SELECT reward_share_public_key FROM RewardShares WHERE minter_public_key = ?)) AS PublicKeys (public_key) ");
@@ -322,8 +322,12 @@ public class HSQLDBBlockRepository implements BlockRepository {
int height = resultSet.getInt(2);
byte[] blockMinterPublicKey = resultSet.getBytes(3);
int onlineAccountsCount = resultSet.getInt(4);
long timestamp = resultSet.getLong(5);
int transactionCount = resultSet.getInt(6);
byte[] reference = resultSet.getBytes(7);
BlockSummaryData blockSummary = new BlockSummaryData(height, signature, blockMinterPublicKey, onlineAccountsCount);
BlockSummaryData blockSummary = new BlockSummaryData(height, signature, blockMinterPublicKey, onlineAccountsCount,
timestamp, transactionCount, reference);
blockSummaries.add(blockSummary);
} while (resultSet.next());
@@ -355,7 +359,7 @@ public class HSQLDBBlockRepository implements BlockRepository {
@Override
public List<BlockSummaryData> getBlockSummaries(int firstBlockHeight, int lastBlockHeight) throws DataException {
String sql = "SELECT signature, height, minter, online_accounts_count, minted_when, transaction_count "
String sql = "SELECT signature, height, minter, online_accounts_count, minted_when, transaction_count, reference "
+ "FROM Blocks WHERE height BETWEEN ? AND ?";
List<BlockSummaryData> blockSummaries = new ArrayList<>();
@@ -371,9 +375,10 @@ public class HSQLDBBlockRepository implements BlockRepository {
int onlineAccountsCount = resultSet.getInt(4);
long timestamp = resultSet.getLong(5);
int transactionCount = resultSet.getInt(6);
byte[] reference = resultSet.getBytes(7);
BlockSummaryData blockSummary = new BlockSummaryData(height, signature, minterPublicKey, onlineAccountsCount,
timestamp, transactionCount);
timestamp, transactionCount, reference);
blockSummaries.add(blockSummary);
} while (resultSet.next());

View File

@@ -23,7 +23,7 @@ public class HSQLDBChatRepository implements ChatRepository {
}
@Override
public List<ChatMessage> getMessagesMatchingCriteria(Long before, Long after, Integer txGroupId,
public List<ChatMessage> getMessagesMatchingCriteria(Long before, Long after, Integer txGroupId, byte[] referenceBytes,
List<String> involving, Integer limit, Integer offset, Boolean reverse)
throws DataException {
// Check args meet expectations
@@ -57,6 +57,11 @@ public class HSQLDBChatRepository implements ChatRepository {
bindParams.add(after);
}
if (referenceBytes != null) {
whereClauses.add("reference = ?");
bindParams.add(referenceBytes);
}
if (txGroupId != null) {
whereClauses.add("tx_group_id = " + txGroupId); // int safe to use literally
whereClauses.add("recipient IS NULL");

View File

@@ -184,6 +184,8 @@ public class Settings {
// Peer-to-peer related
private boolean isTestNet = false;
/** Single node testnet mode */
private boolean singleNodeTestnet = false;
/** Port number for inbound peer-to-peer connections. */
private Integer listenPort;
/** Whether to attempt to open the listen port via UPnP */
@@ -203,8 +205,11 @@ public class Settings {
/** Maximum number of retry attempts if a peer fails to respond with the requested data */
private int maxRetries = 2;
/** Milliseconds of no activity before recovery mode begins */
public long recoveryModeTimeout = 10 * 60 * 1000L;
/** Minimum peer version number required in order to sync with them */
private String minPeerVersion = "3.3.7";
private String minPeerVersion = "3.6.3";
/** Whether to allow connections with peers below minPeerVersion
* If true, we won't sync with them but they can still sync with us, and will show in the peers list
* If false, sync will be blocked both ways, and they will not appear in the peers list */
@@ -290,10 +295,6 @@ public class Settings {
/** Additional offset added to values returned by NTP.getTime() */
private Long testNtpOffset = null;
// Online accounts
/** Whether to opt-in to mempow computations for online accounts, ahead of general release */
private boolean onlineAccountsMemPoWEnabled = false;
/* Foreign chains */
@@ -490,7 +491,7 @@ public class Settings {
private void validate() {
// Validation goes here
if (this.minBlockchainPeers < 1)
if (this.minBlockchainPeers < 1 && !singleNodeTestnet)
throwValidationError("minBlockchainPeers must be at least 1");
if (this.apiKey != null && this.apiKey.trim().length() < 8)
@@ -647,6 +648,10 @@ public class Settings {
return this.isTestNet;
}
public boolean isSingleNodeTestnet() {
return this.singleNodeTestnet;
}
public int getListenPort() {
if (this.listenPort != null)
return this.listenPort;
@@ -667,6 +672,9 @@ public class Settings {
}
public int getMinBlockchainPeers() {
if (singleNodeTestnet)
return 0;
return this.minBlockchainPeers;
}
@@ -692,6 +700,10 @@ public class Settings {
public int getMaxRetries() { return this.maxRetries; }
public long getRecoveryModeTimeout() {
return recoveryModeTimeout;
}
public String getMinPeerVersion() { return this.minPeerVersion; }
public boolean getAllowConnectionsWithOlderPeerVersions() { return this.allowConnectionsWithOlderPeerVersions; }
@@ -800,10 +812,6 @@ public class Settings {
return this.testNtpOffset;
}
public boolean isOnlineAccountsMemPoWEnabled() {
return this.onlineAccountsMemPoWEnabled;
}
public long getRepositoryBackupInterval() {
return this.repositoryBackupInterval;
}
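Taken together, these Settings changes mean a lone testnet node no longer trips the minBlockchainPeers validation and reports an effective minimum of zero peers. A short sketch of the resulting behaviour, using only the accessors added here:

Settings settings = Settings.getInstance();
if (settings.isSingleNodeTestnet()) {
    // getMinBlockchainPeers() returns 0, so peer-count checks that normally gate
    // syncing/minting are effectively bypassed on a single-node testnet.
    assert settings.getMinBlockchainPeers() == 0;
}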

View File

@@ -2,6 +2,7 @@ package org.qortal.transaction;
import java.util.Collections;
import java.util.List;
import java.util.Objects;
import org.qortal.account.Account;
import org.qortal.asset.Asset;
@@ -64,15 +65,24 @@ public class AddGroupAdminTransaction extends Transaction {
Account owner = getOwner();
String groupOwner = this.repository.getGroupRepository().getOwner(groupId);
boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
// Check transaction's public key matches group's current owner
if (!owner.getAddress().equals(groupOwner))
// Require approval if transaction relates to a group owned by the null account
if (groupOwnedByNullAccount && !this.needsGroupApproval())
return ValidationResult.GROUP_APPROVAL_REQUIRED;
// Check transaction's public key matches group's current owner (except for groups owned by the null account)
if (!groupOwnedByNullAccount && !owner.getAddress().equals(groupOwner))
return ValidationResult.INVALID_GROUP_OWNER;
// Check address is a group member
if (!this.repository.getGroupRepository().memberExists(groupId, memberAddress))
return ValidationResult.NOT_GROUP_MEMBER;
// Check transaction creator is a group member
if (!this.repository.getGroupRepository().memberExists(groupId, this.getCreator().getAddress()))
return ValidationResult.NOT_GROUP_MEMBER;
// Check group member is not already an admin
if (this.repository.getGroupRepository().adminExists(groupId, memberAddress))
return ValidationResult.ALREADY_GROUP_ADMIN;

View File

@@ -2,6 +2,7 @@ package org.qortal.transaction;
import java.util.Collections;
import java.util.List;
import java.util.Objects;
import org.qortal.account.Account;
import org.qortal.asset.Asset;
@@ -65,11 +66,21 @@ public class RemoveGroupAdminTransaction extends Transaction {
return ValidationResult.GROUP_DOES_NOT_EXIST;
Account owner = getOwner();
String groupOwner = this.repository.getGroupRepository().getOwner(groupId);
boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
// Check transaction's public key matches group's current owner
if (!owner.getAddress().equals(groupData.getOwner()))
// Require approval if transaction relates to a group owned by the null account
if (groupOwnedByNullAccount && !this.needsGroupApproval())
return ValidationResult.GROUP_APPROVAL_REQUIRED;
// Check transaction's public key matches group's current owner (except for groups owned by the null account)
if (!groupOwnedByNullAccount && !owner.getAddress().equals(groupOwner))
return ValidationResult.INVALID_GROUP_OWNER;
// Check transaction creator is a group member
if (!this.repository.getGroupRepository().memberExists(groupId, this.getCreator().getAddress()))
return ValidationResult.NOT_GROUP_MEMBER;
Account admin = getAdmin();
// Check member is an admin

View File

@@ -1,13 +1,7 @@
package org.qortal.transaction;
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.EnumSet;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.*;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Predicate;
@@ -69,8 +63,8 @@ public abstract class Transaction {
AT(21, false),
CREATE_GROUP(22, true),
UPDATE_GROUP(23, true),
ADD_GROUP_ADMIN(24, false),
REMOVE_GROUP_ADMIN(25, false),
ADD_GROUP_ADMIN(24, true),
REMOVE_GROUP_ADMIN(25, true),
GROUP_BAN(26, false),
CANCEL_GROUP_BAN(27, false),
GROUP_KICK(28, false),
@@ -250,6 +244,7 @@ public abstract class Transaction {
INVALID_TIMESTAMP_SIGNATURE(95),
ADDRESS_BLOCKED(96),
NAME_BLOCKED(97),
GROUP_APPROVAL_REQUIRED(98),
INVALID_BUT_OK(999),
NOT_YET_RELEASED(1000);
@@ -760,9 +755,13 @@ public abstract class Transaction {
// Group no longer exists? Possibly due to blockchain orphaning undoing group creation?
return true; // stops tx being included in block but it will eventually expire
String groupOwner = this.repository.getGroupRepository().getOwner(txGroupId);
boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
// If transaction's creator is group admin (of group with ID txGroupId) then auto-approve
// This is disabled for null-owned groups, since these require approval from other admins
PublicKeyAccount creator = this.getCreator();
if (groupRepository.adminExists(txGroupId, creator.getAddress()))
if (!groupOwnedByNullAccount && groupRepository.adminExists(txGroupId, creator.getAddress()))
return false;
return true;

View File

@@ -235,7 +235,7 @@ public class BlockTransformer extends Transformer {
// Online accounts timestamp is only present if there are also signatures
onlineAccountsTimestamp = byteBuffer.getLong();
final int signaturesByteLength = getOnlineAccountSignaturesLength(onlineAccountsSignaturesCount, onlineAccountsCount, timestamp);
final int signaturesByteLength = (onlineAccountsSignaturesCount * Transformer.SIGNATURE_LENGTH) + (onlineAccountsCount * INT_LENGTH);
if (signaturesByteLength > BlockChain.getInstance().getMaxBlockSize())
throw new TransformationException("Byte data too long for online accounts signatures");
@@ -511,16 +511,6 @@ public class BlockTransformer extends Transformer {
return nonces;
}
public static int getOnlineAccountSignaturesLength(int onlineAccountsSignaturesCount, int onlineAccountCount, long blockTimestamp) {
if (blockTimestamp >= BlockChain.getInstance().getOnlineAccountsMemoryPoWTimestamp()) {
// Once mempow is active, we expect the online account signatures to be appended with the nonce values
return (onlineAccountsSignaturesCount * Transformer.SIGNATURE_LENGTH) + (onlineAccountCount * INT_LENGTH);
}
else {
// Before mempow, only the online account signatures were included (which will likely be a single signature)
return onlineAccountsSignaturesCount * Transformer.SIGNATURE_LENGTH;
}
}
public static byte[] extract(byte[] input, int pos, int length) {
byte[] output = new byte[length];
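With the pre-mempow branch removed, the length is always the signatures plus one nonce per online account. A worked example, assuming the usual 64-byte signature and 4-byte int widths for Transformer.SIGNATURE_LENGTH and INT_LENGTH (illustrative values only):

int onlineAccountsSignaturesCount = 1;  // a single aggregated signature
int onlineAccountsCount = 250;          // accounts online for this block
int signaturesByteLength = (onlineAccountsSignaturesCount * 64) + (onlineAccountsCount * 4);
// = 64 + 1000 = 1064 bytes, comfortably under the max-block-size check above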

View File

@@ -24,7 +24,6 @@
"onlineAccountSignaturesMinLifetime": 43200000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"onlineAccountsModulusV2Timestamp": 1659801600000,
"onlineAccountsMemoryPoWTimestamp": 9999999999999,
"rewardsByHeight": [
{ "height": 1, "reward": 5.00 },
{ "height": 259201, "reward": 4.75 },
@@ -80,7 +79,8 @@
"calcChainWeightTimestamp": 1620579600000,
"transactionV5Timestamp": 1642176000000,
"transactionV6Timestamp": 9999999999999,
"disableReferenceTimestamp": 1655222400000
"disableReferenceTimestamp": 1655222400000,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = Automatisches Update
BLOCK_HEIGHT = height
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = Build-Version
CHECK_TIME_ACCURACY = Prüfe Zeitgenauigkeit

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = Auto Update
BLOCK_HEIGHT = height
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = Build version
CHECK_TIME_ACCURACY = Check time accuracy

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = Actualización automática
BLOCK_HEIGHT = altura
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = Versión de compilación
CHECK_TIME_ACCURACY = Comprobar la precisión del tiempo

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = Automaattinen päivitys
BLOCK_HEIGHT = korkeus
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = Versio
CHECK_TIME_ACCURACY = Tarkista ajan tarkkuus

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = Mise à jour automatique
BLOCK_HEIGHT = hauteur
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = Numéro de version
CHECK_TIME_ACCURACY = Vérifier l'heure

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = Automatikus Frissítés
BLOCK_HEIGHT = blokkmagasság
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = Verzió
CHECK_TIME_ACCURACY = Óra pontosságának ellenőrzése

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = Aggiornamento automatico
BLOCK_HEIGHT = altezza
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = Versione
CHECK_TIME_ACCURACY = Controlla la precisione dell'ora

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = 자동 업데이트
BLOCK_HEIGHT = 높이
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = 빌드 버전
CHECK_TIME_ACCURACY = 시간 정확도 점검

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = Automatische Update
BLOCK_HEIGHT = Block hoogte
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = Versie nummer
CHECK_TIME_ACCURACY = Controleer accuraatheid van de tijd

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = Actualizare automata
BLOCK_HEIGHT = dimensiune
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = versiunea compilatiei
CHECK_TIME_ACCURACY = verificare exactitate ora

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = Автоматическое обновление
BLOCK_HEIGHT = Высота блока
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = Версия сборки
CHECK_TIME_ACCURACY = Проверка точного времени

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = Automatisk uppdatering
BLOCK_HEIGHT = höjd
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = Byggversion
CHECK_TIME_ACCURACY = Kontrollera tidens noggrannhet

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = 自动更新
BLOCK_HEIGHT = 区块高度
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = 版本
CHECK_TIME_ACCURACY = 检查时间准确性

View File

@@ -7,6 +7,8 @@ AUTO_UPDATE = 自動更新
BLOCK_HEIGHT = 區塊高度
BLOCKS_REMAINING = blocks remaining
BUILD_VERSION = 版本
CHECK_TIME_ACCURACY = 檢查時間準確性

View File

@@ -102,77 +102,77 @@ public class ArbitraryServiceTests extends Common {
}
@Test
public void testValidQortalMetadata() throws IOException {
// Metadata is to describe an arbitrary resource (title, description, tags, etc)
String dataString = "{\"title\":\"Test Title\", \"description\":\"Test description\", \"tags\":[\"test\"]}";
public void testValidateGifRepository() throws IOException {
// Generate some random data
byte[] data = new byte[1024];
new Random().nextBytes(data);
// Write to temp path
Path path = Files.createTempFile("testValidQortalMetadata", null);
// Write the data to several files in a temp path
Path path = Files.createTempDirectory("testValidateGifRepository");
path.toFile().deleteOnExit();
Files.write(path, dataString.getBytes(), StandardOpenOption.CREATE);
Files.write(Paths.get(path.toString(), "image1.gif"), data, StandardOpenOption.CREATE);
Files.write(Paths.get(path.toString(), "image2.gif"), data, StandardOpenOption.CREATE);
Files.write(Paths.get(path.toString(), "image3.gif"), data, StandardOpenOption.CREATE);
Service service = Service.QORTAL_METADATA;
Service service = Service.GIF_REPOSITORY;
assertTrue(service.isValidationRequired());
// Every file is a GIF, so the repository should pass validation
assertEquals(ValidationResult.OK, service.validate(path));
}
@Test
public void testQortalMetadataMissingKeys() throws IOException {
// Metadata is to describe an arbitrary resource (title, description, tags, etc)
String dataString = "{\"description\":\"Test description\", \"tags\":[\"test\"]}";
public void testValidateMultiLayerGifRepository() throws IOException {
// Generate some random data
byte[] data = new byte[1024];
new Random().nextBytes(data);
// Write to temp path
Path path = Files.createTempFile("testQortalMetadataMissingKeys", null);
// Write the data to several files in a temp path
Path path = Files.createTempDirectory("testValidateMultiLayerGifRepository");
path.toFile().deleteOnExit();
Files.write(path, dataString.getBytes(), StandardOpenOption.CREATE);
Files.write(Paths.get(path.toString(), "image1.gif"), data, StandardOpenOption.CREATE);
Service service = Service.QORTAL_METADATA;
Path subdirectory = Paths.get(path.toString(), "subdirectory");
Files.createDirectories(subdirectory);
Files.write(Paths.get(subdirectory.toString(), "image2.gif"), data, StandardOpenOption.CREATE);
Files.write(Paths.get(subdirectory.toString(), "image3.gif"), data, StandardOpenOption.CREATE);
Service service = Service.GIF_REPOSITORY;
assertTrue(service.isValidationRequired());
assertEquals(ValidationResult.MISSING_KEYS, service.validate(path));
// Subdirectories aren't allowed in a GIF repository, so validation should fail
assertEquals(ValidationResult.DIRECTORIES_NOT_ALLOWED, service.validate(path));
}
@Test
public void testQortalMetadataTooLarge() throws IOException {
// Metadata is to describe an arbitrary resource (title, description, tags, etc)
String dataString = "{\"title\":\"Test Title\", \"description\":\"Test description\", \"tags\":[\"test\"]}";
public void testValidateEmptyGifRepository() throws IOException {
Path path = Files.createTempDirectory("testValidateEmptyGifRepository");
// Generate some large data to go along with it
int largeDataSize = 11*1024; // Larger than allowed 10kiB
byte[] largeData = new byte[largeDataSize];
new Random().nextBytes(largeData);
// Write to temp path
Path path = Files.createTempDirectory("testQortalMetadataTooLarge");
path.toFile().deleteOnExit();
Files.write(Paths.get(path.toString(), "data"), dataString.getBytes(), StandardOpenOption.CREATE);
Files.write(Paths.get(path.toString(), "large_data"), largeData, StandardOpenOption.CREATE);
Service service = Service.QORTAL_METADATA;
Service service = Service.GIF_REPOSITORY;
assertTrue(service.isValidationRequired());
assertEquals(ValidationResult.EXCEEDS_SIZE_LIMIT, service.validate(path));
// An empty repository has no data, so validation should fail
assertEquals(ValidationResult.MISSING_DATA, service.validate(path));
}
@Test
public void testMultipleFileMetadata() throws IOException {
// Metadata is to describe an arbitrary resource (title, description, tags, etc)
String dataString = "{\"title\":\"Test Title\", \"description\":\"Test description\", \"tags\":[\"test\"]}";
public void testValidateInvalidGifRepository() throws IOException {
// Generate some random data
byte[] data = new byte[1024];
new Random().nextBytes(data);
// Generate some large data to go along with it
int otherDataSize = 1024; // Smaller than 10kiB limit
byte[] otherData = new byte[otherDataSize];
new Random().nextBytes(otherData);
// Write to temp path
Path path = Files.createTempDirectory("testMultipleFileMetadata");
// Write the data to several files in a temp path
Path path = Files.createTempDirectory("testValidateInvalidGifRepository");
path.toFile().deleteOnExit();
Files.write(Paths.get(path.toString(), "data"), dataString.getBytes(), StandardOpenOption.CREATE);
Files.write(Paths.get(path.toString(), "other_data"), otherData, StandardOpenOption.CREATE);
Files.write(Paths.get(path.toString(), "image1.gif"), data, StandardOpenOption.CREATE);
Files.write(Paths.get(path.toString(), "image2.gif"), data, StandardOpenOption.CREATE);
Files.write(Paths.get(path.toString(), "image3.jpg"), data, StandardOpenOption.CREATE); // Invalid extension
Service service = Service.QORTAL_METADATA;
Service service = Service.GIF_REPOSITORY;
assertTrue(service.isValidationRequired());
// There are multiple files, so we don't know which one to parse as JSON
assertEquals(ValidationResult.MISSING_KEYS, service.validate(path));
// The .jpg file has an invalid extension for this service, so validation should fail
assertEquals(ValidationResult.INVALID_FILE_EXTENSION, service.validate(path));
}
}

View File

@@ -124,8 +124,6 @@ public class AccountUtils {
long timestamp = System.currentTimeMillis();
byte[] timestampBytes = Longs.toByteArray(timestamp);
final boolean mempowActive = timestamp >= BlockChain.getInstance().getOnlineAccountsMemoryPoWTimestamp();
for (int a = 0; a < numAccounts; ++a) {
byte[] privateKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
SECURE_RANDOM.nextBytes(privateKey);
@@ -135,7 +133,7 @@ public class AccountUtils {
byte[] signature = signForAggregation(privateKey, timestampBytes);
Integer nonce = mempowActive ? new Random().nextInt(500000) : null;
Integer nonce = new Random().nextInt(500000);
onlineAccounts.add(new OnlineAccountData(timestamp, signature, publicKey, nonce));
}

View File

@@ -0,0 +1,388 @@
package org.qortal.test.group;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.data.transaction.*;
import org.qortal.group.Group;
import org.qortal.group.Group.ApprovalThreshold;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.test.common.BlockUtils;
import org.qortal.test.common.Common;
import org.qortal.test.common.GroupUtils;
import org.qortal.test.common.TransactionUtils;
import org.qortal.test.common.transaction.TestTransaction;
import org.qortal.transaction.Transaction;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.utils.Base58;
import static org.junit.Assert.*;
/**
* Dev group admin tests
*
* The dev group (ID 1) is owned by the null account with public key 11111111111111111111111111111111
* Because the null account cannot sign owner-only transactions, different validation logic
* applies to groups that share this null owner.
*
* The main difference is that approval is required for certain transaction types relating to
* null-owned groups. This allows existing admins to approve updates to the group (using group's
* approval threshold) instead of these actions being performed by the owner.
*
* Since these rules apply to all null-owned groups, anyone can transfer ownership of their group
* to the null account to take advantage of this decentralized approval system.
*
* Currently, the affected transaction types are:
* - AddGroupAdminTransaction
* - RemoveGroupAdminTransaction
*
* This same approach could ultimately be applied to other group transactions too.
*/
public class DevGroupAdminTests extends Common {
private static final int DEV_GROUP_ID = 1;
@Before
public void beforeTest() throws DataException {
Common.useDefaultSettings();
}
@After
public void afterTest() throws DataException {
Common.orphanCheck();
}
@Test
public void testGroupKickMember() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount alice = Common.getTestAccount(repository, "alice");
PrivateKeyAccount bob = Common.getTestAccount(repository, "bob");
// Dev group
int groupId = DEV_GROUP_ID;
// Confirm Bob is not a member
assertFalse(isMember(repository, bob.getAddress(), groupId));
// Attempt to kick Bob
ValidationResult result = groupKick(repository, alice, groupId, bob.getAddress());
// Should NOT be OK
assertNotSame(ValidationResult.OK, result);
// Alice to invite Bob, as it's a closed group
groupInvite(repository, alice, groupId, bob.getAddress(), 3600);
// Bob to join
joinGroup(repository, bob, groupId);
// Confirm Bob now a member
assertTrue(isMember(repository, bob.getAddress(), groupId));
// Attempt to kick Bob
result = groupKick(repository, alice, groupId, bob.getAddress());
// Should be OK
assertEquals(ValidationResult.OK, result);
// Confirm Bob no longer a member
assertFalse(isMember(repository, bob.getAddress(), groupId));
// Orphan last block
BlockUtils.orphanLastBlock(repository);
// Confirm Bob now a member
assertTrue(isMember(repository, bob.getAddress(), groupId));
}
}
@Test
public void testGroupKickAdmin() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount alice = Common.getTestAccount(repository, "alice");
PrivateKeyAccount bob = Common.getTestAccount(repository, "bob");
// Dev group
int groupId = DEV_GROUP_ID;
// Confirm Bob is not a member
assertFalse(isMember(repository, bob.getAddress(), groupId));
// Alice to invite Bob, as it's a closed group
groupInvite(repository, alice, groupId, bob.getAddress(), 3600);
// Bob to join
joinGroup(repository, bob, groupId);
// Confirm Bob now a member
assertTrue(isMember(repository, bob.getAddress(), groupId));
// Promote Bob to admin
TransactionData addGroupAdminTransactionData = addGroupAdmin(repository, alice, groupId, bob.getAddress());
// Confirm transaction needs approval, and hasn't been approved
Transaction.ApprovalStatus approvalStatus = GroupUtils.getApprovalStatus(repository, addGroupAdminTransactionData.getSignature());
assertEquals("incorrect transaction approval status", Transaction.ApprovalStatus.PENDING, approvalStatus);
// Have Alice approve Bob's approval-needed transaction
GroupUtils.approveTransaction(repository, "alice", addGroupAdminTransactionData.getSignature(), true);
// Mint a block so that the transaction becomes approved
BlockUtils.mintBlock(repository);
// Confirm transaction is approved
approvalStatus = GroupUtils.getApprovalStatus(repository, addGroupAdminTransactionData.getSignature());
assertEquals("incorrect transaction approval status", Transaction.ApprovalStatus.APPROVED, approvalStatus);
// Confirm Bob is now admin
assertTrue(isAdmin(repository, bob.getAddress(), groupId));
// Attempt to kick Bob
ValidationResult result = groupKick(repository, alice, groupId, bob.getAddress());
// Shouldn't be allowed
assertEquals(ValidationResult.INVALID_GROUP_OWNER, result);
// Confirm Bob is still a member
assertTrue(isMember(repository, bob.getAddress(), groupId));
// Confirm Bob still an admin
assertTrue(isAdmin(repository, bob.getAddress(), groupId));
// Orphan last block
BlockUtils.orphanLastBlock(repository);
// Confirm Bob no longer an admin (ADD_GROUP_ADMIN no longer approved)
assertFalse(isAdmin(repository, bob.getAddress(), groupId));
// Have Alice try to kick herself!
result = groupKick(repository, alice, groupId, alice.getAddress());
// Should NOT be OK
assertNotSame(ValidationResult.OK, result);
// Have Bob try to kick Alice
result = groupKick(repository, bob, groupId, alice.getAddress());
// Should NOT be OK
assertNotSame(ValidationResult.OK, result);
}
}
@Test
public void testGroupBanMember() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount alice = Common.getTestAccount(repository, "alice");
PrivateKeyAccount bob = Common.getTestAccount(repository, "bob");
// Dev group
int groupId = DEV_GROUP_ID;
// Confirm Bob is not a member
assertFalse(isMember(repository, bob.getAddress(), groupId));
// Attempt to cancel non-existent Bob ban
ValidationResult result = cancelGroupBan(repository, alice, groupId, bob.getAddress());
// Should NOT be OK
assertNotSame(ValidationResult.OK, result);
// Attempt to ban Bob
result = groupBan(repository, alice, groupId, bob.getAddress());
// Should be OK
assertEquals(ValidationResult.OK, result);
// Bob attempts to rejoin
result = joinGroup(repository, bob, groupId);
// Should NOT be OK
assertNotSame(ValidationResult.OK, result);
// Orphan last block (Bob ban)
BlockUtils.orphanLastBlock(repository);
// Delete unconfirmed group-ban transaction
TransactionUtils.deleteUnconfirmedTransactions(repository);
// Confirm Bob is not a member
assertFalse(isMember(repository, bob.getAddress(), groupId));
// Alice to invite Bob, as it's a closed group
groupInvite(repository, alice, groupId, bob.getAddress(), 3600);
// Bob to join
result = joinGroup(repository, bob, groupId);
// Should be OK
assertEquals(ValidationResult.OK, result);
// Confirm Bob now a member
assertTrue(isMember(repository, bob.getAddress(), groupId));
// Attempt to ban Bob
result = groupBan(repository, alice, groupId, bob.getAddress());
// Should be OK
assertEquals(ValidationResult.OK, result);
// Confirm Bob no longer a member
assertFalse(isMember(repository, bob.getAddress(), groupId));
// Bob attempts to rejoin
result = joinGroup(repository, bob, groupId);
// Should NOT be OK
assertNotSame(ValidationResult.OK, result);
// Cancel Bob's ban
result = cancelGroupBan(repository, alice, groupId, bob.getAddress());
// Should be OK
assertEquals(ValidationResult.OK, result);
// Bob attempts to rejoin
result = joinGroup(repository, bob, groupId);
// Should be OK
assertEquals(ValidationResult.OK, result);
// Orphan last block (Bob join)
BlockUtils.orphanLastBlock(repository);
// Delete unconfirmed join-group transaction
TransactionUtils.deleteUnconfirmedTransactions(repository);
// Orphan last block (Cancel Bob ban)
BlockUtils.orphanLastBlock(repository);
// Delete unconfirmed cancel-ban transaction
TransactionUtils.deleteUnconfirmedTransactions(repository);
// Bob attempts to rejoin
result = joinGroup(repository, bob, groupId);
// Should NOT be OK
assertNotSame(ValidationResult.OK, result);
// Orphan last block (Bob ban)
BlockUtils.orphanLastBlock(repository);
// Delete unconfirmed group-ban transaction
TransactionUtils.deleteUnconfirmedTransactions(repository);
// Confirm Bob now a member
assertTrue(isMember(repository, bob.getAddress(), groupId));
}
}
@Test
public void testGroupBanAdmin() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount alice = Common.getTestAccount(repository, "alice");
PrivateKeyAccount bob = Common.getTestAccount(repository, "bob");
// Dev group
int groupId = DEV_GROUP_ID;
// Confirm Bob is not a member
assertFalse(isMember(repository, bob.getAddress(), groupId));
// Alice to invite Bob, as it's a closed group
groupInvite(repository, alice, groupId, bob.getAddress(), 3600);
// Bob to join
ValidationResult result = joinGroup(repository, bob, groupId);
// Should be OK
assertEquals(ValidationResult.OK, result);
// Promote Bob to admin
TransactionData addGroupAdminTransactionData = addGroupAdmin(repository, alice, groupId, bob.getAddress());
// Confirm transaction needs approval, and hasn't been approved
Transaction.ApprovalStatus approvalStatus = GroupUtils.getApprovalStatus(repository, addGroupAdminTransactionData.getSignature());
assertEquals("incorrect transaction approval status", Transaction.ApprovalStatus.PENDING, approvalStatus);
// Have Alice approve Bob's approval-needed transaction
GroupUtils.approveTransaction(repository, "alice", addGroupAdminTransactionData.getSignature(), true);
// Mint a block so that the transaction becomes approved
BlockUtils.mintBlock(repository);
// Confirm transaction is approved
approvalStatus = GroupUtils.getApprovalStatus(repository, addGroupAdminTransactionData.getSignature());
assertEquals("incorrect transaction approval status", Transaction.ApprovalStatus.APPROVED, approvalStatus);
// Confirm Bob is now admin
assertTrue(isAdmin(repository, bob.getAddress(), groupId));
// Attempt to ban Bob
result = groupBan(repository, alice, groupId, bob.getAddress());
// .. but we can't, because Bob is an admin and the group has no owner
assertEquals(ValidationResult.INVALID_GROUP_OWNER, result);
// Confirm Bob still a member
assertTrue(isMember(repository, bob.getAddress(), groupId));
// ... and still an admin
assertTrue(isAdmin(repository, bob.getAddress(), groupId));
// Have Alice try to ban herself!
result = groupBan(repository, alice, groupId, alice.getAddress());
// Should NOT be OK
assertNotSame(ValidationResult.OK, result);
// Have Bob try to ban Alice
result = groupBan(repository, bob, groupId, alice.getAddress());
// Should NOT be OK
assertNotSame(ValidationResult.OK, result);
}
}
private ValidationResult joinGroup(Repository repository, PrivateKeyAccount joiner, int groupId) throws DataException {
JoinGroupTransactionData transactionData = new JoinGroupTransactionData(TestTransaction.generateBase(joiner), groupId);
ValidationResult result = TransactionUtils.signAndImport(repository, transactionData, joiner);
if (result == ValidationResult.OK)
BlockUtils.mintBlock(repository);
return result;
}
private void groupInvite(Repository repository, PrivateKeyAccount admin, int groupId, String invitee, int timeToLive) throws DataException {
GroupInviteTransactionData transactionData = new GroupInviteTransactionData(TestTransaction.generateBase(admin), groupId, invitee, timeToLive);
TransactionUtils.signAndMint(repository, transactionData, admin);
}
private ValidationResult groupKick(Repository repository, PrivateKeyAccount admin, int groupId, String member) throws DataException {
GroupKickTransactionData transactionData = new GroupKickTransactionData(TestTransaction.generateBase(admin), groupId, member, "testing");
ValidationResult result = TransactionUtils.signAndImport(repository, transactionData, admin);
if (result == ValidationResult.OK)
BlockUtils.mintBlock(repository);
return result;
}
private ValidationResult groupBan(Repository repository, PrivateKeyAccount admin, int groupId, String member) throws DataException {
GroupBanTransactionData transactionData = new GroupBanTransactionData(TestTransaction.generateBase(admin), groupId, member, "testing", 0);
ValidationResult result = TransactionUtils.signAndImport(repository, transactionData, admin);
if (result == ValidationResult.OK)
BlockUtils.mintBlock(repository);
return result;
}
private ValidationResult cancelGroupBan(Repository repository, PrivateKeyAccount admin, int groupId, String member) throws DataException {
CancelGroupBanTransactionData transactionData = new CancelGroupBanTransactionData(TestTransaction.generateBase(admin), groupId, member);
ValidationResult result = TransactionUtils.signAndImport(repository, transactionData, admin);
if (result == ValidationResult.OK)
BlockUtils.mintBlock(repository);
return result;
}
private TransactionData addGroupAdmin(Repository repository, PrivateKeyAccount owner, int groupId, String member) throws DataException {
AddGroupAdminTransactionData transactionData = new AddGroupAdminTransactionData(TestTransaction.generateBase(owner), groupId, member);
transactionData.setTxGroupId(groupId);
TransactionUtils.signAndMint(repository, transactionData, owner);
return transactionData;
}
private boolean isMember(Repository repository, String address, int groupId) throws DataException {
return repository.getGroupRepository().memberExists(groupId, address);
}
private boolean isAdmin(Repository repository, String address, int groupId) throws DataException {
return repository.getGroupRepository().adminExists(groupId, address);
}
}

View File

@@ -69,7 +69,8 @@
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 9999999999999,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,

View File

@@ -72,7 +72,8 @@
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 0
"disableReferenceTimestamp": 0,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,

View File

@@ -73,7 +73,8 @@
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,

View File

@@ -73,7 +73,8 @@
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,

View File

@@ -73,7 +73,8 @@
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,

View File

@@ -73,7 +73,8 @@
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,

View File

@@ -74,7 +74,8 @@
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999,
"aggregateSignatureTimestamp": 0
"aggregateSignatureTimestamp": 0,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,

View File

@@ -73,7 +73,8 @@
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,

View File

@@ -73,7 +73,8 @@
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,

View File

@@ -73,7 +73,8 @@
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,

View File

@@ -73,7 +73,8 @@
"newConsensusTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,

View File

@@ -73,7 +73,8 @@
"calcChainWeightTimestamp": 0,
"transactionV5Timestamp": 0,
"transactionV6Timestamp": 0,
"disableReferenceTimestamp": 9999999999999
"disableReferenceTimestamp": 9999999999999,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999
},
"genesisInfo": {
"version": 4,
@@ -90,6 +91,8 @@
{ "type": "CREATE_GROUP", "creatorPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "groupName": "dev-group", "description": "developer group", "isOpen": false, "approvalThreshold": "PCT100", "minimumBlockDelay": 0, "maximumBlockDelay": 1440 },
{ "type": "UPDATE_GROUP", "ownerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "groupId": 1, "newOwner": "QdSnUy6sUiEnaN87dWmE92g1uQjrvPgrWG", "newDescription": "developer group", "newIsOpen": false, "newApprovalThreshold": "PCT40", "minimumBlockDelay": 10, "maximumBlockDelay": 1440 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "TEST", "description": "test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "C6wuddsBV3HzRrXUtezE7P5MoRXp5m3mEDokRDGZB6ry", "assetName": "OTHER", "description": "other test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },
{ "type": "ISSUE_ASSET", "issuerPublicKey": "2tiMr5LTpaWCgbRvkPK8TFd7k63DyHJMMFFsz9uBf1ZP", "assetName": "GOLD", "description": "gold test asset", "data": "", "quantity": "1000000", "isDivisible": true, "fee": 0 },

View File

@@ -0,0 +1,97 @@
#!/usr/bin/env bash
port=12391
if [ $# -gt 0 -a "$1" = "-t" ]; then
port=62391
fi
printf "Searching for auto-update transactions to approve...\n";
tx=$( curl --silent --url "http://localhost:${port}/transactions/search?txGroupId=1&txType=ADD_GROUP_ADMIN&txType=REMOVE_GROUP_ADMIN&confirmationStatus=CONFIRMED&limit=1&reverse=true" );
if fgrep --silent '"approvalStatus":"PENDING"' <<< "${tx}"; then
true
else
echo "Can't find any pending transactions"
exit
fi
sig=$( perl -n -e 'print $1 if m/"signature":"(\w+)"/' <<< "${tx}" )
if [ -z "${sig}" ]; then
printf "Can't find transaction signature in JSON:\n%s\n" "${tx}"
exit
fi
printf "Found transaction %s\n" $sig;
printf "\nPaste your dev account private key:\n";
IFS=
read -s privkey
printf "\n"
# Convert to public key
pubkey=$( curl --silent --url "http://localhost:${port}/utils/publickey" --data @- <<< "${privkey}" );
if egrep -v --silent '^\w{44,46}$' <<< "${pubkey}"; then
printf "Invalid response from API - was your private key correct?\n%s\n" "${pubkey}"
exit
fi
printf "Your public key: %s\n" ${pubkey}
# Convert to address
address=$( curl --silent --url "http://localhost:${port}/addresses/convert/${pubkey}" );
printf "Your address: %s\n" ${address}
# Grab last reference
lastref=$( curl --silent --url "http://localhost:${port}/addresses/lastreference/${address}" );
printf "Your last reference: %s\n" ${lastref}
# Build GROUP_APPROVAL transaction
timestamp=$( date +%s )000
tx_json=$( cat <<TX_END
{
"timestamp": ${timestamp},
"reference": "${lastref}",
"fee": 0.001,
"txGroupId": 0,
"adminPublicKey": "${pubkey}",
"pendingSignature": "${sig}",
"approval": true
}
TX_END
)
raw_tx=$( curl --silent --header "Content-Type: application/json" --url "http://localhost:${port}/groups/approval" --data @- <<< "${tx_json}" )
if egrep -v --silent '^\w{100,}' <<< "${raw_tx}"; then
printf "Building GROUP_APPROVAL transaction failed:\n%s\n" "${raw_tx}"
exit
fi
printf "\nRaw approval tx:\n%s\n" ${raw_tx}
# sign
sign_json=$( cat <<SIGN_END
{
"privateKey": "${privkey}",
"transactionBytes": "${raw_tx}"
}
SIGN_END
)
signed_tx=$( curl --silent --header "Content-Type: application/json" --url "http://localhost:${port}/transactions/sign" --data @- <<< "${sign_json}" )
printf "\nSigned tx:\n%s\n" ${signed_tx}
if egrep -v --silent '^\w{100,}' <<< "${signed_tx}"; then
printf "Signing GROUP_APPROVAL transaction failed:\n%s\n" "${signed_tx}"
exit
fi
# ready to publish?
plural="s"
printf "\n"
for ((seconds = 5; seconds > 0; seconds--)); do
if [ "${seconds}" = "1" ]; then
plural=""
fi
printf "\rBroadcasting in %d second%s...(CTRL-C) to abort " $seconds $plural
sleep 1
done
printf "\rBroadcasting signed GROUP_APPROVAL transaction... \n"
result=$( curl --silent --url "http://localhost:${port}/transactions/process" --data @- <<< "${signed_tx}" )
printf "API response:\n%s\n" "${result}"

View File

@@ -58,6 +58,9 @@ git show HEAD:log4j2.properties > ${build_dir}/log4j2.properties
git show HEAD:start.sh > ${build_dir}/start.sh
git show HEAD:stop.sh > ${build_dir}/stop.sh
chmod +x ${build_dir}/start.sh
chmod +x ${build_dir}/stop.sh
printf "{\n}\n" > ${build_dir}/settings.json
gtouch -d ${commit_ts%%+??:??} ${build_dir} ${build_dir}/*

View File

@@ -71,9 +71,14 @@ our %TRANSACTION_TYPES = (
},
add_group_admin => {
url => 'groups/addadmin',
required => [qw(groupId member)],
required => [qw(groupId txGroupId member)],
key_name => 'ownerPublicKey',
},
remove_group_admin => {
url => 'groups/removeadmin',
required => [qw(groupId txGroupId admin)],
key_name => 'ownerPublicKey',
},
group_approval => {
url => 'groups/approval',
required => [qw(pendingSignature approval)],