Compare commits

...

88 Commits

Author SHA1 Message Date
CalDescent
fe474b4507 Bump version to 3.2.3 2022-03-19 20:44:41 +00:00
CalDescent
bbe15b563c Added unit test to simulate recent issue.
This fails with the 3.2.2 code but now passes when using the latest fixes.
2022-03-19 20:41:38 +00:00
CalDescent
59025b8f47 Revert "Add Qortal AT FunctionCodes for getting account level / blocks minted + tests"
This reverts commit eb9b94b9c6.
2022-03-19 19:52:14 +00:00
CalDescent
1b42c5edb1 Fixed NPE in runIntegrityCheck()
This feature is disabled by default, so it can be tidied up later. For now, the unhandled scenario is logged and the check continues.

One name's transactions (MangoSalsa) are too complex for the current integrity-check code to verify, but the name has been verified manually. All other names pass the automated test.
2022-03-19 19:22:16 +00:00
CalDescent
362335913d Fixed infinite loop in name rebuilding.
If an account is renamed and then at some point renamed back to one of its original names, the name-rebuilding code becomes confused. The current solution is to track the linked names that have already been rebuilt, and break out of the loop once a name is encountered a second time.
2022-03-19 18:55:19 +00:00
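A minimal sketch of that loop guard in the project's Java; the helper methods are hypothetical stand-ins, not the actual Qortal name-rebuilding code:

    import java.util.HashSet;
    import java.util.Set;

    // Sketch only: follow the chain of linked names, but stop once a name repeats.
    public class NameRebuildSketch {

        public void rebuildLinkedNames(String initialName) {
            Set<String> rebuiltNames = new HashSet<>();
            String currentName = initialName;
            while (currentName != null) {
                if (!rebuiltNames.add(currentName)) {
                    break; // name encountered a second time - a rename cycle, so stop looping
                }
                rebuildName(currentName);
                currentName = nextLinkedName(currentName);
            }
        }

        private void rebuildName(String name) { /* hypothetical: rebuild this name's entry */ }

        private String nextLinkedName(String name) { /* hypothetical: follow the UPDATE_NAME link */ return null; }
    }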
CalDescent
4340dac595 Fixed recently introduced issue in name rebuilding code causing transactions to be unordered.
This is the likely cause of inconsistent name entries across different nodes, as we can't guarantee that every environment will return the same transaction order from the SQL queries.
2022-03-19 18:44:16 +00:00
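The fix needs the rebuild to see transactions in a stable order on every node. One way to make the order deterministic regardless of what the SQL query returns (a sketch under that assumption, not necessarily the exact change) is an explicit sort, e.g. by timestamp with the signature as a tie-break:

    import java.util.Comparator;
    import java.util.List;

    import org.qortal.data.transaction.TransactionData;
    import org.qortal.utils.Base58;

    // Sketch: impose a deterministic order before rebuilding, independent of SQL result order.
    public class TransactionOrderingSketch {

        public void sortDeterministically(List<TransactionData> transactions) {
            transactions.sort(Comparator.comparingLong(TransactionData::getTimestamp)
                    .thenComparing(t -> Base58.encode(t.getSignature())));
        }
    }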
CalDescent
f3e1fc884c Merge pull request #63 from catbref/master
Add Qortal AT FunctionCodes for getting account level / blocks minted
2022-03-19 11:32:39 +00:00
CalDescent
39c06d8817 Merge pull request #75 from catbref/name-unicode
Unicode / NAME updates.
2022-03-19 11:32:22 +00:00
CalDescent
91cee36c21 Catch and log all exceptions in addStatusToResources()
Some users are seeing 500 errors originating from this code. This should allow more information to be obtained, and the status will now be omitted for resources that encounter problems.
2022-03-19 11:08:42 +00:00
CalDescent
6bef883942 Removed OpenJDK 11 reference in build-release.sh, as it seems that checksums will not match by default due to timestamps and file orderings.
See: https://dzone.com/articles/reproducible-builds-in-java
2022-03-19 11:05:51 +00:00
CalDescent
25ba2406c0 Updated AdvancedInstaller project for v3.2.2 2022-03-16 19:53:22 +00:00
CalDescent
e4dc8f85a7 Bump version to 3.2.2 2022-03-15 19:57:02 +00:00
CalDescent
12a4a260c8 Handle new sync result case. 2022-03-14 22:04:11 +00:00
CalDescent
268f02b5c3 Added automated test to ensure that the core's default bootstrap hosts are functional.
Whilst not strictly a unit test, this should allow issues with the core's bootstrap servers to be caught quickly.
2022-03-14 21:52:54 +00:00
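A minimal sketch of such a check, written as a JUnit 4 test; the host list below is hypothetical and the real test uses the core's configured defaults:

    import static org.junit.Assert.assertTrue;

    import java.net.InetAddress;

    import org.junit.Test;

    // Sketch only: verify that each bootstrap host at least resolves in DNS.
    public class BootstrapHostsSketchTest {

        // Hypothetical host list - the real test reads the core's default bootstrap hosts.
        private static final String[] BOOTSTRAP_HOSTS = { "bootstrap.example.org" };

        @Test
        public void testBootstrapHostsResolve() throws Exception {
            for (String host : BOOTSTRAP_HOSTS) {
                InetAddress[] addresses = InetAddress.getAllByName(host);
                assertTrue("No DNS records for " + host, addresses.length > 0);
            }
        }
    }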
CalDescent
13eff43b87 Fixed synchronizer issues which caused large re-orgs
Peers without a recent block are removed at the start of the sync process. However, due to the time lag involved in fetching block summaries and comparing the list of peers, some of these could subsequently drop back to a non-recent block and still be chosen as the next peer to sync with. The end result was that nodes could unnecessarily orphan as many as 20 blocks by syncing with a peer that doesn't have a recent block (but has a couple of high-weight blocks after the common block).

This commit adds some additional filtering to avoid this situation.

1) Peers without a recent block are removed as candidates in comparePeers(), allowing for alternate peers to be chosen.
2) After comparePeers() completes, the list is filtered a second time to make sure that all are still recent.
3) Finally, the peer's state is checked one last time in syncToPeerChain(), just before any orphaning takes place.

Whilst just one of the above would probably have been sufficient, the consequences of this bug are so severe that it makes sense to be very thorough.

The only exception to the above is when the node is in "recovery mode", in which case peers without recent blocks are allowed to be included. Items 1 and 3 above do not apply in recovery mode. Item 2 does apply, since the entire comparePeers() functionality is already skipped in a recovery situation due to our chain being out of date.
2022-03-14 21:47:37 +00:00
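A minimal sketch of the extra filtering described above; the chain-tip accessors used here (getChainTipData(), getLastBlockTimestamp()) are assumed from the surrounding codebase rather than quoted from the actual Synchronizer change:

    import java.util.List;

    import org.qortal.network.Peer;

    // Sketch only: drop any candidate peer whose reported chain tip is no longer "recent".
    public class RecentPeerFilterSketch {

        public void removeNonRecentPeers(List<Peer> candidatePeers, long minLatestBlockTimestamp) {
            candidatePeers.removeIf(peer ->
                    peer.getChainTipData() == null
                    || peer.getChainTipData().getLastBlockTimestamp() == null
                    || peer.getChainTipData().getLastBlockTimestamp() < minLatestBlockTimestamp);
        }
    }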
catbref
e604a19bce Unicode / NAME updates.
Fix UPDATE_NAME not processing empty 'newName' transactions correctly.
Fix some emoji code-points not being processed correctly.
Updated tests.
Now includes ICU4J v70.1 - WARNING: this could add around 10MB to JAR size!
Bumped homoglyph to v1.2.1.
2022-03-14 08:45:32 +00:00
CalDescent
e63e39fe9a Updated AdvancedInstaller project for v3.2.1 2022-03-13 19:39:58 +00:00
CalDescent
584c951824 Bump version to 3.2.1 2022-03-13 18:53:54 +00:00
CalDescent
f0d9982ee4 Made arbitraryDataFileHashResponses final, and use .sort() rather than .stream() to avoid new instance creation. 2022-03-12 09:43:56 +00:00
CalDescent
c65de74d13 Revert "Synchronize arbitrary data list removals, as it seems that SynchronizedList and SynchronizedMap aren't overriding removeIf() with a thread-safe version."
This reverts commit e5f88fe2f4.
2022-03-12 09:40:13 +00:00
CalDescent
df0a9701ba Improved logging in onNetworkGetArbitraryDataFileListMessage() 2022-03-11 16:51:19 +00:00
CalDescent
4ec7b1ff1e Removed time estimations that are no longer correct or relevant. 2022-03-11 16:50:34 +00:00
CalDescent
7d3a465386 Including the number of hashes (even if zero) is now required in GetArbitraryDataFileListMessage, to allow for additional fields. Enough peers should have updated by now for this to be ok. 2022-03-11 16:50:11 +00:00
CalDescent
30347900d9 Tidied up one last place that was accessing immutableConnectedPeers directly. This makes no difference, but helps with code consistency. 2022-03-11 15:28:54 +00:00
CalDescent
e5f88fe2f4 Synchronize arbitrary data list removals, as it seems that SynchronizedList and SynchronizedMap aren't overriding removeIf() with a thread-safe version. 2022-03-11 15:22:34 +00:00
CalDescent
0d0ccfd0ac Small refactor for code simplicity. 2022-03-11 15:11:07 +00:00
CalDescent
9013d11d24 Report as 100% synced if the latest block is within the last 30 mins
This should hopefully reduce confusion caused by APIs reporting 99% synced even though the node is up to date. The systray should never show this, since it already treats blocks in the last 30 mins as synced.
2022-03-11 14:53:10 +00:00
CalDescent
fc5672a161 Use a more tolerant latest block timestamp in the isUpToDate() calls below to reduce misleading systray statuses.
Any block in the last 30 minutes is considered "up to date" for the purposes of displaying statuses.
2022-03-11 14:49:02 +00:00
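The corresponding change is visible in the Controller diff further down; in essence the check becomes:

    import org.qortal.utils.NTP;

    // Any block in the last 30 minutes counts as "up to date" for the purposes of displaying statuses
    // (mirrors the Controller diff further down; isUpToDate(Long) is assumed to accept a minimum timestamp).
    private boolean isUpToDateForDisplay() {
        final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
        return this.isUpToDate(minLatestBlockTimestamp);
    }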
CalDescent
221c3629e4 Don't refetch the file list if the fileListCache is empty, since an empty list now means that there are likely to be no files available on disk. 2022-03-11 13:08:37 +00:00
CalDescent
76fc56f1c9 Fetch the file list in getFilenameForHeight() if needed. 2022-03-11 13:07:16 +00:00
CalDescent
8e59aa2885 Peer getter methods renamed to include "immutable", for consistency with underlying lists and also to make it clearer to the callers. 2022-03-11 13:00:47 +00:00
CalDescent
0738dbd613 Avoid direct access to this.connectedPeers, as we need to use the immutable copy. 2022-03-11 12:58:11 +00:00
CalDescent
196ecffaf3 Skip calls to this.logger.trace() in ExecuteProduceConsume.run() if trace logging isn't enabled.
This could very slightly reduce load due to skipping the internal filtering inside log4j. Given that this method is causing major problems with CPU at times, I'm trying to make it as optimized as possible.
2022-03-11 11:59:18 +00:00
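A minimal sketch of that guard, using standard log4j2 calls:

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    // Sketch: skip building the trace output entirely when TRACE logging is disabled,
    // so the hot loop doesn't pay for log4j's internal level filtering.
    public class TraceGuardSketch {

        private static final Logger LOGGER = LogManager.getLogger(TraceGuardSketch.class);

        public void reportState(String className, String state) {
            if (LOGGER.isTraceEnabled()) {
                LOGGER.trace("[{}] state: {}", className, state);
            }
        }
    }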
CalDescent
a0fedbd4b0 Implemented suggestions from catbref to avoid potential thread safety issue in peer arrays. 2022-03-11 11:27:13 +00:00
CalDescent
7c47e22000 Set fileListCache to null when invalidating. 2022-03-11 11:01:29 +00:00
CalDescent
6aad6a1618 fileListCache is now an immutable Map, which is thread safe. Thanks to catbref for this idea. 2022-03-11 10:59:07 +00:00
CalDescent
b764172500 Revert "Hopeful fix for ConcurrentModificationException in BlockArchiveReader.getFilenameForHeight()"
This reverts commit a12ae8ad24.
2022-03-11 10:55:22 +00:00
CalDescent
c185d79672 Loop through all available direct peer connections and try each one in turn.
Also added some extra conditionals to avoid repeated attempts with the same port.
2022-03-09 20:55:27 +00:00
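A minimal sketch of that retry loop; the connection helper and address format are hypothetical stand-ins, with the per-round port de-duplication mirroring the conditionals mentioned above:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Sketch only: try each known direct peer in turn, skipping ports already attempted this round.
    public class DirectConnectionSketch {

        public boolean fetchViaDirectConnections(List<String> peerAddresses) {
            Set<String> attemptedPorts = new HashSet<>();
            for (String address : peerAddresses) {
                String port = address.substring(address.lastIndexOf(':') + 1);
                if (!attemptedPorts.add(port)) {
                    continue; // already attempted this port in this round
                }
                if (attemptDirectConnection(address)) {
                    return true; // stop once one direct connection succeeds
                }
            }
            return false;
        }

        private boolean attemptDirectConnection(String address) { /* hypothetical helper */ return false; }
    }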
CalDescent
76b8ba91dd Only add an entry to directConnectionInfo if one with this peer-signature combination doesn't already exist. 2022-03-09 20:50:03 +00:00
CalDescent
0418c831e6 Direct connections with peers now prefer those with the highest number of chunks for a resource. Once a connection has been attempted with a peer, remove it from the list so that it isn't attempted again in the same round. 2022-03-09 20:15:26 +00:00
CalDescent
4078f94caa Modified GetArbitraryDataFileListMessage to allow requesting peer's address to be optionally included.
This can ultimately be used to notify the serving peer to expect a direct connection from the requesting peer (to allow it to temporarily bypass maxConnections for long enough for the files to be retrieved). Or it could even possibly be used to trigger a reverse connection (from the serving peer to the requesting peer).
2022-03-09 19:58:02 +00:00
CalDescent
a12ae8ad24 Hopeful fix for ConcurrentModificationException in BlockArchiveReader.getFilenameForHeight() 2022-03-09 19:46:50 +00:00
CalDescent
498ca29aab Wait until a successful connection with a peer before tracking the direct request. 2022-03-08 23:07:08 +00:00
CalDescent
ba70e457b6 Default chunk size reduced from 1MB to 0.5MB 2022-03-08 22:44:43 +00:00
CalDescent
d62808fe1d Don't attempt to create the data directory every time an ArbitraryDataFile instance is instantiated. This was using excessive amounts of CPU and disk I/O. 2022-03-08 22:42:07 +00:00
CalDescent
6c14b79dfb Removed bootstrap host that is no longer functional. 2022-03-08 22:30:01 +00:00
CalDescent
631a253bcc Added support for dark theme in loading screen. 2022-03-08 22:29:37 +00:00
CalDescent
4cb63100d3 Drop the ArbitraryPeers table as it's no longer needed 2022-03-06 13:01:09 +00:00
CalDescent
42fcee0cfd Removed all code that interfaced with the ArbitraryPeers table 2022-03-06 13:00:11 +00:00
CalDescent
829a2e937b Removed all arbitrary signature broadcasts 2022-03-06 12:58:01 +00:00
CalDescent
5d7e5e8e59 Dropped support for ARBITRARY_SIGNATURES message handling, as this feature has been superseded by the peerAddress in file list requests. 2022-03-06 12:46:06 +00:00
CalDescent
6f0a0ef324 Small refactor 2022-03-06 12:42:19 +00:00
CalDescent
f7fe91abeb sendOurOnlineAccountsInfo() moved to its own thread, in preparation for mempow 2022-03-06 12:41:54 +00:00
CalDescent
7252e8d160 Deleted presence tests, as they are no longer relevant, and aren't easily adaptable to the new approach. 2022-03-06 12:03:18 +00:00
CalDescent
2630c35f8c Chunk validation now uses MAX_CHUNK_SIZE rather than CHUNK_SIZE, to allow for a smaller CHUNK_SIZE value to be optionally used, without failing the validation of existing resources. 2022-03-06 11:43:28 +00:00
CalDescent
49f466c073 Added missing break; 2022-03-06 11:21:55 +00:00
CalDescent
c198f785e6 Added significant CPU optimizations to ArbitraryDataManager
- Slow down loops that query the db
- Check for new metadata every 5 minutes instead of constantly
- Check for new data every 1 minute instead of constantly

This could be further improved in the future by having block.process() notify the ArbitraryDataManager that there is new data to process. This would avoid the need for the frequent checks/loops, and only a single complete sweep would be needed on node startup (as long as failures are then retried). But I will avoid this additional complexity for now.
2022-03-06 11:21:39 +00:00
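A minimal sketch of that throttling pattern; the check methods are hypothetical stand-ins, and the intervals mirror the commit message:

    // Sketch only: perform the expensive checks on a fixed interval rather than on every loop pass.
    public class ArbitraryDataCheckThrottleSketch {

        private static final long METADATA_CHECK_INTERVAL = 5 * 60 * 1000L; // 5 minutes
        private static final long DATA_CHECK_INTERVAL = 60 * 1000L; // 1 minute

        private long nextMetadataCheck = 0L;
        private long nextDataCheck = 0L;

        public void maybeCheck(long now) {
            if (now >= nextMetadataCheck) {
                nextMetadataCheck = now + METADATA_CHECK_INTERVAL;
                checkForNewMetadata();
            }
            if (now >= nextDataCheck) {
                nextDataCheck = now + DATA_CHECK_INTERVAL;
                checkForNewData();
            }
        }

        private void checkForNewMetadata() { /* hypothetical: look for metadata to fetch */ }

        private void checkForNewData() { /* hypothetical: look for data to fetch */ }
    }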
CalDescent
5be093dafc Fix for "Synchronizing null%" systray bug introduced in 3.2.0 2022-03-06 11:00:53 +00:00
CalDescent
2c33d5256c Added code accidentally missed out of commit 1b036b7 2022-03-05 20:44:01 +00:00
CalDescent
4448e2b5df Handle case when metadata isn't returned. 2022-03-05 17:39:13 +00:00
CalDescent
146d234dec Additional defensiveness in ArbitraryDataFile.fromHash() to avoid similar future bugs. 2022-03-05 17:25:48 +00:00
CalDescent
18d5c924e6 Fixed bug caused by fetchAllMetadata() 2022-03-05 17:25:14 +00:00
CalDescent
b520838195 Increased default maxNetworkThreadPoolSize from 20 to 32
This will hopefully offset some of the additional network demands from arbitrary data requests.
2022-03-05 17:24:55 +00:00
CalDescent
1b036b763c Major CPU optimization to block minter
Load sorted list of reward share public keys into memory, so that the indexes can be obtained. This is around 100x faster than querying each index separately (and the savings will increase as more keys are added).

For 4150 reward share keys, it was taking around 5000ms to query individually, vs 50ms using this approach.

The main trade off is that these 4150 keys require around 130kB of additional memory when minting (and this will increase proportionally with more minters). However, this one query was often accounting for 50% of the entire core's CPU usage, so the additional memory usage seems insignificant by comparison.

To gain confidence, I ran both old and new approaches side by side, and confirmed that the indexes matched exactly.
2022-03-05 16:10:43 +00:00
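The lookup helper is visible in the Block.java diff further down (getRewardShareIndex()); in outline, one repository query loads the sorted key list and each index is then found in memory:

    import java.util.Arrays;
    import java.util.List;

    // Mirrors the getRewardShareIndex() helper in the diff below: a linear scan over the
    // already-loaded, sorted list of reward-share public keys.
    public class RewardShareIndexSketch {

        public static Integer getRewardShareIndex(byte[] rewardSharePublicKey, List<byte[]> rewardSharePublicKeys) {
            int index = 0;
            for (byte[] publicKey : rewardSharePublicKeys) {
                if (Arrays.equals(rewardSharePublicKey, publicKey)) {
                    return index;
                }
                index++;
            }
            return null; // not found, e.g. the reward-share has been cancelled
        }
    }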
CalDescent
8545a8bf0d Automatically fetch metadata for all resources that have it. 2022-03-05 13:00:49 +00:00
CalDescent
f0136a5018 Include the external port when responding to ArbitraryDataFileListRequests 2022-03-05 13:00:17 +00:00
CalDescent
6697b3376b Direct peer connections now use the on-demand data retrieved from file list requests, rather than the stale and incomplete ArbitraryPeerData. 2022-03-05 12:59:13 +00:00
CalDescent
ea785f79b8 Removed unnecessary synchronization 2022-03-04 19:02:30 +00:00
CalDescent
0352a09de7 New online accounts are now verified on the OnlineAccountsManager thread rather than on network threads. This is an attempt to reduce the amount of blocked network threads due to signature verification, and is necessary for the upcoming mempow addition. 2022-03-04 17:58:06 +00:00
CalDescent
5b4f15ab2e Transaction importing code moved to TransactionImporter controller class
As with online accounts, no logic changes other than moving transaction queue processing from the controller thread to its own dedicated thread.
2022-03-04 16:47:21 +00:00
CalDescent
fd37c2b76b Moved all online accounts code to a new OnlineAccountsManager controller class
There are no logic changes here other than moving performOnlineAccountsTasks() onto its own thread, so that it's not subject to anything that might be slowing down the main controller thread.
2022-03-04 16:24:04 +00:00
CalDescent
924aa05681 Optimized peer lists
- Removed synchronization from connectedPeers, and replaced it with an unmodifiableList.
- Added additional immutable caches: handshakedPeers and outboundHandshakedPeers

This should greatly reduce the amount of time spent waiting around for access to the connectedPeers array, since it is now immediately accessible without needing to obtain a lock. It also removes calls to stream() which were consuming large amounts of CPU to constantly filter the connected peers down to a list of handshaked peers.

Thanks to @catbref for these great suggestions.
2022-03-04 15:14:12 +00:00
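A minimal sketch of that immutable-snapshot pattern (not the exact Network code): mutations rebuild an unmodifiable copy under a lock, while readers get the current snapshot without locking:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    import org.qortal.network.Peer;

    // Sketch only: copy-on-write style snapshot of the connected peers.
    public class PeerListSketch {

        private final List<Peer> connectedPeers = new ArrayList<>(); // guarded by synchronized methods
        private volatile List<Peer> immutableConnectedPeers = Collections.emptyList();

        public synchronized void addPeer(Peer peer) {
            connectedPeers.add(peer);
            // Rebuild the snapshot once per mutation; readers never need the lock.
            immutableConnectedPeers = Collections.unmodifiableList(new ArrayList<>(connectedPeers));
        }

        public List<Peer> getImmutableConnectedPeers() {
            return immutableConnectedPeers; // safe to iterate or stream without synchronization
        }
    }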
CalDescent
84b42210f1 Use ArbitraryDataFileRequestThreads only - instead of reusing file list response threads. 2022-03-04 13:34:16 +00:00
CalDescent
941080c395 Rework of arbitraryDataFileHashResponses to use a list (limited to 1000 items) rather than a map. Sort the list so that routes with the lowest number of peer hops come first, to prioritize those which are easiest and quickest to reach. 2022-03-04 13:33:17 +00:00
CalDescent
35d9a10cf4 Avoid logging if there are no remaining transaction signatures to validate. There was too much log spam, none of which was particularly useful. 2022-03-04 12:03:58 +00:00
CalDescent
7c181379b4 Added more granularity to logging, to differentiate between signature validation and general processing/importing, as well as showing counts of the transactions being processed in each round. 2022-03-04 11:12:23 +00:00
CalDescent
f9576d8afb Further optimizations to Controller.processIncomingTransactionsQueue()
- Signature validation is now able to run concurrently with synchronization, to reduce the chances of the queue building up, and to speed up the propagation of new transactions. There's no need to break out of the loop - or avoid looping in the first place - since signatures can be validated without holding the blockchain lock.
- A blockchain lock isn't even attempted if a sync request is pending.
2022-03-04 11:05:58 +00:00
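A simplified sketch of that two-phase structure; the queue helpers are hypothetical stand-ins, while the sync-request check mirrors the Synchronizer call used elsewhere in this changeset:

    import java.util.ArrayList;
    import java.util.List;

    import org.qortal.controller.Synchronizer;
    import org.qortal.data.transaction.TransactionData;

    // Sketch only: validate signatures without the blockchain lock, then import under the lock.
    public class TwoPhaseImportSketch {

        public void processIncomingTransactionsQueue() {
            // Phase 1: signature checks need no blockchain lock, so they can run alongside synchronization.
            List<TransactionData> sigValidTransactions = new ArrayList<>();
            for (TransactionData transactionData : snapshotOfQueue()) {
                if (isSignatureValid(transactionData)) {
                    sigValidTransactions.add(transactionData);
                } else {
                    removeFromQueue(transactionData);
                }
            }

            // Phase 2: don't even attempt the blockchain lock if a sync request is pending
            // or there is nothing valid to import.
            if (sigValidTransactions.isEmpty() || Synchronizer.getInstance().isSyncRequestPending()) {
                return;
            }
            importUnderBlockchainLock(sigValidTransactions);
        }

        private List<TransactionData> snapshotOfQueue() { /* hypothetical */ return new ArrayList<>(); }

        private boolean isSignatureValid(TransactionData transactionData) { /* hypothetical */ return true; }

        private void removeFromQueue(TransactionData transactionData) { /* hypothetical */ }

        private void importUnderBlockchainLock(List<TransactionData> transactions) { /* hypothetical */ }
    }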
CalDescent
6a8a113fa1 Merge pull request #74 from catbref/presence-txns-removal
PRESENCE transactions changed to always fail signature validation
2022-03-04 10:33:11 +00:00
CalDescent
ef59c34165 Added missing "break"; its absence was causing additional unnecessary debug logging. Originally introduced due to a merge conflict with the metadata branch. 2022-03-04 10:28:44 +00:00
CalDescent
a19e1f06c0 Merge pull request #73 from catbref/incoming-txns-rework
Reworking of Controller.processIncomingTransactionsQueue()
2022-03-04 09:45:29 +00:00
catbref
a9371f0a90 In Controller.processIncomingTransactionsQueue(), don't bother with 2nd-phase of locking blockchain and importing if there are no valid signature transactions to actually import 2022-03-03 20:32:27 +00:00
catbref
a7a94e49e8 PRESENCE transactions changed to always fail signature validation 2022-03-03 20:25:58 +00:00
catbref
affd100298 Reworking of Controller.processIncomingTransactionsQueue()
Main changes are:
* Check transaction signature validity in initial round, without blockchain lock
* Convert List of incoming transactions to a Map so we can record whether a transaction's signature has already been validated, to save rechecking effort
* Add invalid signature transactions to invalidUnconfirmedTransactions map with INVALID_TRANSACTION_RECHECK_INTERVAL expiry (~60min)
* Other minor changes related to List->Map change and Java object synchronization
2022-03-03 20:21:04 +00:00
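A minimal sketch of the bookkeeping described in the bullets above; the field and method names are illustrative stand-ins:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    import org.qortal.data.transaction.TransactionData;

    // Sketch only: Map-based incoming queue plus an expiry map for invalid transactions.
    public class IncomingTransactionBookkeepingSketch {

        // Key: incoming transaction; value: whether its signature has already been validated,
        // so it is never re-checked on later passes.
        private final Map<TransactionData, Boolean> incomingTransactions = new ConcurrentHashMap<>();

        // Invalid transactions are remembered until an expiry time so they aren't immediately re-requested.
        private final Map<String, Long> invalidUnconfirmedTransactions = new ConcurrentHashMap<>();

        public void markSignatureValidated(TransactionData transactionData) {
            incomingTransactions.computeIfPresent(transactionData, (t, validated) -> Boolean.TRUE);
        }

        public void markInvalid(String signature58, long now, long recheckInterval) {
            invalidUnconfirmedTransactions.put(signature58, now + recheckInterval);
        }

        public void expireInvalidEntries(long now) {
            invalidUnconfirmedTransactions.entrySet().removeIf(entry -> entry.getValue() < now);
        }
    }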
CalDescent
fd6ec301a4 Updated AdvancedInstaller project for v3.2.0 2022-03-03 20:02:30 +00:00
CalDescent
5666e6084b Bump version to 3.2.0 2022-03-02 20:04:49 +00:00
CalDescent
69309c437e Tightened up the content security policy for non HTML files. 2022-03-01 20:36:34 +00:00
CalDescent
e392e4d344 Allow eval(), setTimeout(), etc, to enable various QDN sites to function correctly. The existing sandboxing should be locking this down enough already. Limited to .html and .htm files only. 2022-03-01 20:35:56 +00:00
catbref
eb9b94b9c6 Add Qortal AT FunctionCodes for getting account level / blocks minted + tests 2021-12-04 16:36:05 +00:00
51 changed files with 1939 additions and 1594 deletions

View File

@@ -17,10 +17,10 @@
<ROW Property="Manufacturer" Value="Qortal"/>
<ROW Property="MsiLogging" MultiBuildValue="DefaultBuild:vp"/>
<ROW Property="NTP_GOOD" Value="false"/>
<ROW Property="ProductCode" Value="1033:{5FC8DCC3-BF9C-4D72-8C6D-940340ACD1B8} 1049:{1DEF14AB-2397-4517-B3C8-13221B921753} 2052:{B9E3C1DF-C92D-440A-9A21-869582F8585F} 2057:{91D69E7B-CA7D-4449-8E8A-F22DCEA546FC} " Type="16"/>
<ROW Property="ProductCode" Value="1033:{FA630CAE-BC27-4ED9-8331-D6DB024F9EAD} 1049:{248A0A43-83A9-44AB-8E4A-74D32A1343CF} 2052:{89EF02B5-73A2-4B73-A18F-BF94F986AD32} 2057:{08C62724-5123-4F13-ACF4-F9358D7BFE98} " Type="16"/>
<ROW Property="ProductLanguage" Value="2057"/>
<ROW Property="ProductName" Value="Qortal"/>
<ROW Property="ProductVersion" Value="3.1.1" Type="32"/>
<ROW Property="ProductVersion" Value="3.2.2" Type="32"/>
<ROW Property="RECONFIG_NTP" Value="true"/>
<ROW Property="REMOVE_BLOCKCHAIN" Value="YES" Type="4"/>
<ROW Property="REPAIR_BLOCKCHAIN" Value="YES" Type="4"/>
@@ -212,7 +212,7 @@
<ROW Component="ADDITIONAL_LICENSE_INFO_71" ComponentId="{12A3ADBE-BB7A-496C-8869-410681E6232F}" Directory_="jdk.zipfs_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_71" Type="0"/>
<ROW Component="ADDITIONAL_LICENSE_INFO_8" ComponentId="{D53AD95E-CF96-4999-80FC-5812277A7456}" Directory_="java.naming_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_8" Type="0"/>
<ROW Component="ADDITIONAL_LICENSE_INFO_9" ComponentId="{6B7EA9B0-5D17-47A8-B78C-FACE86D15E01}" Directory_="java.net.http_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_9" Type="0"/>
<ROW Component="AI_CustomARPName" ComponentId="{42F5EC19-E46F-4299-B9F7-6E1112F6E4FB}" Directory_="APPDIR" Attributes="260" KeyPath="DisplayName" Options="1"/>
<ROW Component="AI_CustomARPName" ComponentId="{E8A0872C-FDE4-49FD-9A83-8584F8D487D1}" Directory_="APPDIR" Attributes="260" KeyPath="DisplayName" Options="1"/>
<ROW Component="AI_ExePath" ComponentId="{3644948D-AE0B-41BB-9FAF-A79E70490A08}" Directory_="APPDIR" Attributes="260" KeyPath="AI_ExePath"/>
<ROW Component="APPDIR" ComponentId="{680DFDDE-3FB4-47A5-8FF5-934F576C6F91}" Directory_="APPDIR" Attributes="0"/>
<ROW Component="AccessBridgeCallbacks.h" ComponentId="{288055D1-1062-47A3-AA44-5601B4E38AED}" Directory_="bridge_Dir" Attributes="0" KeyPath="AccessBridgeCallbacks.h" Type="0"/>

17
pom.xml
View File

@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.qortal</groupId>
<artifactId>qortal</artifactId>
<version>3.1.1</version>
<version>3.2.3</version>
<packaging>jar</packaging>
<properties>
<skipTests>true</skipTests>
@@ -21,6 +21,8 @@
<dagger.version>1.2.2</dagger.version>
<guava.version>28.1-jre</guava.version>
<hsqldb.version>2.5.1</hsqldb.version>
<homoglyph.version>1.2.1</homoglyph.version>
<icu4j.version>70.1</icu4j.version>
<upnp.version>1.1</upnp.version>
<jersey.version>2.29.1</jersey.version>
<jetty.version>9.4.29.v20200521</jetty.version>
@@ -568,7 +570,18 @@
<dependency>
<groupId>net.codebox</groupId>
<artifactId>homoglyph</artifactId>
<version>1.2.0</version>
<version>${homoglyph.version}</version>
</dependency>
<!-- Unicode support -->
<dependency>
<groupId>com.ibm.icu</groupId>
<artifactId>icu4j</artifactId>
<version>${icu4j.version}</version>
</dependency>
<dependency>
<groupId>com.ibm.icu</groupId>
<artifactId>icu4j-charset</artifactId>
<version>${icu4j.version}</version>
</dependency>
<!-- Jetty -->
<dependency>

View File

@@ -24,9 +24,9 @@ public class NodeStatus {
this.isMintingPossible = Controller.getInstance().isMintingPossible();
this.syncPercent = Synchronizer.getInstance().getSyncPercent();
this.isSynchronizing = this.syncPercent != null;
this.isSynchronizing = Synchronizer.getInstance().isSynchronizing();
this.numberOfConnections = Network.getInstance().getHandshakedPeers().size();
this.numberOfConnections = Network.getInstance().getImmutableHandshakedPeers().size();
this.height = Controller.getInstance().getChainHeight();
}

View File

@@ -30,7 +30,7 @@ import org.qortal.api.Security;
import org.qortal.api.model.ApiOnlineAccount;
import org.qortal.api.model.RewardShareKeyRequest;
import org.qortal.asset.Asset;
import org.qortal.controller.Controller;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.AccountData;
import org.qortal.data.account.RewardShareData;
@@ -156,7 +156,7 @@ public class AddressesResource {
)
@ApiErrors({ApiError.PUBLIC_KEY_NOT_FOUND, ApiError.REPOSITORY_ISSUE})
public List<ApiOnlineAccount> getOnlineAccounts() {
List<OnlineAccountData> onlineAccounts = Controller.getInstance().getOnlineAccounts();
List<OnlineAccountData> onlineAccounts = OnlineAccountsManager.getInstance().getOnlineAccounts();
// Map OnlineAccountData entries to OnlineAccount via reward-share data
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -191,7 +191,7 @@ public class AddressesResource {
)
@ApiErrors({ApiError.PUBLIC_KEY_NOT_FOUND, ApiError.REPOSITORY_ISSUE})
public List<OnlineAccountLevel> getOnlineAccountsByLevel() {
List<OnlineAccountData> onlineAccounts = Controller.getInstance().getOnlineAccounts();
List<OnlineAccountData> onlineAccounts = OnlineAccountsManager.getInstance().getOnlineAccounts();
try (final Repository repository = RepositoryManager.getRepository()) {
List<OnlineAccountLevel> onlineAccountLevels = new ArrayList<>();

View File

@@ -35,7 +35,6 @@ import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.appender.RollingFileAppender;
import org.checkerframework.checker.units.qual.A;
import org.qortal.account.Account;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.api.*;
@@ -514,7 +513,7 @@ public class AdminResource {
PeerAddress peerAddress = PeerAddress.fromString(targetPeerAddress);
InetSocketAddress resolvedAddress = peerAddress.toSocketAddress();
List<Peer> peers = Network.getInstance().getHandshakedPeers();
List<Peer> peers = Network.getInstance().getImmutableHandshakedPeers();
Peer targetPeer = peers.stream().filter(peer -> peer.getResolvedAddress().equals(resolvedAddress)).findFirst().orElse(null);
if (targetPeer == null)

View File

@@ -1267,13 +1267,19 @@ public class ArbitraryResource {
// Determine and add the status of each resource
List<ArbitraryResourceInfo> updatedResources = new ArrayList<>();
for (ArbitraryResourceInfo resourceInfo : resources) {
ArbitraryDataResource resource = new ArbitraryDataResource(resourceInfo.name, ResourceIdType.NAME,
resourceInfo.service, resourceInfo.identifier);
ArbitraryResourceStatus status = resource.getStatus(true);
if (status != null) {
resourceInfo.status = status;
try {
ArbitraryDataResource resource = new ArbitraryDataResource(resourceInfo.name, ResourceIdType.NAME,
resourceInfo.service, resourceInfo.identifier);
ArbitraryResourceStatus status = resource.getStatus(true);
if (status != null) {
resourceInfo.status = status;
}
updatedResources.add(resourceInfo);
} catch (Exception e) {
// Catch and log all exceptions, since some systems are experiencing 500 errors when including statuses
LOGGER.info("Caught exception when adding status to resource %s: %s", resourceInfo, e.toString());
}
updatedResources.add(resourceInfo);
}
return updatedResources;
}

View File

@@ -61,7 +61,7 @@ public class PeersResource {
}
)
public List<ConnectedPeer> getPeers() {
return Network.getInstance().getConnectedPeers().stream().map(ConnectedPeer::new).collect(Collectors.toList());
return Network.getInstance().getImmutableConnectedPeers().stream().map(ConnectedPeer::new).collect(Collectors.toList());
}
@GET
@@ -304,7 +304,7 @@ public class PeersResource {
PeerAddress peerAddress = PeerAddress.fromString(targetPeerAddress);
InetSocketAddress resolvedAddress = peerAddress.toSocketAddress();
List<Peer> peers = Network.getInstance().getHandshakedPeers();
List<Peer> peers = Network.getInstance().getImmutableHandshakedPeers();
Peer targetPeer = peers.stream().filter(peer -> peer.getResolvedAddress().equals(resolvedAddress)).findFirst().orElse(null);
if (targetPeer == null)
@@ -352,7 +352,7 @@ public class PeersResource {
public PeersSummary peersSummary() {
PeersSummary peersSummary = new PeersSummary();
List<Peer> connectedPeers = Network.getInstance().getConnectedPeers().stream().collect(Collectors.toList());
List<Peer> connectedPeers = Network.getInstance().getImmutableConnectedPeers().stream().collect(Collectors.toList());
for (Peer peer : connectedPeers) {
if (!peer.isOutbound()) {
peersSummary.inboundConnections++;

View File

@@ -138,34 +138,38 @@ public class RenderResource {
@GET
@Path("/signature/{signature}")
@SecurityRequirement(name = "apiKey")
public HttpServletResponse getIndexBySignature(@PathParam("signature") String signature) {
public HttpServletResponse getIndexBySignature(@PathParam("signature") String signature,
@QueryParam("theme") String theme) {
Security.requirePriorAuthorization(request, signature, Service.WEBSITE, null);
return this.get(signature, ResourceIdType.SIGNATURE, null, "/", null, "/render/signature", true, true);
return this.get(signature, ResourceIdType.SIGNATURE, null, "/", null, "/render/signature", true, true, theme);
}
@GET
@Path("/signature/{signature}/{path:.*}")
@SecurityRequirement(name = "apiKey")
public HttpServletResponse getPathBySignature(@PathParam("signature") String signature, @PathParam("path") String inPath) {
public HttpServletResponse getPathBySignature(@PathParam("signature") String signature, @PathParam("path") String inPath,
@QueryParam("theme") String theme) {
Security.requirePriorAuthorization(request, signature, Service.WEBSITE, null);
return this.get(signature, ResourceIdType.SIGNATURE, null, inPath,null, "/render/signature", true, true);
return this.get(signature, ResourceIdType.SIGNATURE, null, inPath,null, "/render/signature", true, true, theme);
}
@GET
@Path("/hash/{hash}")
@SecurityRequirement(name = "apiKey")
public HttpServletResponse getIndexByHash(@PathParam("hash") String hash58, @QueryParam("secret") String secret58) {
public HttpServletResponse getIndexByHash(@PathParam("hash") String hash58, @QueryParam("secret") String secret58,
@QueryParam("theme") String theme) {
Security.requirePriorAuthorization(request, hash58, Service.WEBSITE, null);
return this.get(hash58, ResourceIdType.FILE_HASH, Service.WEBSITE, "/", secret58, "/render/hash", true, false);
return this.get(hash58, ResourceIdType.FILE_HASH, Service.WEBSITE, "/", secret58, "/render/hash", true, false, theme);
}
@GET
@Path("/hash/{hash}/{path:.*}")
@SecurityRequirement(name = "apiKey")
public HttpServletResponse getPathByHash(@PathParam("hash") String hash58, @PathParam("path") String inPath,
@QueryParam("secret") String secret58) {
@QueryParam("secret") String secret58,
@QueryParam("theme") String theme) {
Security.requirePriorAuthorization(request, hash58, Service.WEBSITE, null);
return this.get(hash58, ResourceIdType.FILE_HASH, Service.WEBSITE, inPath, secret58, "/render/hash", true, false);
return this.get(hash58, ResourceIdType.FILE_HASH, Service.WEBSITE, inPath, secret58, "/render/hash", true, false, theme);
}
@GET
@@ -173,29 +177,35 @@ public class RenderResource {
@SecurityRequirement(name = "apiKey")
public HttpServletResponse getPathByName(@PathParam("service") Service service,
@PathParam("name") String name,
@PathParam("path") String inPath) {
@PathParam("path") String inPath,
@QueryParam("theme") String theme) {
Security.requirePriorAuthorization(request, name, service, null);
String prefix = String.format("/render/%s", service);
return this.get(name, ResourceIdType.NAME, service, inPath, null, prefix, true, true);
return this.get(name, ResourceIdType.NAME, service, inPath, null, prefix, true, true, theme);
}
@GET
@Path("{service}/{name}")
@SecurityRequirement(name = "apiKey")
public HttpServletResponse getIndexByName(@PathParam("service") Service service,
@PathParam("name") String name) {
@PathParam("name") String name,
@QueryParam("theme") String theme) {
Security.requirePriorAuthorization(request, name, service, null);
String prefix = String.format("/render/%s", service);
return this.get(name, ResourceIdType.NAME, service, "/", null, prefix, true, true);
return this.get(name, ResourceIdType.NAME, service, "/", null, prefix, true, true, theme);
}
private HttpServletResponse get(String resourceId, ResourceIdType resourceIdType, Service service, String inPath,
String secret58, String prefix, boolean usePrefix, boolean async) {
String secret58, String prefix, boolean usePrefix, boolean async, String theme) {
ArbitraryDataRenderer renderer = new ArbitraryDataRenderer(resourceId, resourceIdType, service, inPath,
secret58, prefix, usePrefix, async, request, response, context);
if (theme != null) {
renderer.setTheme(theme);
}
return renderer.render();
}

View File

@@ -53,7 +53,8 @@ public class ArbitraryDataFile {
private static final Logger LOGGER = LogManager.getLogger(ArbitraryDataFile.class);
public static final long MAX_FILE_SIZE = 500 * 1024 * 1024; // 500MiB
public static final int CHUNK_SIZE = 1 * 1024 * 1024; // 1MiB
protected static final int MAX_CHUNK_SIZE = 1 * 1024 * 1024; // 1MiB
public static final int CHUNK_SIZE = 512 * 1024; // 0.5MiB
public static int SHORT_DIGEST_LENGTH = 8;
protected Path filePath;
@@ -72,7 +73,6 @@ public class ArbitraryDataFile {
}
public ArbitraryDataFile(String hash58, byte[] signature) throws DataException {
this.createDataDirectory();
this.filePath = ArbitraryDataFile.getOutputFilePath(hash58, signature, false);
this.chunks = new ArrayList<>();
this.hash58 = hash58;
@@ -110,6 +110,9 @@ public class ArbitraryDataFile {
}
public static ArbitraryDataFile fromHash(byte[] hash, byte[] signature) throws DataException {
if (hash == null) {
return null;
}
return ArbitraryDataFile.fromHash58(Base58.encode(hash), signature);
}
@@ -146,19 +149,6 @@ public class ArbitraryDataFile {
return ArbitraryDataFile.fromPath(Paths.get(file.getPath()), signature);
}
private boolean createDataDirectory() {
// Create the data directory if it doesn't exist
String dataPath = Settings.getInstance().getDataPath();
Path dataDirectory = Paths.get(dataPath);
try {
Files.createDirectories(dataDirectory);
} catch (IOException e) {
LOGGER.error("Unable to create data directory");
return false;
}
return true;
}
private Path copyToDataDirectory(Path sourcePath, byte[] signature) throws DataException {
if (this.hash58 == null || this.filePath == null) {
return null;

View File

@@ -40,8 +40,8 @@ public class ArbitraryDataFileChunk extends ArbitraryDataFile {
try {
// Validate the file size (chunks have stricter limits)
long fileSize = Files.size(this.filePath);
if (fileSize > CHUNK_SIZE) {
LOGGER.error(String.format("DataFileChunk is too large: %d bytes (max chunk size: %d bytes)", fileSize, CHUNK_SIZE));
if (fileSize > MAX_CHUNK_SIZE) {
LOGGER.error(String.format("DataFileChunk is too large: %d bytes (max chunk size: %d bytes)", fileSize, MAX_CHUNK_SIZE));
return ValidationResult.FILE_TOO_LARGE;
}

View File

@@ -34,6 +34,7 @@ public class ArbitraryDataRenderer {
private final String resourceId;
private final ResourceIdType resourceIdType;
private final Service service;
private String theme = "light";
private String inPath;
private final String secret58;
private final String prefix;
@@ -77,7 +78,7 @@ public class ArbitraryDataRenderer {
// If async is requested, show a loading screen whilst build is in progress
if (async) {
arbitraryDataReader.loadAsynchronously(false, 10);
return this.getLoadingResponse(service, resourceId);
return this.getLoadingResponse(service, resourceId, theme);
}
// Otherwise, loop until we have data
@@ -119,7 +120,7 @@ public class ArbitraryDataRenderer {
byte[] data = Files.readAllBytes(Paths.get(filePath)); // TODO: limit file size that can be read into memory
HTMLParser htmlParser = new HTMLParser(resourceId, inPath, prefix, usePrefix, data);
htmlParser.addAdditionalHeaderTags();
response.addHeader("Content-Security-Policy", "default-src 'self' 'unsafe-inline'; media-src 'self' blob:");
response.addHeader("Content-Security-Policy", "default-src 'self' 'unsafe-inline' 'unsafe-eval'; media-src 'self' blob:");
response.setContentType(context.getMimeType(filename));
response.setContentLength(htmlParser.getData().length);
response.getOutputStream().write(htmlParser.getData());
@@ -128,7 +129,7 @@ public class ArbitraryDataRenderer {
// Regular file - can be streamed directly
File file = new File(filePath);
FileInputStream inputStream = new FileInputStream(file);
response.addHeader("Content-Security-Policy", "default-src 'self' 'unsafe-inline'; media-src 'self' blob:");
response.addHeader("Content-Security-Policy", "default-src 'self'");
response.setContentType(context.getMimeType(filename));
int bytesRead, length = 0;
byte[] buffer = new byte[10240];
@@ -171,7 +172,7 @@ public class ArbitraryDataRenderer {
return userPath;
}
private HttpServletResponse getLoadingResponse(Service service, String name) {
private HttpServletResponse getLoadingResponse(Service service, String name, String theme) {
String responseString = "";
URL url = Resources.getResource("loading/index.html");
try {
@@ -180,6 +181,7 @@ public class ArbitraryDataRenderer {
// Replace vars
responseString = responseString.replace("%%SERVICE%%", service.toString());
responseString = responseString.replace("%%NAME%%", name);
responseString = responseString.replace("%%THEME%%", theme);
} catch (IOException e) {
LOGGER.info("Unable to show loading screen: {}", e.getMessage());
@@ -210,4 +212,8 @@ public class ArbitraryDataRenderer {
return indexFiles;
}
public void setTheme(String theme) {
this.theme = theme;
}
}

View File

@@ -8,13 +8,7 @@ import java.math.BigInteger;
import java.math.RoundingMode;
import java.text.DecimalFormat;
import java.text.NumberFormat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.*;
import java.util.stream.Collectors;
import org.apache.logging.log4j.Level;
@@ -28,7 +22,7 @@ import org.qortal.asset.Asset;
import org.qortal.at.AT;
import org.qortal.block.BlockChain.BlockTimingByHeight;
import org.qortal.block.BlockChain.AccountLevelShareBin;
import org.qortal.controller.Controller;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.AccountBalanceData;
import org.qortal.data.account.AccountData;
@@ -320,7 +314,7 @@ public class Block {
byte[] reference = parentBlockData.getSignature();
// Fetch our list of online accounts
List<OnlineAccountData> onlineAccounts = Controller.getInstance().getOnlineAccounts();
List<OnlineAccountData> onlineAccounts = OnlineAccountsManager.getInstance().getOnlineAccounts();
if (onlineAccounts.isEmpty()) {
LOGGER.error("No online accounts - not even our own?");
return null;
@@ -333,6 +327,11 @@ public class Block {
onlineAccountsTimestamp = onlineAccountData.getTimestamp();
}
// Load sorted list of reward share public keys into memory, so that the indexes can be obtained.
// This is up to 100x faster than querying each index separately. For 4150 reward share keys, it
// was taking around 5000ms to query individually, vs 50ms using this approach.
List<byte[]> allRewardSharePublicKeys = repository.getAccountRepository().getRewardSharePublicKeys();
// Map using index into sorted list of reward-shares as key
Map<Integer, OnlineAccountData> indexedOnlineAccounts = new HashMap<>();
for (OnlineAccountData onlineAccountData : onlineAccounts) {
@@ -340,7 +339,7 @@ public class Block {
if (onlineAccountData.getTimestamp() != onlineAccountsTimestamp)
continue;
Integer accountIndex = repository.getAccountRepository().getRewardShareIndex(onlineAccountData.getPublicKey());
Integer accountIndex = getRewardShareIndex(onlineAccountData.getPublicKey(), allRewardSharePublicKeys);
if (accountIndex == null)
// Online account (reward-share) with current timestamp but reward-share cancelled
continue;
@@ -988,10 +987,10 @@ public class Block {
byte[] onlineTimestampBytes = Longs.toByteArray(onlineTimestamp);
// If this block is much older than current online timestamp, then there's no point checking current online accounts
List<OnlineAccountData> currentOnlineAccounts = onlineTimestamp < NTP.getTime() - Controller.ONLINE_TIMESTAMP_MODULUS
List<OnlineAccountData> currentOnlineAccounts = onlineTimestamp < NTP.getTime() - OnlineAccountsManager.ONLINE_TIMESTAMP_MODULUS
? null
: Controller.getInstance().getOnlineAccounts();
List<OnlineAccountData> latestBlocksOnlineAccounts = Controller.getInstance().getLatestBlocksOnlineAccounts();
: OnlineAccountsManager.getInstance().getOnlineAccounts();
List<OnlineAccountData> latestBlocksOnlineAccounts = OnlineAccountsManager.getInstance().getLatestBlocksOnlineAccounts();
// Extract online accounts' timestamp signatures from block data
List<byte[]> onlineAccountsSignatures = BlockTransformer.decodeTimestampSignatures(this.blockData.getOnlineAccountsSignatures());
@@ -1369,7 +1368,7 @@ public class Block {
postBlockTidy();
// Give Controller our cached, valid online accounts data (if any) to help reduce CPU load for next block
Controller.getInstance().pushLatestBlocksOnlineAccounts(this.cachedValidOnlineAccounts);
OnlineAccountsManager.getInstance().pushLatestBlocksOnlineAccounts(this.cachedValidOnlineAccounts);
// Log some debugging info relating to the block weight calculation
this.logDebugInfo();
@@ -1588,7 +1587,7 @@ public class Block {
postBlockTidy();
// Remove any cached, valid online accounts data from Controller
Controller.getInstance().popLatestBlocksOnlineAccounts();
OnlineAccountsManager.getInstance().popLatestBlocksOnlineAccounts();
}
protected void orphanTransactionsFromBlock() throws DataException {
@@ -2029,6 +2028,26 @@ public class Block {
this.repository.getAccountRepository().tidy();
}
// Utils
/**
* Find index of rewardSharePublicKey in list of rewardSharePublicKeys
*
* @param rewardSharePublicKey - the key to query
* @param rewardSharePublicKeys - a sorted list of keys
* @return - the index of the key, or null if not found
*/
private static Integer getRewardShareIndex(byte[] rewardSharePublicKey, List<byte[]> rewardSharePublicKeys) {
int index = 0;
for (byte[] publicKey : rewardSharePublicKeys) {
if (Arrays.equals(rewardSharePublicKey, publicKey)) {
return index;
}
index++;
}
return null;
}
private void logDebugInfo() {
try {
// Avoid calculations if possible. We have to check against INFO here, since Level.isMoreSpecificThan() confusingly uses <= rather than just <

View File

@@ -110,7 +110,7 @@ public class BlockMinter extends Thread {
continue;
// No online accounts? (e.g. during startup)
if (Controller.getInstance().getOnlineAccounts().isEmpty())
if (OnlineAccountsManager.getInstance().getOnlineAccounts().isEmpty())
continue;
List<MintingAccountData> mintingAccountsData = repository.getAccountRepository().getMintingAccounts();
@@ -148,7 +148,8 @@ public class BlockMinter extends Thread {
}
}
List<Peer> peers = Network.getInstance().getHandshakedPeers();
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
BlockData lastBlockData = blockRepository.getLastBlock();
// Disregard peers that have "misbehaved" recently
@@ -478,7 +479,7 @@ public class BlockMinter extends Thread {
throw new DataException("Ignoring attempt to mint testing block for non-test chain!");
// Ensure mintingAccount is 'online' so blocks can be minted
Controller.getInstance().ensureTestingAccountsOnline(mintingAndOnlineAccounts);
OnlineAccountsManager.getInstance().ensureTestingAccountsOnline(mintingAndOnlineAccounts);
PrivateKeyAccount mintingAccount = mintingAndOnlineAccounts[0];
@@ -544,7 +545,7 @@ public class BlockMinter extends Thread {
}
NumberFormat formatter = new DecimalFormat("0.###E0");
List<Peer> peers = Network.getInstance().getHandshakedPeers();
List<Peer> peers = Network.getInstance().getImmutableHandshakedPeers();
// Loop through handshaked peers and check for any new block candidates
for (Peer peer : peers) {
if (peer.getCommonBlockData() != null && peer.getCommonBlockData().getCommonBlockSummary() != null) {

View File

@@ -29,10 +29,6 @@ import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import com.google.common.primitives.Longs;
import org.qortal.account.Account;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.account.PublicKeyAccount;
import org.qortal.api.ApiService;
import org.qortal.api.DomainMapService;
import org.qortal.api.GatewayService;
@@ -43,11 +39,8 @@ import org.qortal.controller.arbitrary.*;
import org.qortal.controller.repository.PruneManager;
import org.qortal.controller.repository.NamesDatabaseIntegrityCheck;
import org.qortal.controller.tradebot.TradeBot;
import org.qortal.data.account.MintingAccountData;
import org.qortal.data.account.RewardShareData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.data.network.PeerChainTipData;
import org.qortal.data.network.PeerData;
import org.qortal.data.transaction.ChatTransactionData;
@@ -65,7 +58,6 @@ import org.qortal.repository.hsqldb.HSQLDBRepositoryFactory;
import org.qortal.settings.Settings;
import org.qortal.transaction.Transaction;
import org.qortal.transaction.Transaction.TransactionType;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.utils.*;
public class Controller extends Thread {
@@ -88,25 +80,6 @@ public class Controller extends Thread {
private static final long NTP_PRE_SYNC_CHECK_PERIOD = 5 * 1000L; // ms
private static final long NTP_POST_SYNC_CHECK_PERIOD = 5 * 60 * 1000L; // ms
private static final long DELETE_EXPIRED_INTERVAL = 5 * 60 * 1000L; // ms
private static final int MAX_INCOMING_TRANSACTIONS = 5000;
/** Minimum time before considering an invalid unconfirmed transaction as "stale" */
public static final long INVALID_TRANSACTION_STALE_TIMEOUT = 30 * 60 * 1000L; // ms
/** Minimum frequency to re-request stale unconfirmed transactions from peers, to recheck validity */
public static final long INVALID_TRANSACTION_RECHECK_INTERVAL = 60 * 60 * 1000L; // ms\
/** Minimum frequency to re-request expired unconfirmed transactions from peers, to recheck validity
* This mainly exists to stop expired transactions from bloating the list */
public static final long EXPIRED_TRANSACTION_RECHECK_INTERVAL = 10 * 60 * 1000L; // ms
// To do with online accounts list
private static final long ONLINE_ACCOUNTS_TASKS_INTERVAL = 10 * 1000L; // ms
private static final long ONLINE_ACCOUNTS_BROADCAST_INTERVAL = 1 * 60 * 1000L; // ms
public static final long ONLINE_TIMESTAMP_MODULUS = 5 * 60 * 1000L;
private static final long LAST_SEEN_EXPIRY_PERIOD = (ONLINE_TIMESTAMP_MODULUS * 2) + (1 * 60 * 1000L);
/** How many (latest) blocks' worth of online accounts we cache */
private static final int MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS = 2;
private static final long ONLINE_ACCOUNTS_V2_PEER_VERSION = 0x0300020000L;
private static volatile boolean isStopping = false;
private static BlockMinter blockMinter = null;
@@ -138,25 +111,12 @@ public class Controller extends Thread {
private long ntpCheckTimestamp = startTime; // ms
private long deleteExpiredTimestamp = startTime + DELETE_EXPIRED_INTERVAL; // ms
private long onlineAccountsTasksTimestamp = startTime + ONLINE_ACCOUNTS_TASKS_INTERVAL; // ms
/** Whether we can mint new blocks, as reported by BlockMinter. */
private volatile boolean isMintingPossible = false;
/** List of incoming transaction that are in the import queue */
private List<TransactionData> incomingTransactions = Collections.synchronizedList(new ArrayList<>());
/** List of recent invalid unconfirmed transactions */
private Map<String, Long> invalidUnconfirmedTransactions = Collections.synchronizedMap(new HashMap<>());
/** Lock for only allowing one blockchain-modifying codepath at a time. e.g. synchronization or newly minted block. */
private final ReentrantLock blockchainLock = new ReentrantLock();
/** Cache of current 'online accounts' */
List<OnlineAccountData> onlineAccounts = new ArrayList<>();
/** Cache of latest blocks' online accounts */
Deque<List<OnlineAccountData>> latestBlocksOnlineAccounts = new ArrayDeque<>(MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS);
// Stats
@XmlAccessorType(XmlAccessType.FIELD)
public static class StatsSnapshot {
@@ -469,6 +429,12 @@ public class Controller extends Thread {
ArbitraryDataStorageManager.getInstance().start();
ArbitraryDataRenderManager.getInstance().start();
LOGGER.info("Starting online accounts manager");
OnlineAccountsManager.getInstance().start();
LOGGER.info("Starting transaction importer");
TransactionImporter.getInstance().start();
// Auto-update service?
if (Settings.getInstance().isAutoUpdateEnabled()) {
LOGGER.info("Starting auto-update");
@@ -566,11 +532,6 @@ public class Controller extends Thread {
}
}
// Process incoming transactions queue
processIncomingTransactionsQueue();
// Clean up invalid incoming transactions list
cleanupInvalidTransactionsList(now);
// Clean up arbitrary data request cache
ArbitraryDataManager.getInstance().cleanupRequestCache(now);
// Clean up arbitrary data queues and lists
@@ -639,12 +600,6 @@ public class Controller extends Thread {
deleteExpiredTimestamp = now + DELETE_EXPIRED_INTERVAL;
deleteExpiredTransactions();
}
// Perform tasks to do with managing online accounts list
if (now >= onlineAccountsTasksTimestamp) {
onlineAccountsTasksTimestamp = now + ONLINE_ACCOUNTS_TASKS_INTERVAL;
performOnlineAccountsTasks();
}
}
} catch (InterruptedException e) {
// Clear interrupted flag so we can shutdown trim threads
@@ -762,7 +717,7 @@ public class Controller extends Thread {
return;
}
final int numberOfPeers = Network.getInstance().getHandshakedPeers().size();
final int numberOfPeers = Network.getInstance().getImmutableHandshakedPeers().size();
final int height = getChainHeight();
@@ -771,6 +726,10 @@ public class Controller extends Thread {
String actionText;
// Use a more tolerant latest block timestamp in the isUpToDate() calls below to reduce misleading statuses.
// Any block in the last 30 minutes is considered "up to date" for the purposes of displaying statuses.
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
synchronized (Synchronizer.getInstance().syncLock) {
if (this.isMintingPossible) {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_ENABLED");
@@ -780,10 +739,14 @@ public class Controller extends Thread {
actionText = Translator.INSTANCE.translate("SysTray", "CONNECTING");
SysTray.getInstance().setTrayIcon(3);
}
else if (!this.isUpToDate()) {
else if (!this.isUpToDate(minLatestBlockTimestamp) && Synchronizer.getInstance().isSynchronizing()) {
actionText = String.format("%s - %d%%", Translator.INSTANCE.translate("SysTray", "SYNCHRONIZING_BLOCKCHAIN"), Synchronizer.getInstance().getSyncPercent());
SysTray.getInstance().setTrayIcon(3);
}
else if (!this.isUpToDate(minLatestBlockTimestamp)) {
actionText = String.format("%s", Translator.INSTANCE.translate("SysTray", "SYNCHRONIZING_BLOCKCHAIN"));
SysTray.getInstance().setTrayIcon(3);
}
else {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_DISABLED");
SysTray.getInstance().setTrayIcon(4);
@@ -833,120 +796,6 @@ public class Controller extends Thread {
}
}
// Incoming transactions queue
private boolean incomingTransactionQueueContains(byte[] signature) {
synchronized (incomingTransactions) {
return incomingTransactions.stream().anyMatch(t -> Arrays.equals(t.getSignature(), signature));
}
}
private void removeIncomingTransaction(byte[] signature) {
incomingTransactions.removeIf(t -> Arrays.equals(t.getSignature(), signature));
}
private void processIncomingTransactionsQueue() {
if (this.incomingTransactions.size() == 0) {
// Don't bother locking if there are no new transactions to process
return;
}
if (Synchronizer.getInstance().isSyncRequested() || Synchronizer.getInstance().isSynchronizing()) {
// Prioritize syncing, and don't attempt to lock
return;
}
try {
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
if (!blockchainLock.tryLock(2, TimeUnit.SECONDS)) {
LOGGER.trace(() -> String.format("Too busy to process incoming transactions queue"));
return;
}
} catch (InterruptedException e) {
LOGGER.info("Interrupted when trying to acquire blockchain lock");
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
LOGGER.debug("Processing incoming transactions queue (size {})...", this.incomingTransactions.size());
// Take a copy of incomingTransactions so we can release the lock
List<TransactionData>incomingTransactionsCopy = new ArrayList<>(this.incomingTransactions);
// Iterate through incoming transactions list
Iterator iterator = incomingTransactionsCopy.iterator();
while (iterator.hasNext()) {
if (isStopping) {
return;
}
if (Synchronizer.getInstance().isSyncRequestPending()) {
LOGGER.debug("Breaking out of transaction processing loop with {} remaining, because a sync request is pending", incomingTransactionsCopy.size());
return;
}
TransactionData transactionData = (TransactionData) iterator.next();
Transaction transaction = Transaction.fromData(repository, transactionData);
// Check signature
if (!transaction.isSignatureValid()) {
LOGGER.trace(() -> String.format("Ignoring %s transaction %s with invalid signature", transactionData.getType().name(), Base58.encode(transactionData.getSignature())));
removeIncomingTransaction(transactionData.getSignature());
continue;
}
ValidationResult validationResult = transaction.importAsUnconfirmed();
if (validationResult == ValidationResult.TRANSACTION_ALREADY_EXISTS) {
LOGGER.trace(() -> String.format("Ignoring existing transaction %s", Base58.encode(transactionData.getSignature())));
removeIncomingTransaction(transactionData.getSignature());
continue;
}
if (validationResult == ValidationResult.NO_BLOCKCHAIN_LOCK) {
LOGGER.trace(() -> String.format("Couldn't lock blockchain to import unconfirmed transaction", Base58.encode(transactionData.getSignature())));
removeIncomingTransaction(transactionData.getSignature());
continue;
}
if (validationResult != ValidationResult.OK) {
final String signature58 = Base58.encode(transactionData.getSignature());
LOGGER.trace(() -> String.format("Ignoring invalid (%s) %s transaction %s", validationResult.name(), transactionData.getType().name(), signature58));
Long now = NTP.getTime();
if (now != null && now - transactionData.getTimestamp() > INVALID_TRANSACTION_STALE_TIMEOUT) {
Long expiryLength = INVALID_TRANSACTION_RECHECK_INTERVAL;
if (validationResult == ValidationResult.TIMESTAMP_TOO_OLD) {
// Use shorter recheck interval for expired transactions
expiryLength = EXPIRED_TRANSACTION_RECHECK_INTERVAL;
}
Long expiry = now + expiryLength;
LOGGER.debug("Adding stale invalid transaction {} to invalidUnconfirmedTransactions...", signature58);
// Invalid, unconfirmed transaction has become stale - add to invalidUnconfirmedTransactions so that we don't keep requesting it
invalidUnconfirmedTransactions.put(signature58, expiry);
}
removeIncomingTransaction(transactionData.getSignature());
continue;
}
LOGGER.debug(() -> String.format("Imported %s transaction %s", transactionData.getType().name(), Base58.encode(transactionData.getSignature())));
removeIncomingTransaction(transactionData.getSignature());
}
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while processing incoming transactions", e));
} finally {
LOGGER.debug("Finished processing incoming transactions queue");
blockchainLock.unlock();
}
}
private void cleanupInvalidTransactionsList(Long now) {
if (now == null) {
return;
}
// Periodically remove invalid unconfirmed transactions from the list, so that they can be fetched again
invalidUnconfirmedTransactions.entrySet().removeIf(entry -> entry.getValue() == null || entry.getValue() < now);
}
// Shutdown
@@ -975,6 +824,12 @@ public class Controller extends Thread {
ArbitraryDataStorageManager.getInstance().shutdown();
ArbitraryDataRenderManager.getInstance().shutdown();
LOGGER.info("Shutting down online accounts manager");
OnlineAccountsManager.getInstance().shutdown();
LOGGER.info("Shutting down transaction importer");
TransactionImporter.getInstance().shutdown();
if (blockMinter != null) {
LOGGER.info("Shutting down block minter");
blockMinter.shutdown();
@@ -1257,10 +1112,6 @@ public class Controller extends Thread {
onNetworkGetBlockMessage(peer, message);
break;
case TRANSACTION:
onNetworkTransactionMessage(peer, message);
break;
case GET_BLOCK_SUMMARIES:
onNetworkGetBlockSummariesMessage(peer, message);
break;
@@ -1274,31 +1125,35 @@ public class Controller extends Thread {
break;
case GET_TRANSACTION:
onNetworkGetTransactionMessage(peer, message);
TransactionImporter.getInstance().onNetworkGetTransactionMessage(peer, message);
break;
case TRANSACTION:
TransactionImporter.getInstance().onNetworkTransactionMessage(peer, message);
break;
case GET_UNCONFIRMED_TRANSACTIONS:
onNetworkGetUnconfirmedTransactionsMessage(peer, message);
TransactionImporter.getInstance().onNetworkGetUnconfirmedTransactionsMessage(peer, message);
break;
case TRANSACTION_SIGNATURES:
onNetworkTransactionSignaturesMessage(peer, message);
TransactionImporter.getInstance().onNetworkTransactionSignaturesMessage(peer, message);
break;
case GET_ONLINE_ACCOUNTS:
onNetworkGetOnlineAccountsMessage(peer, message);
OnlineAccountsManager.getInstance().onNetworkGetOnlineAccountsMessage(peer, message);
break;
case ONLINE_ACCOUNTS:
onNetworkOnlineAccountsMessage(peer, message);
OnlineAccountsManager.getInstance().onNetworkOnlineAccountsMessage(peer, message);
break;
case GET_ONLINE_ACCOUNTS_V2:
onNetworkGetOnlineAccountsV2Message(peer, message);
OnlineAccountsManager.getInstance().onNetworkGetOnlineAccountsV2Message(peer, message);
break;
case ONLINE_ACCOUNTS_V2:
onNetworkOnlineAccountsV2Message(peer, message);
OnlineAccountsManager.getInstance().onNetworkOnlineAccountsV2Message(peer, message);
break;
case GET_ARBITRARY_DATA:
@@ -1318,7 +1173,7 @@ public class Controller extends Thread {
break;
case ARBITRARY_SIGNATURES:
ArbitraryDataManager.getInstance().onNetworkArbitrarySignaturesMessage(peer, message);
// Not currently supported
break;
case GET_ARBITRARY_METADATA:
@@ -1335,6 +1190,7 @@ public class Controller extends Thread {
case TRADE_PRESENCES:
TradeBot.getInstance().onTradePresencesMessage(peer, message);
break;
default:
LOGGER.debug(() -> String.format("Unhandled %s message [ID %d] from peer %s", message.getType().name(), message.getId(), peer));
@@ -1434,16 +1290,6 @@ public class Controller extends Thread {
}
}
private void onNetworkTransactionMessage(Peer peer, Message message) {
TransactionMessage transactionMessage = (TransactionMessage) message;
TransactionData transactionData = transactionMessage.getTransactionData();
if (this.incomingTransactions.size() < MAX_INCOMING_TRANSACTIONS) {
if (!this.incomingTransactions.contains(transactionData)) {
this.incomingTransactions.add(transactionData);
}
}
}
private void onNetworkGetBlockSummariesMessage(Peer peer, Message message) {
GetBlockSummariesMessage getBlockSummariesMessage = (GetBlockSummariesMessage) message;
final byte[] parentSignature = getBlockSummariesMessage.getParentSignature();
@@ -1595,449 +1441,17 @@ public class Controller extends Thread {
Synchronizer.getInstance().requestSync();
}
private void onNetworkGetTransactionMessage(Peer peer, Message message) {
GetTransactionMessage getTransactionMessage = (GetTransactionMessage) message;
byte[] signature = getTransactionMessage.getSignature();
try (final Repository repository = RepositoryManager.getRepository()) {
TransactionData transactionData = repository.getTransactionRepository().fromSignature(signature);
if (transactionData == null) {
LOGGER.debug(() -> String.format("Ignoring GET_TRANSACTION request from peer %s for unknown transaction %s", peer, Base58.encode(signature)));
// Send no response at all???
return;
}
Message transactionMessage = new TransactionMessage(transactionData);
transactionMessage.setId(message.getId());
if (!peer.sendMessage(transactionMessage))
peer.disconnect("failed to send transaction");
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while send transaction %s to peer %s", Base58.encode(signature), peer), e);
}
}
private void onNetworkGetUnconfirmedTransactionsMessage(Peer peer, Message message) {
try (final Repository repository = RepositoryManager.getRepository()) {
List<byte[]> signatures = Collections.emptyList();
// If we're NOT up-to-date then don't send out unconfirmed transactions
// as it's possible they are already included in a later block that we don't have.
if (isUpToDate())
signatures = repository.getTransactionRepository().getUnconfirmedTransactionSignatures();
Message transactionSignaturesMessage = new TransactionSignaturesMessage(signatures);
if (!peer.sendMessage(transactionSignaturesMessage))
peer.disconnect("failed to send unconfirmed transaction signatures");
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while sending unconfirmed transaction signatures to peer %s", peer), e);
}
}
private void onNetworkTransactionSignaturesMessage(Peer peer, Message message) {
TransactionSignaturesMessage transactionSignaturesMessage = (TransactionSignaturesMessage) message;
List<byte[]> signatures = transactionSignaturesMessage.getSignatures();
try (final Repository repository = RepositoryManager.getRepository()) {
for (byte[] signature : signatures) {
String signature58 = Base58.encode(signature);
if (invalidUnconfirmedTransactions.containsKey(signature58)) {
// Previously invalid transaction - don't keep requesting it
// It will be periodically removed from invalidUnconfirmedTransactions to allow for rechecks
continue;
}
// Ignore if this transaction is in the queue
if (incomingTransactionQueueContains(signature)) {
LOGGER.trace(() -> String.format("Ignoring existing queued transaction %s from peer %s", Base58.encode(signature), peer));
continue;
}
// Do we have it already? (Before requesting transaction data itself)
if (repository.getTransactionRepository().exists(signature)) {
LOGGER.trace(() -> String.format("Ignoring existing transaction %s from peer %s", Base58.encode(signature), peer));
continue;
}
// Check isInterrupted() here and exit fast
if (Thread.currentThread().isInterrupted())
return;
// Fetch actual transaction data from peer
Message getTransactionMessage = new GetTransactionMessage(signature);
if (!peer.sendMessage(getTransactionMessage)) {
peer.disconnect("failed to request transaction");
return;
}
}
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while processing unconfirmed transactions from peer %s", peer), e);
}
}
private void onNetworkGetOnlineAccountsMessage(Peer peer, Message message) {
GetOnlineAccountsMessage getOnlineAccountsMessage = (GetOnlineAccountsMessage) message;
List<OnlineAccountData> excludeAccounts = getOnlineAccountsMessage.getOnlineAccounts();
// Send online accounts info, excluding entries with matching timestamp & public key from excludeAccounts
List<OnlineAccountData> accountsToSend;
synchronized (this.onlineAccounts) {
accountsToSend = new ArrayList<>(this.onlineAccounts);
}
Iterator<OnlineAccountData> iterator = accountsToSend.iterator();
SEND_ITERATOR:
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
for (int i = 0; i < excludeAccounts.size(); ++i) {
OnlineAccountData excludeAccountData = excludeAccounts.get(i);
if (onlineAccountData.getTimestamp() == excludeAccountData.getTimestamp() && Arrays.equals(onlineAccountData.getPublicKey(), excludeAccountData.getPublicKey())) {
iterator.remove();
continue SEND_ITERATOR;
}
}
}
Message onlineAccountsMessage = new OnlineAccountsMessage(accountsToSend);
peer.sendMessage(onlineAccountsMessage);
LOGGER.trace(() -> String.format("Sent %d of our %d online accounts to %s", accountsToSend.size(), this.onlineAccounts.size(), peer));
}
private void onNetworkOnlineAccountsMessage(Peer peer, Message message) {
OnlineAccountsMessage onlineAccountsMessage = (OnlineAccountsMessage) message;
List<OnlineAccountData> peersOnlineAccounts = onlineAccountsMessage.getOnlineAccounts();
LOGGER.trace(() -> String.format("Received %d online accounts from %s", peersOnlineAccounts.size(), peer));
try (final Repository repository = RepositoryManager.getRepository()) {
for (OnlineAccountData onlineAccountData : peersOnlineAccounts)
this.verifyAndAddAccount(repository, onlineAccountData);
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while verifying online accounts from peer %s", peer), e);
}
}
private void onNetworkGetOnlineAccountsV2Message(Peer peer, Message message) {
GetOnlineAccountsV2Message getOnlineAccountsMessage = (GetOnlineAccountsV2Message) message;
List<OnlineAccountData> excludeAccounts = getOnlineAccountsMessage.getOnlineAccounts();
// Send online accounts info, excluding entries with matching timestamp & public key from excludeAccounts
List<OnlineAccountData> accountsToSend;
synchronized (this.onlineAccounts) {
accountsToSend = new ArrayList<>(this.onlineAccounts);
}
Iterator<OnlineAccountData> iterator = accountsToSend.iterator();
SEND_ITERATOR:
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
for (int i = 0; i < excludeAccounts.size(); ++i) {
OnlineAccountData excludeAccountData = excludeAccounts.get(i);
if (onlineAccountData.getTimestamp() == excludeAccountData.getTimestamp() && Arrays.equals(onlineAccountData.getPublicKey(), excludeAccountData.getPublicKey())) {
iterator.remove();
continue SEND_ITERATOR;
}
}
}
Message onlineAccountsMessage = new OnlineAccountsV2Message(accountsToSend);
peer.sendMessage(onlineAccountsMessage);
LOGGER.trace(() -> String.format("Sent %d of our %d online accounts to %s", accountsToSend.size(), this.onlineAccounts.size(), peer));
}
private void onNetworkOnlineAccountsV2Message(Peer peer, Message message) {
OnlineAccountsV2Message onlineAccountsMessage = (OnlineAccountsV2Message) message;
List<OnlineAccountData> peersOnlineAccounts = onlineAccountsMessage.getOnlineAccounts();
LOGGER.trace(() -> String.format("Received %d online accounts from %s", peersOnlineAccounts.size(), peer));
try (final Repository repository = RepositoryManager.getRepository()) {
for (OnlineAccountData onlineAccountData : peersOnlineAccounts)
this.verifyAndAddAccount(repository, onlineAccountData);
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while verifying online accounts from peer %s", peer), e);
}
}
// Utilities
private void verifyAndAddAccount(Repository repository, OnlineAccountData onlineAccountData) throws DataException {
final Long now = NTP.getTime();
if (now == null)
return;
PublicKeyAccount otherAccount = new PublicKeyAccount(repository, onlineAccountData.getPublicKey());
// Check timestamp is 'recent' here
if (Math.abs(onlineAccountData.getTimestamp() - now) > ONLINE_TIMESTAMP_MODULUS * 2) {
LOGGER.trace(() -> String.format("Rejecting online account %s with out of range timestamp %d", otherAccount.getAddress(), onlineAccountData.getTimestamp()));
return;
}
// Verify
byte[] data = Longs.toByteArray(onlineAccountData.getTimestamp());
if (!otherAccount.verify(onlineAccountData.getSignature(), data)) {
LOGGER.trace(() -> String.format("Rejecting invalid online account %s", otherAccount.getAddress()));
return;
}
// Qortal: check online account is actually reward-share
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(onlineAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't even exist - probably not a good sign
LOGGER.trace(() -> String.format("Rejecting unknown online reward-share public key %s", Base58.encode(onlineAccountData.getPublicKey())));
return;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
LOGGER.trace(() -> String.format("Rejecting online reward-share with non-minting account %s", mintingAccount.getAddress()));
return;
}
synchronized (this.onlineAccounts) {
OnlineAccountData existingAccountData = this.onlineAccounts.stream().filter(account -> Arrays.equals(account.getPublicKey(), onlineAccountData.getPublicKey())).findFirst().orElse(null);
if (existingAccountData != null) {
if (existingAccountData.getTimestamp() < onlineAccountData.getTimestamp()) {
this.onlineAccounts.remove(existingAccountData);
LOGGER.trace(() -> String.format("Updated online account %s with timestamp %d (was %d)", otherAccount.getAddress(), onlineAccountData.getTimestamp(), existingAccountData.getTimestamp()));
} else {
LOGGER.trace(() -> String.format("Not updating existing online account %s", otherAccount.getAddress()));
return;
}
} else {
LOGGER.trace(() -> String.format("Added online account %s with timestamp %d", otherAccount.getAddress(), onlineAccountData.getTimestamp()));
}
this.onlineAccounts.add(onlineAccountData);
}
}
public void ensureTestingAccountsOnline(PrivateKeyAccount... onlineAccounts) {
if (!BlockChain.getInstance().isTestChain()) {
LOGGER.warn("Ignoring attempt to ensure test account is online for non-test chain!");
return;
}
final Long now = NTP.getTime();
if (now == null)
return;
final long onlineAccountsTimestamp = Controller.toOnlineAccountTimestamp(now);
byte[] timestampBytes = Longs.toByteArray(onlineAccountsTimestamp);
synchronized (this.onlineAccounts) {
this.onlineAccounts.clear();
for (PrivateKeyAccount onlineAccount : onlineAccounts) {
// Check mintingAccount is actually reward-share?
byte[] signature = onlineAccount.sign(timestampBytes);
byte[] publicKey = onlineAccount.getPublicKey();
OnlineAccountData ourOnlineAccountData = new OnlineAccountData(onlineAccountsTimestamp, signature, publicKey);
this.onlineAccounts.add(ourOnlineAccountData);
}
}
}
private void performOnlineAccountsTasks() {
final Long now = NTP.getTime();
if (now == null)
return;
// Expire old entries
final long cutoffThreshold = now - LAST_SEEN_EXPIRY_PERIOD;
synchronized (this.onlineAccounts) {
Iterator<OnlineAccountData> iterator = this.onlineAccounts.iterator();
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
if (onlineAccountData.getTimestamp() < cutoffThreshold) {
iterator.remove();
LOGGER.trace(() -> {
PublicKeyAccount otherAccount = new PublicKeyAccount(null, onlineAccountData.getPublicKey());
return String.format("Removed expired online account %s with timestamp %d", otherAccount.getAddress(), onlineAccountData.getTimestamp());
});
}
}
}
// Request data from other peers?
if ((this.onlineAccountsTasksTimestamp % ONLINE_ACCOUNTS_BROADCAST_INTERVAL) < ONLINE_ACCOUNTS_TASKS_INTERVAL) {
List<OnlineAccountData> safeOnlineAccounts;
synchronized (this.onlineAccounts) {
safeOnlineAccounts = new ArrayList<>(this.onlineAccounts);
}
Message messageV1 = new GetOnlineAccountsMessage(safeOnlineAccounts);
Message messageV2 = new GetOnlineAccountsV2Message(safeOnlineAccounts);
Network.getInstance().broadcast(peer ->
peer.getPeersVersion() >= ONLINE_ACCOUNTS_V2_PEER_VERSION ? messageV2 : messageV1
);
}
// Refresh our online accounts signatures?
sendOurOnlineAccountsInfo();
}
private void sendOurOnlineAccountsInfo() {
final Long now = NTP.getTime();
if (now != null) {
List<MintingAccountData> mintingAccounts;
try (final Repository repository = RepositoryManager.getRepository()) {
mintingAccounts = repository.getAccountRepository().getMintingAccounts();
// We have no accounts, but don't reset timestamp
if (mintingAccounts.isEmpty())
return;
// Only reward-share accounts allowed
Iterator<MintingAccountData> iterator = mintingAccounts.iterator();
int i = 0;
while (iterator.hasNext()) {
MintingAccountData mintingAccountData = iterator.next();
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(mintingAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't even exist - probably not a good sign
iterator.remove();
continue;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
iterator.remove();
continue;
}
if (++i > 2) {
iterator.remove();
continue;
}
}
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue trying to fetch minting accounts: %s", e.getMessage()));
return;
}
// 'current' timestamp
final long onlineAccountsTimestamp = Controller.toOnlineAccountTimestamp(now);
boolean hasInfoChanged = false;
byte[] timestampBytes = Longs.toByteArray(onlineAccountsTimestamp);
List<OnlineAccountData> ourOnlineAccounts = new ArrayList<>();
MINTING_ACCOUNTS:
for (MintingAccountData mintingAccountData : mintingAccounts) {
PrivateKeyAccount mintingAccount = new PrivateKeyAccount(null, mintingAccountData.getPrivateKey());
byte[] signature = mintingAccount.sign(timestampBytes);
byte[] publicKey = mintingAccount.getPublicKey();
// Our account is online
OnlineAccountData ourOnlineAccountData = new OnlineAccountData(onlineAccountsTimestamp, signature, publicKey);
synchronized (this.onlineAccounts) {
Iterator<OnlineAccountData> iterator = this.onlineAccounts.iterator();
while (iterator.hasNext()) {
OnlineAccountData existingOnlineAccountData = iterator.next();
if (Arrays.equals(existingOnlineAccountData.getPublicKey(), ourOnlineAccountData.getPublicKey())) {
// If our online account is already present, with same timestamp, then move on to next mintingAccount
if (existingOnlineAccountData.getTimestamp() == onlineAccountsTimestamp)
continue MINTING_ACCOUNTS;
// If our online account is already present, but with older timestamp, then remove it
iterator.remove();
break;
}
}
this.onlineAccounts.add(ourOnlineAccountData);
}
LOGGER.trace(() -> String.format("Added our online account %s with timestamp %d", mintingAccount.getAddress(), onlineAccountsTimestamp));
ourOnlineAccounts.add(ourOnlineAccountData);
hasInfoChanged = true;
}
if (!hasInfoChanged)
return;
Message messageV1 = new OnlineAccountsMessage(ourOnlineAccounts);
Message messageV2 = new OnlineAccountsV2Message(ourOnlineAccounts);
Network.getInstance().broadcast(peer ->
peer.getPeersVersion() >= ONLINE_ACCOUNTS_V2_PEER_VERSION ? messageV2 : messageV1
);
LOGGER.trace(() -> String.format("Broadcasted %d online account%s with timestamp %d", ourOnlineAccounts.size(), (ourOnlineAccounts.size() != 1 ? "s" : ""), onlineAccountsTimestamp));
}
}
public static long toOnlineAccountTimestamp(long timestamp) {
return (timestamp / ONLINE_TIMESTAMP_MODULUS) * ONLINE_TIMESTAMP_MODULUS;
}
/** Returns list of online accounts with timestamp recent enough to be considered currently online. */
public List<OnlineAccountData> getOnlineAccounts() {
final long onlineTimestamp = Controller.toOnlineAccountTimestamp(NTP.getTime());
synchronized (this.onlineAccounts) {
return this.onlineAccounts.stream().filter(account -> account.getTimestamp() == onlineTimestamp).collect(Collectors.toList());
}
}
/** Returns cached, unmodifiable list of latest block's online accounts. */
public List<OnlineAccountData> getLatestBlocksOnlineAccounts() {
synchronized (this.latestBlocksOnlineAccounts) {
return this.latestBlocksOnlineAccounts.peekFirst();
}
}
/** Caches list of latest block's online accounts. Typically called by Block.process() */
public void pushLatestBlocksOnlineAccounts(List<OnlineAccountData> latestBlocksOnlineAccounts) {
synchronized (this.latestBlocksOnlineAccounts) {
if (this.latestBlocksOnlineAccounts.size() == MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS)
this.latestBlocksOnlineAccounts.pollLast();
this.latestBlocksOnlineAccounts.addFirst(latestBlocksOnlineAccounts == null
? Collections.emptyList()
: Collections.unmodifiableList(latestBlocksOnlineAccounts));
}
}
/** Reverts list of latest block's online accounts. Typically called by Block.orphan() */
public void popLatestBlocksOnlineAccounts() {
synchronized (this.latestBlocksOnlineAccounts) {
this.latestBlocksOnlineAccounts.pollFirst();
}
}
/** Returns a list of peers that are not misbehaving, and have a recent block. */
public List<Peer> getRecentBehavingPeers() {
final Long minLatestBlockTimestamp = getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null)
return null;
List<Peer> peers = Network.getInstance().getHandshakedPeers();
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
// Filter out unsuitable peers
Iterator<Peer> iterator = peers.iterator();
@@ -2086,7 +1500,8 @@ public class Controller extends Thread {
if (latestBlockData == null || latestBlockData.getTimestamp() < minLatestBlockTimestamp)
return false;
List<Peer> peers = Network.getInstance().getHandshakedPeers();
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
if (peers == null)
return false;


@@ -0,0 +1,524 @@
package org.qortal.controller;
import com.google.common.primitives.Longs;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.account.Account;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.account.PublicKeyAccount;
import org.qortal.block.BlockChain;
import org.qortal.data.account.MintingAccountData;
import org.qortal.data.account.RewardShareData;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.network.message.*;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import java.util.*;
import java.util.stream.Collectors;
public class OnlineAccountsManager extends Thread {
private class OurOnlineAccountsThread extends Thread {
public void run() {
try {
while (!isStopping) {
Thread.sleep(10000L);
// Refresh our online accounts signatures?
sendOurOnlineAccountsInfo();
}
} catch (InterruptedException e) {
// Fall through to exit thread
}
}
}
private static final Logger LOGGER = LogManager.getLogger(OnlineAccountsManager.class);
private static OnlineAccountsManager instance;
private volatile boolean isStopping = false;
// To do with online accounts list
private static final long ONLINE_ACCOUNTS_TASKS_INTERVAL = 10 * 1000L; // ms
private static final long ONLINE_ACCOUNTS_BROADCAST_INTERVAL = 1 * 60 * 1000L; // ms
public static final long ONLINE_TIMESTAMP_MODULUS = 5 * 60 * 1000L;
private static final long LAST_SEEN_EXPIRY_PERIOD = (ONLINE_TIMESTAMP_MODULUS * 2) + (1 * 60 * 1000L);
/** How many (latest) blocks' worth of online accounts we cache */
private static final int MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS = 2;
private static final long ONLINE_ACCOUNTS_V2_PEER_VERSION = 0x0300020000L;
private long onlineAccountsTasksTimestamp = Controller.startTime + ONLINE_ACCOUNTS_TASKS_INTERVAL; // ms
private final List<OnlineAccountData> onlineAccountsImportQueue = Collections.synchronizedList(new ArrayList<>());
/** Cache of current 'online accounts' */
List<OnlineAccountData> onlineAccounts = new ArrayList<>();
/** Cache of latest blocks' online accounts */
Deque<List<OnlineAccountData>> latestBlocksOnlineAccounts = new ArrayDeque<>(MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS);
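// For reference, the 0x0300020000L threshold appears to correspond to peer version 3.2.0, assuming
// peer versions are packed as (major << 32) | (minor << 16) | patch:
//   (3L << 32) | (2L << 16) | 0L == 0x0300020000L
// so only peers reporting 3.2.0 or later would receive the V2 online-accounts messages.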
public OnlineAccountsManager() {
}
public static synchronized OnlineAccountsManager getInstance() {
if (instance == null) {
instance = new OnlineAccountsManager();
}
return instance;
}
public void run() {
// Start separate thread to prepare our online accounts
// This could be converted to a thread pool later if more concurrency is needed
OurOnlineAccountsThread ourOnlineAccountsThread = new OurOnlineAccountsThread();
ourOnlineAccountsThread.start();
try {
while (!Controller.isStopping()) {
Thread.sleep(100L);
final Long now = NTP.getTime();
if (now == null) {
continue;
}
// Perform tasks to do with managing online accounts list
if (now >= onlineAccountsTasksTimestamp) {
onlineAccountsTasksTimestamp = now + ONLINE_ACCOUNTS_TASKS_INTERVAL;
performOnlineAccountsTasks();
}
// Process queued online account verifications
this.processOnlineAccountsImportQueue();
}
} catch (InterruptedException e) {
// Fall through to exit thread
}
ourOnlineAccountsThread.interrupt();
}
public void shutdown() {
isStopping = true;
this.interrupt();
}
// Online accounts import queue
private void processOnlineAccountsImportQueue() {
if (this.onlineAccountsImportQueue.isEmpty()) {
// Nothing to do
return;
}
LOGGER.debug("Processing online accounts import queue (size: {})", this.onlineAccountsImportQueue.size());
try (final Repository repository = RepositoryManager.getRepository()) {
List<OnlineAccountData> onlineAccountDataCopy = new ArrayList<>(this.onlineAccountsImportQueue);
for (OnlineAccountData onlineAccountData : onlineAccountDataCopy) {
if (isStopping) {
return;
}
this.verifyAndAddAccount(repository, onlineAccountData);
// Remove from queue
onlineAccountsImportQueue.remove(onlineAccountData);
}
LOGGER.debug("Finished processing online accounts import queue");
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while verifying online accounts"), e);
}
}
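// Items are removed from the import queue one at a time, after each verification attempt, so a
// shutdown part-way through a pass leaves the untouched remainder queued for the next run.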
// Utilities
private void verifyAndAddAccount(Repository repository, OnlineAccountData onlineAccountData) throws DataException {
final Long now = NTP.getTime();
if (now == null)
return;
PublicKeyAccount otherAccount = new PublicKeyAccount(repository, onlineAccountData.getPublicKey());
// Check timestamp is 'recent' here
if (Math.abs(onlineAccountData.getTimestamp() - now) > ONLINE_TIMESTAMP_MODULUS * 2) {
LOGGER.trace(() -> String.format("Rejecting online account %s with out of range timestamp %d", otherAccount.getAddress(), onlineAccountData.getTimestamp()));
return;
}
// Verify
byte[] data = Longs.toByteArray(onlineAccountData.getTimestamp());
if (!otherAccount.verify(onlineAccountData.getSignature(), data)) {
LOGGER.trace(() -> String.format("Rejecting invalid online account %s", otherAccount.getAddress()));
return;
}
// Qortal: check online account is actually reward-share
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(onlineAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't even exist - probably not a good sign
LOGGER.trace(() -> String.format("Rejecting unknown online reward-share public key %s", Base58.encode(onlineAccountData.getPublicKey())));
return;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
LOGGER.trace(() -> String.format("Rejecting online reward-share with non-minting account %s", mintingAccount.getAddress()));
return;
}
synchronized (this.onlineAccounts) {
OnlineAccountData existingAccountData = this.onlineAccounts.stream().filter(account -> Arrays.equals(account.getPublicKey(), onlineAccountData.getPublicKey())).findFirst().orElse(null);
if (existingAccountData != null) {
if (existingAccountData.getTimestamp() < onlineAccountData.getTimestamp()) {
this.onlineAccounts.remove(existingAccountData);
LOGGER.trace(() -> String.format("Updated online account %s with timestamp %d (was %d)", otherAccount.getAddress(), onlineAccountData.getTimestamp(), existingAccountData.getTimestamp()));
} else {
LOGGER.trace(() -> String.format("Not updating existing online account %s", otherAccount.getAddress()));
return;
}
} else {
LOGGER.trace(() -> String.format("Added online account %s with timestamp %d", otherAccount.getAddress(), onlineAccountData.getTimestamp()));
}
this.onlineAccounts.add(onlineAccountData);
}
}
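// The timestamp gate above accepts accounts signed within two modulus periods (10 minutes) either
// side of local NTP time; this lines up with LAST_SEEN_EXPIRY_PERIOD (two periods plus one minute)
// used to expire entries in performOnlineAccountsTasks().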
public void ensureTestingAccountsOnline(PrivateKeyAccount... onlineAccounts) {
if (!BlockChain.getInstance().isTestChain()) {
LOGGER.warn("Ignoring attempt to ensure test account is online for non-test chain!");
return;
}
final Long now = NTP.getTime();
if (now == null)
return;
final long onlineAccountsTimestamp = toOnlineAccountTimestamp(now);
byte[] timestampBytes = Longs.toByteArray(onlineAccountsTimestamp);
synchronized (this.onlineAccounts) {
this.onlineAccounts.clear();
for (PrivateKeyAccount onlineAccount : onlineAccounts) {
// Check mintingAccount is actually reward-share?
byte[] signature = onlineAccount.sign(timestampBytes);
byte[] publicKey = onlineAccount.getPublicKey();
OnlineAccountData ourOnlineAccountData = new OnlineAccountData(onlineAccountsTimestamp, signature, publicKey);
this.onlineAccounts.add(ourOnlineAccountData);
}
}
}
private void performOnlineAccountsTasks() {
final Long now = NTP.getTime();
if (now == null)
return;
// Expire old entries
final long cutoffThreshold = now - LAST_SEEN_EXPIRY_PERIOD;
synchronized (this.onlineAccounts) {
Iterator<OnlineAccountData> iterator = this.onlineAccounts.iterator();
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
if (onlineAccountData.getTimestamp() < cutoffThreshold) {
iterator.remove();
LOGGER.trace(() -> {
PublicKeyAccount otherAccount = new PublicKeyAccount(null, onlineAccountData.getPublicKey());
return String.format("Removed expired online account %s with timestamp %d", otherAccount.getAddress(), onlineAccountData.getTimestamp());
});
}
}
}
// Request data from other peers?
if ((this.onlineAccountsTasksTimestamp % ONLINE_ACCOUNTS_BROADCAST_INTERVAL) < ONLINE_ACCOUNTS_TASKS_INTERVAL) {
List<OnlineAccountData> safeOnlineAccounts;
synchronized (this.onlineAccounts) {
safeOnlineAccounts = new ArrayList<>(this.onlineAccounts);
}
Message messageV1 = new GetOnlineAccountsMessage(safeOnlineAccounts);
Message messageV2 = new GetOnlineAccountsV2Message(safeOnlineAccounts);
Network.getInstance().broadcast(peer ->
peer.getPeersVersion() >= ONLINE_ACCOUNTS_V2_PEER_VERSION ? messageV2 : messageV1
);
}
}
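// With ONLINE_ACCOUNTS_TASKS_INTERVAL = 10s and ONLINE_ACCOUNTS_BROADCAST_INTERVAL = 60s, the
// modulo check above holds for roughly one task run in six, so GET_ONLINE_ACCOUNTS(_V2) requests
// go out to peers about once per minute rather than on every 10-second pass.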
private void sendOurOnlineAccountsInfo() {
final Long now = NTP.getTime();
if (now == null) {
return;
}
List<MintingAccountData> mintingAccounts;
try (final Repository repository = RepositoryManager.getRepository()) {
mintingAccounts = repository.getAccountRepository().getMintingAccounts();
// We have no accounts, but don't reset timestamp
if (mintingAccounts.isEmpty())
return;
// Only reward-share accounts allowed
Iterator<MintingAccountData> iterator = mintingAccounts.iterator();
int i = 0;
while (iterator.hasNext()) {
MintingAccountData mintingAccountData = iterator.next();
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(mintingAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't even exist - probably not a good sign
iterator.remove();
continue;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
iterator.remove();
continue;
}
if (++i > 2) {
iterator.remove();
continue;
}
}
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue trying to fetch minting accounts: %s", e.getMessage()));
return;
}
// 'current' timestamp
final long onlineAccountsTimestamp = toOnlineAccountTimestamp(now);
boolean hasInfoChanged = false;
byte[] timestampBytes = Longs.toByteArray(onlineAccountsTimestamp);
List<OnlineAccountData> ourOnlineAccounts = new ArrayList<>();
MINTING_ACCOUNTS:
for (MintingAccountData mintingAccountData : mintingAccounts) {
PrivateKeyAccount mintingAccount = new PrivateKeyAccount(null, mintingAccountData.getPrivateKey());
byte[] signature = mintingAccount.sign(timestampBytes);
byte[] publicKey = mintingAccount.getPublicKey();
// Our account is online
OnlineAccountData ourOnlineAccountData = new OnlineAccountData(onlineAccountsTimestamp, signature, publicKey);
synchronized (this.onlineAccounts) {
Iterator<OnlineAccountData> iterator = this.onlineAccounts.iterator();
while (iterator.hasNext()) {
OnlineAccountData existingOnlineAccountData = iterator.next();
if (Arrays.equals(existingOnlineAccountData.getPublicKey(), ourOnlineAccountData.getPublicKey())) {
// If our online account is already present, with same timestamp, then move on to next mintingAccount
if (existingOnlineAccountData.getTimestamp() == onlineAccountsTimestamp)
continue MINTING_ACCOUNTS;
// If our online account is already present, but with older timestamp, then remove it
iterator.remove();
break;
}
}
this.onlineAccounts.add(ourOnlineAccountData);
}
LOGGER.trace(() -> String.format("Added our online account %s with timestamp %d", mintingAccount.getAddress(), onlineAccountsTimestamp));
ourOnlineAccounts.add(ourOnlineAccountData);
hasInfoChanged = true;
}
if (!hasInfoChanged)
return;
Message messageV1 = new OnlineAccountsMessage(ourOnlineAccounts);
Message messageV2 = new OnlineAccountsV2Message(ourOnlineAccounts);
Network.getInstance().broadcast(peer ->
peer.getPeersVersion() >= ONLINE_ACCOUNTS_V2_PEER_VERSION ? messageV2 : messageV1
);
LOGGER.trace(() -> String.format("Broadcasted %d online account%s with timestamp %d", ourOnlineAccounts.size(), (ourOnlineAccounts.size() != 1 ? "s" : ""), onlineAccountsTimestamp));
}
public static long toOnlineAccountTimestamp(long timestamp) {
return (timestamp / ONLINE_TIMESTAMP_MODULUS) * ONLINE_TIMESTAMP_MODULUS;
}
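// Timestamps are floored to the start of the current ONLINE_TIMESTAMP_MODULUS window: for example,
// with the 5-minute (300000 ms) modulus, toOnlineAccountTimestamp(1000000L) == 900000L, so every
// node signing within the same window produces the same canonical timestamp.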
/** Returns list of online accounts with timestamp recent enough to be considered currently online. */
public List<OnlineAccountData> getOnlineAccounts() {
final long onlineTimestamp = toOnlineAccountTimestamp(NTP.getTime());
synchronized (this.onlineAccounts) {
return this.onlineAccounts.stream().filter(account -> account.getTimestamp() == onlineTimestamp).collect(Collectors.toList());
}
}
/** Returns cached, unmodifiable list of latest block's online accounts. */
public List<OnlineAccountData> getLatestBlocksOnlineAccounts() {
synchronized (this.latestBlocksOnlineAccounts) {
return this.latestBlocksOnlineAccounts.peekFirst();
}
}
/** Caches list of latest block's online accounts. Typically called by Block.process() */
public void pushLatestBlocksOnlineAccounts(List<OnlineAccountData> latestBlocksOnlineAccounts) {
synchronized (this.latestBlocksOnlineAccounts) {
if (this.latestBlocksOnlineAccounts.size() == MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS)
this.latestBlocksOnlineAccounts.pollLast();
this.latestBlocksOnlineAccounts.addFirst(latestBlocksOnlineAccounts == null
? Collections.emptyList()
: Collections.unmodifiableList(latestBlocksOnlineAccounts));
}
}
/** Reverts list of latest block's online accounts. Typically called by Block.orphan() */
public void popLatestBlocksOnlineAccounts() {
synchronized (this.latestBlocksOnlineAccounts) {
this.latestBlocksOnlineAccounts.pollFirst();
}
}
// Network handlers
public void onNetworkGetOnlineAccountsMessage(Peer peer, Message message) {
GetOnlineAccountsMessage getOnlineAccountsMessage = (GetOnlineAccountsMessage) message;
List<OnlineAccountData> excludeAccounts = getOnlineAccountsMessage.getOnlineAccounts();
// Send online accounts info, excluding entries with matching timestamp & public key from excludeAccounts
List<OnlineAccountData> accountsToSend;
synchronized (this.onlineAccounts) {
accountsToSend = new ArrayList<>(this.onlineAccounts);
}
Iterator<OnlineAccountData> iterator = accountsToSend.iterator();
SEND_ITERATOR:
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
for (int i = 0; i < excludeAccounts.size(); ++i) {
OnlineAccountData excludeAccountData = excludeAccounts.get(i);
if (onlineAccountData.getTimestamp() == excludeAccountData.getTimestamp() && Arrays.equals(onlineAccountData.getPublicKey(), excludeAccountData.getPublicKey())) {
iterator.remove();
continue SEND_ITERATOR;
}
}
}
Message onlineAccountsMessage = new OnlineAccountsMessage(accountsToSend);
peer.sendMessage(onlineAccountsMessage);
LOGGER.trace(() -> String.format("Sent %d of our %d online accounts to %s", accountsToSend.size(), this.onlineAccounts.size(), peer));
}
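// The labelled loop above is roughly equivalent to:
//   accountsToSend.removeIf(a -> excludeAccounts.stream().anyMatch(x ->
//           x.getTimestamp() == a.getTimestamp() && Arrays.equals(x.getPublicKey(), a.getPublicKey())));
// i.e. entries the requesting peer already reported (same timestamp and public key) are not re-sent.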
public void onNetworkOnlineAccountsMessage(Peer peer, Message message) {
OnlineAccountsMessage onlineAccountsMessage = (OnlineAccountsMessage) message;
List<OnlineAccountData> peersOnlineAccounts = onlineAccountsMessage.getOnlineAccounts();
LOGGER.trace(() -> String.format("Received %d online accounts from %s", peersOnlineAccounts.size(), peer));
try (final Repository repository = RepositoryManager.getRepository()) {
for (OnlineAccountData onlineAccountData : peersOnlineAccounts)
this.verifyAndAddAccount(repository, onlineAccountData);
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while verifying online accounts from peer %s", peer), e);
}
}
public void onNetworkGetOnlineAccountsV2Message(Peer peer, Message message) {
GetOnlineAccountsV2Message getOnlineAccountsMessage = (GetOnlineAccountsV2Message) message;
List<OnlineAccountData> excludeAccounts = getOnlineAccountsMessage.getOnlineAccounts();
// Send online accounts info, excluding entries with matching timestamp & public key from excludeAccounts
List<OnlineAccountData> accountsToSend;
synchronized (this.onlineAccounts) {
accountsToSend = new ArrayList<>(this.onlineAccounts);
}
Iterator<OnlineAccountData> iterator = accountsToSend.iterator();
SEND_ITERATOR:
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
for (int i = 0; i < excludeAccounts.size(); ++i) {
OnlineAccountData excludeAccountData = excludeAccounts.get(i);
if (onlineAccountData.getTimestamp() == excludeAccountData.getTimestamp() && Arrays.equals(onlineAccountData.getPublicKey(), excludeAccountData.getPublicKey())) {
iterator.remove();
continue SEND_ITERATOR;
}
}
}
Message onlineAccountsMessage = new OnlineAccountsV2Message(accountsToSend);
peer.sendMessage(onlineAccountsMessage);
LOGGER.trace(() -> String.format("Sent %d of our %d online accounts to %s", accountsToSend.size(), this.onlineAccounts.size(), peer));
}
public void onNetworkOnlineAccountsV2Message(Peer peer, Message message) {
OnlineAccountsV2Message onlineAccountsMessage = (OnlineAccountsV2Message) message;
List<OnlineAccountData> peersOnlineAccounts = onlineAccountsMessage.getOnlineAccounts();
LOGGER.debug(String.format("Received %d online accounts from %s", peersOnlineAccounts.size(), peer));
int importCount = 0;
// Add any online accounts to the queue that aren't already present
for (OnlineAccountData onlineAccountData : peersOnlineAccounts) {
// Do we already know about this online account data?
if (onlineAccounts.contains(onlineAccountData)) {
continue;
}
// Is it already in the import queue?
if (onlineAccountsImportQueue.contains(onlineAccountData)) {
continue;
}
onlineAccountsImportQueue.add(onlineAccountData);
importCount++;
}
LOGGER.debug(String.format("Added %d online accounts to queue", importCount));
}
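// Unlike the V1 handler above, which verifies each account against the repository inline, the V2
// handler only de-duplicates and queues the data; the heavier signature and reward-share checks run
// later on this manager's own thread via processOnlineAccountsImportQueue().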
}


@@ -95,7 +95,7 @@ public class Synchronizer extends Thread {
private static Synchronizer instance;
public enum SynchronizationResult {
OK, NOTHING_TO_DO, GENESIS_ONLY, NO_COMMON_BLOCK, TOO_DIVERGENT, NO_REPLY, INFERIOR_CHAIN, INVALID_DATA, NO_BLOCKCHAIN_LOCK, REPOSITORY_ISSUE, SHUTTING_DOWN;
OK, NOTHING_TO_DO, GENESIS_ONLY, NO_COMMON_BLOCK, TOO_DIVERGENT, NO_REPLY, INFERIOR_CHAIN, INVALID_DATA, NO_BLOCKCHAIN_LOCK, REPOSITORY_ISSUE, SHUTTING_DOWN, CHAIN_TIP_TOO_OLD;
}
public static class NewChainTipEvent implements Event {
@@ -173,6 +173,12 @@ public class Synchronizer extends Thread {
public Integer getSyncPercent() {
synchronized (this.syncLock) {
// Report as 100% synced if the latest block is within the last 30 mins
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
if (Controller.getInstance().isUpToDate(minLatestBlockTimestamp)) {
return 100;
}
return this.isSynchronizing ? this.syncPercent : null;
}
}
@@ -195,7 +201,8 @@ public class Synchronizer extends Thread {
if (this.isSynchronizing)
return true;
List<Peer> peers = Network.getInstance().getHandshakedPeers();
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
// Disregard peers that have "misbehaved" recently
peers.removeIf(Controller.hasMisbehaved);
@@ -211,7 +218,8 @@ public class Synchronizer extends Thread {
checkRecoveryModeForPeers(peers);
if (recoveryMode) {
peers = Network.getInstance().getHandshakedPeers();
// Needs a mutable copy of the unmodifiableList
peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
peers.removeIf(Controller.hasOnlyGenesisBlock);
peers.removeIf(Controller.hasMisbehaved);
peers.removeIf(Controller.hasOldVersion);
@@ -238,6 +246,12 @@ public class Synchronizer extends Thread {
// We may have added more inferior chain tips when comparing peers, so remove any peers that are currently on those chains
peers.removeIf(Controller.hasInferiorChainTip);
// Remove any peers that are no longer on a recent block since the last check
// Except for times when we're in recovery mode, in which case we need to keep them
if (!recoveryMode) {
peers.removeIf(Controller.hasNoRecentBlock);
}
final int peersRemoved = peersBeforeComparison - peers.size();
if (peersRemoved > 0 && peers.size() > 0)
LOGGER.debug(String.format("Ignoring %d peers on inferior chains. Peers remaining: %d", peersRemoved, peers.size()));
@@ -316,6 +330,7 @@ public class Synchronizer extends Thread {
case NO_REPLY:
case NO_BLOCKCHAIN_LOCK:
case REPOSITORY_ISSUE:
case CHAIN_TIP_TOO_OLD:
// These are minor failure results so fine to try again
LOGGER.debug(() -> String.format("Failed to synchronize with peer %s (%s)", peer, syncResult.name()));
break;
@@ -370,7 +385,7 @@ public class Synchronizer extends Thread {
}
private boolean checkRecoveryModeForPeers(List<Peer> qualifiedPeers) {
List<Peer> handshakedPeers = Network.getInstance().getHandshakedPeers();
List<Peer> handshakedPeers = Network.getInstance().getImmutableHandshakedPeers();
if (handshakedPeers.size() > 0) {
// There is at least one handshaked peer
@@ -555,7 +570,7 @@ public class Synchronizer extends Thread {
// If our latest block is very old, it's best that we don't try and determine the best peers to sync to.
// This is because it can involve very large chain comparisons, which is too intensive.
// In reality, most forking problems occur near the chain tips, so we will reserve this functionality for those situations.
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null)
return peers;
@@ -711,6 +726,7 @@ public class Synchronizer extends Thread {
LOGGER.debug(String.format("Listing peers with common block %.8s...", Base58.encode(commonBlockSummary.getSignature())));
for (Peer peer : peersSharingCommonBlock) {
final int peerHeight = peer.getChainTipData().getLastHeight();
final Long peerLastBlockTimestamp = peer.getChainTipData().getLastBlockTimestamp();
final int peerAdditionalBlocksAfterCommonBlock = peerHeight - commonBlockSummary.getHeight();
final CommonBlockData peerCommonBlockData = peer.getCommonBlockData();
@@ -721,6 +737,14 @@ public class Synchronizer extends Thread {
continue;
}
// If peer is out of date (since our last check), we should exclude it from this round
minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (peerLastBlockTimestamp == null || peerLastBlockTimestamp < minLatestBlockTimestamp) {
LOGGER.debug(String.format("Peer %s is out of date - removing it from this round", peer));
peers.remove(peer);
continue;
}
final List<BlockSummaryData> peerBlockSummariesAfterCommonBlock = peerCommonBlockData.getBlockSummariesAfterCommonBlock();
populateBlockSummariesMinterLevels(repository, peerBlockSummariesAfterCommonBlock);
@@ -1283,6 +1307,16 @@ public class Synchronizer extends Thread {
return SynchronizationResult.INVALID_DATA;
}
// Final check to make sure the peer isn't out of date (except for when we're in recovery mode)
if (!recoveryMode && peer.getChainTipData() != null) {
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
final Long peerLastBlockTimestamp = peer.getChainTipData().getLastBlockTimestamp();
if (peerLastBlockTimestamp == null || peerLastBlockTimestamp < minLatestBlockTimestamp) {
LOGGER.info(String.format("Peer %s is out of date, so abandoning sync attempt", peer));
return SynchronizationResult.CHAIN_TIP_TOO_OLD;
}
}
byte[] nextPeerSignature = peerBlockSignatures.get(0);
int nextHeight = height + 1;


@@ -0,0 +1,354 @@
package org.qortal.controller;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.data.transaction.TransactionData;
import org.qortal.network.Peer;
import org.qortal.network.message.GetTransactionMessage;
import org.qortal.network.message.Message;
import org.qortal.network.message.TransactionMessage;
import org.qortal.network.message.TransactionSignaturesMessage;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.transaction.Transaction;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import java.util.*;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
public class TransactionImporter extends Thread {
private static final Logger LOGGER = LogManager.getLogger(TransactionImporter.class);
private static TransactionImporter instance;
private volatile boolean isStopping = false;
private static final int MAX_INCOMING_TRANSACTIONS = 5000;
/** Minimum time before considering an invalid unconfirmed transaction as "stale" */
public static final long INVALID_TRANSACTION_STALE_TIMEOUT = 30 * 60 * 1000L; // ms
/** Minimum frequency to re-request stale unconfirmed transactions from peers, to recheck validity */
public static final long INVALID_TRANSACTION_RECHECK_INTERVAL = 60 * 60 * 1000L; // ms
/** Minimum frequency to re-request expired unconfirmed transactions from peers, to recheck validity
* This mainly exists to stop expired transactions from bloating the list */
public static final long EXPIRED_TRANSACTION_RECHECK_INTERVAL = 10 * 60 * 1000L; // ms
/** Map of incoming transactions that are in the import queue. Key is transaction data, value is whether signature has been validated. */
private final Map<TransactionData, Boolean> incomingTransactions = Collections.synchronizedMap(new HashMap<>());
/** Map of recent invalid unconfirmed transactions. Key is base58 transaction signature, value is do-not-request expiry timestamp. */
private final Map<String, Long> invalidUnconfirmedTransactions = Collections.synchronizedMap(new HashMap<>());
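// The Boolean value records whether a queued transaction's signature has already been verified, so
// the relatively expensive signature check survives rounds where the blockchain lock cannot be
// taken and is not repeated: entries start as FALSE when queued and are flipped to TRUE after the
// signature-validation pass in processIncomingTransactionsQueue().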
public static synchronized TransactionImporter getInstance() {
if (instance == null) {
instance = new TransactionImporter();
}
return instance;
}
@Override
public void run() {
try {
while (!Controller.isStopping()) {
Thread.sleep(1000L);
// Process incoming transactions queue
processIncomingTransactionsQueue();
// Clean up invalid incoming transactions list
cleanupInvalidTransactionsList(NTP.getTime());
}
} catch (InterruptedException e) {
// Fall through to exit thread
}
}
public void shutdown() {
isStopping = true;
this.interrupt();
}
// Incoming transactions queue
private boolean incomingTransactionQueueContains(byte[] signature) {
synchronized (incomingTransactions) {
return incomingTransactions.keySet().stream().anyMatch(t -> Arrays.equals(t.getSignature(), signature));
}
}
private void removeIncomingTransaction(byte[] signature) {
incomingTransactions.keySet().removeIf(t -> Arrays.equals(t.getSignature(), signature));
}
private void processIncomingTransactionsQueue() {
if (this.incomingTransactions.isEmpty()) {
// Nothing to do?
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
// Take a snapshot of incomingTransactions, so we don't need to lock it while processing
Map<TransactionData, Boolean> incomingTransactionsCopy = Map.copyOf(this.incomingTransactions);
int unvalidatedCount = Collections.frequency(incomingTransactionsCopy.values(), Boolean.FALSE);
int validatedCount = 0;
if (unvalidatedCount > 0) {
LOGGER.debug("Validating signatures in incoming transactions queue (size {})...", unvalidatedCount);
}
List<Transaction> sigValidTransactions = new ArrayList<>();
// Signature validation round - does not require blockchain lock
for (Map.Entry<TransactionData, Boolean> transactionEntry : incomingTransactionsCopy.entrySet()) {
// Quick exit?
if (isStopping) {
return;
}
TransactionData transactionData = transactionEntry.getKey();
Transaction transaction = Transaction.fromData(repository, transactionData);
// Only validate signature if we haven't already done so
Boolean isSigValid = transactionEntry.getValue();
if (!Boolean.TRUE.equals(isSigValid)) {
if (!transaction.isSignatureValid()) {
String signature58 = Base58.encode(transactionData.getSignature());
LOGGER.trace("Ignoring {} transaction {} with invalid signature", transactionData.getType().name(), signature58);
removeIncomingTransaction(transactionData.getSignature());
// Also add to the invalidUnconfirmedTransactions map
Long now = NTP.getTime();
if (now != null) {
Long expiry = now + INVALID_TRANSACTION_RECHECK_INTERVAL;
LOGGER.trace("Adding stale invalid transaction {} to invalidUnconfirmedTransactions...", signature58);
// Add to invalidUnconfirmedTransactions so that we don't keep requesting it
invalidUnconfirmedTransactions.put(signature58, expiry);
}
continue;
}
else {
// Count the number that were validated in this round, for logging purposes
validatedCount++;
}
// Mark signature as valid if transaction still exists in import queue
incomingTransactions.computeIfPresent(transactionData, (k, v) -> Boolean.TRUE);
} else {
LOGGER.trace(() -> String.format("Transaction %s known to have valid signature", Base58.encode(transactionData.getSignature())));
}
// Signature valid - add to shortlist
sigValidTransactions.add(transaction);
}
if (unvalidatedCount > 0) {
LOGGER.debug("Finished validating signatures in incoming transactions queue (valid this round: {}, total pending import: {})...", validatedCount, sigValidTransactions.size());
}
if (sigValidTransactions.isEmpty()) {
// Don't bother locking if there are no new transactions to process
return;
}
if (Synchronizer.getInstance().isSyncRequested() || Synchronizer.getInstance().isSynchronizing()) {
// Prioritize syncing, and don't attempt to lock
// Signature validity is retained in the incomingTransactions map, to avoid the above work being wasted
return;
}
try {
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
if (!blockchainLock.tryLock(2, TimeUnit.SECONDS)) {
// Signature validity is retained in the incomingTransactions map, to avoid the above work being wasted
LOGGER.debug("Too busy to process incoming transactions queue");
return;
}
} catch (InterruptedException e) {
LOGGER.debug("Interrupted when trying to acquire blockchain lock");
return;
}
LOGGER.debug("Processing incoming transactions queue (size {})...", sigValidTransactions.size());
// Import transactions with valid signatures
try {
for (int i = 0; i < sigValidTransactions.size(); ++i) {
if (isStopping) {
return;
}
if (Synchronizer.getInstance().isSyncRequestPending()) {
LOGGER.debug("Breaking out of transaction processing with {} remaining, because a sync request is pending", sigValidTransactions.size() - i);
return;
}
Transaction transaction = sigValidTransactions.get(i);
TransactionData transactionData = transaction.getTransactionData();
Transaction.ValidationResult validationResult = transaction.importAsUnconfirmed();
switch (validationResult) {
case TRANSACTION_ALREADY_EXISTS: {
LOGGER.trace(() -> String.format("Ignoring existing transaction %s", Base58.encode(transactionData.getSignature())));
break;
}
case NO_BLOCKCHAIN_LOCK: {
// Is this even possible considering we acquired blockchain lock above?
LOGGER.trace(() -> String.format("Couldn't lock blockchain to import unconfirmed transaction %s", Base58.encode(transactionData.getSignature())));
break;
}
case OK: {
LOGGER.debug(() -> String.format("Imported %s transaction %s", transactionData.getType().name(), Base58.encode(transactionData.getSignature())));
break;
}
// All other invalid cases:
default: {
final String signature58 = Base58.encode(transactionData.getSignature());
LOGGER.trace(() -> String.format("Ignoring invalid (%s) %s transaction %s", validationResult.name(), transactionData.getType().name(), signature58));
Long now = NTP.getTime();
if (now != null && now - transactionData.getTimestamp() > INVALID_TRANSACTION_STALE_TIMEOUT) {
Long expiryLength = INVALID_TRANSACTION_RECHECK_INTERVAL;
if (validationResult == Transaction.ValidationResult.TIMESTAMP_TOO_OLD) {
// Use shorter recheck interval for expired transactions
expiryLength = EXPIRED_TRANSACTION_RECHECK_INTERVAL;
}
Long expiry = now + expiryLength;
LOGGER.trace("Adding stale invalid transaction {} to invalidUnconfirmedTransactions...", signature58);
// Invalid, unconfirmed transaction has become stale - add to invalidUnconfirmedTransactions so that we don't keep requesting it
invalidUnconfirmedTransactions.put(signature58, expiry);
}
}
}
// Transaction has been processed, even if only to reject it
removeIncomingTransaction(transactionData.getSignature());
}
} finally {
LOGGER.debug("Finished processing incoming transactions queue");
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
blockchainLock.unlock();
}
} catch (DataException e) {
LOGGER.error("Repository issue while processing incoming transactions", e);
}
}
private void cleanupInvalidTransactionsList(Long now) {
if (now == null) {
return;
}
// Periodically remove invalid unconfirmed transactions from the list, so that they can be fetched again
invalidUnconfirmedTransactions.entrySet().removeIf(entry -> entry.getValue() == null || entry.getValue() < now);
}
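// Example of the recheck timings used above: an unconfirmed transaction that is still invalid more
// than 30 minutes after its own timestamp (INVALID_TRANSACTION_STALE_TIMEOUT) is suppressed for
// 60 minutes (INVALID_TRANSACTION_RECHECK_INTERVAL), or for only 10 minutes when it failed with
// TIMESTAMP_TOO_OLD (EXPIRED_TRANSACTION_RECHECK_INTERVAL); once its expiry passes, the entry is
// dropped here and the signature may be requested from peers again.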
// Network handlers
public void onNetworkTransactionMessage(Peer peer, Message message) {
TransactionMessage transactionMessage = (TransactionMessage) message;
TransactionData transactionData = transactionMessage.getTransactionData();
if (this.incomingTransactions.size() < MAX_INCOMING_TRANSACTIONS) {
synchronized (this.incomingTransactions) {
if (!incomingTransactionQueueContains(transactionData.getSignature())) {
this.incomingTransactions.put(transactionData, Boolean.FALSE);
}
}
}
}
public void onNetworkGetTransactionMessage(Peer peer, Message message) {
GetTransactionMessage getTransactionMessage = (GetTransactionMessage) message;
byte[] signature = getTransactionMessage.getSignature();
try (final Repository repository = RepositoryManager.getRepository()) {
TransactionData transactionData = repository.getTransactionRepository().fromSignature(signature);
if (transactionData == null) {
LOGGER.debug(() -> String.format("Ignoring GET_TRANSACTION request from peer %s for unknown transaction %s", peer, Base58.encode(signature)));
// Send no response at all???
return;
}
Message transactionMessage = new TransactionMessage(transactionData);
transactionMessage.setId(message.getId());
if (!peer.sendMessage(transactionMessage))
peer.disconnect("failed to send transaction");
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while send transaction %s to peer %s", Base58.encode(signature), peer), e);
}
}
public void onNetworkGetUnconfirmedTransactionsMessage(Peer peer, Message message) {
try (final Repository repository = RepositoryManager.getRepository()) {
List<byte[]> signatures = Collections.emptyList();
// If we're NOT up-to-date then don't send out unconfirmed transactions
// as it's possible they are already included in a later block that we don't have.
if (Controller.getInstance().isUpToDate())
signatures = repository.getTransactionRepository().getUnconfirmedTransactionSignatures();
Message transactionSignaturesMessage = new TransactionSignaturesMessage(signatures);
if (!peer.sendMessage(transactionSignaturesMessage))
peer.disconnect("failed to send unconfirmed transaction signatures");
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while sending unconfirmed transaction signatures to peer %s", peer), e);
}
}
public void onNetworkTransactionSignaturesMessage(Peer peer, Message message) {
TransactionSignaturesMessage transactionSignaturesMessage = (TransactionSignaturesMessage) message;
List<byte[]> signatures = transactionSignaturesMessage.getSignatures();
try (final Repository repository = RepositoryManager.getRepository()) {
for (byte[] signature : signatures) {
String signature58 = Base58.encode(signature);
if (invalidUnconfirmedTransactions.containsKey(signature58)) {
// Previously invalid transaction - don't keep requesting it
// It will be periodically removed from invalidUnconfirmedTransactions to allow for rechecks
continue;
}
// Ignore if this transaction is in the queue
if (incomingTransactionQueueContains(signature)) {
LOGGER.trace(() -> String.format("Ignoring existing queued transaction %s from peer %s", Base58.encode(signature), peer));
continue;
}
// Do we have it already? (Before requesting transaction data itself)
if (repository.getTransactionRepository().exists(signature)) {
LOGGER.trace(() -> String.format("Ignoring existing transaction %s from peer %s", Base58.encode(signature), peer));
continue;
}
// Check isInterrupted() here and exit fast
if (Thread.currentThread().isInterrupted())
return;
// Fetch actual transaction data from peer
Message getTransactionMessage = new GetTransactionMessage(signature);
if (!peer.sendMessage(getTransactionMessage)) {
peer.disconnect("failed to request transaction");
return;
}
}
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while processing unconfirmed transactions from peer %s", peer), e);
}
}
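// Taken together, these handlers form the fetch round-trip: a peer's TRANSACTION_SIGNATURES
// broadcast causes unknown, non-suppressed signatures to be requested via GET_TRANSACTION, and the
// TRANSACTION replies land in onNetworkTransactionMessage(), joining the import queue processed
// above.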
}


@@ -180,9 +180,6 @@ public class ArbitraryDataCleanupManager extends Thread {
arbitraryTransactionData.getName(), Base58.encode(signature)));
ArbitraryTransactionUtils.deleteCompleteFileAndChunks(arbitraryTransactionData);
// We should also remove peers for this transaction from the lookup table to save space
this.removePeersHostingTransactionData(repository, arbitraryTransactionData);
continue;
}
@@ -437,16 +434,6 @@ public class ArbitraryDataCleanupManager extends Thread {
return false;
}
private void removePeersHostingTransactionData(Repository repository, ArbitraryTransactionData transactionData) {
byte[] signature = transactionData.getSignature();
try {
repository.getArbitraryRepository().deleteArbitraryPeersWithSignature(signature);
repository.saveChanges();
} catch (DataException e) {
LOGGER.debug("Unable to delete peers from lookup table for signature: {}", Base58.encode(signature));
}
}
private void cleanupTempDirectory(String folder, long now, long minAge) {
String baseDir = Settings.getInstance().getTempDataPath();
Path tempDir = Paths.get(baseDir, folder);

View File

@@ -5,6 +5,8 @@ import org.apache.logging.log4j.Logger;
import org.qortal.arbitrary.ArbitraryDataFile;
import org.qortal.arbitrary.ArbitraryDataFileChunk;
import org.qortal.controller.Controller;
import org.qortal.data.arbitrary.ArbitraryDirectConnectionInfo;
import org.qortal.data.arbitrary.ArbitraryFileListResponseInfo;
import org.qortal.data.arbitrary.ArbitraryRelayInfo;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
@@ -23,6 +25,8 @@ import org.qortal.utils.Triple;
import java.util.*;
import static org.qortal.controller.arbitrary.ArbitraryDataFileManager.MAX_FILE_HASH_RESPONSES;
public class ArbitraryDataFileListManager {
private static final Logger LOGGER = LogManager.getLogger(ArbitraryDataFileListManager.class);
@@ -264,7 +268,7 @@ public class ArbitraryDataFileListManager {
}
this.addToSignatureRequests(signature58, true, false);
List<Peer> handshakedPeers = Network.getInstance().getHandshakedPeers();
List<Peer> handshakedPeers = Network.getInstance().getImmutableHandshakedPeers();
List<byte[]> missingHashes = null;
// Find hashes that we are missing
@@ -279,8 +283,11 @@ public class ArbitraryDataFileListManager {
LOGGER.debug(String.format("Sending data file list request for signature %s with %d hashes to %d peers...", signature58, hashCount, handshakedPeers.size()));
// FUTURE: send our address as requestingPeer once enough peers have switched to the new protocol
String requestingPeer = null; // Network.getInstance().getOurExternalIpAddressAndPort();
// Build request
Message getArbitraryDataFileListMessage = new GetArbitraryDataFileListMessage(signature, missingHashes, now, 0);
Message getArbitraryDataFileListMessage = new GetArbitraryDataFileListMessage(signature, missingHashes, now, 0, requestingPeer);
// Save our request into requests map
Triple<String, Peer, Long> requestEntry = new Triple<>(signature58, null, NTP.getTime());
@@ -338,7 +345,7 @@ public class ArbitraryDataFileListManager {
// This could be optimized in the future
long timestamp = now - 60000L;
List<byte[]> hashes = null;
Message getArbitraryDataFileListMessage = new GetArbitraryDataFileListMessage(signature, hashes, timestamp, 0);
Message getArbitraryDataFileListMessage = new GetArbitraryDataFileListMessage(signature, hashes, timestamp, 0, null);
// Save our request into requests map
Triple<String, Peer, Long> requestEntry = new Triple<>(signature58, null, NTP.getTime());
@@ -431,7 +438,6 @@ public class ArbitraryDataFileListManager {
}
ArbitraryTransactionData arbitraryTransactionData = null;
ArbitraryDataFileManager arbitraryDataFileManager = ArbitraryDataFileManager.getInstance();
// Check transaction exists and hashes are correct
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -458,16 +464,28 @@ public class ArbitraryDataFileListManager {
// }
if (!isRelayRequest || !Settings.getInstance().isRelayModeEnabled()) {
// Keep track of the hashes this peer reports to have access to
Long now = NTP.getTime();
for (byte[] hash : hashes) {
String hash58 = Base58.encode(hash);
String sig58 = Base58.encode(signature);
ArbitraryDataFileManager.getInstance().arbitraryDataFileHashResponses.put(hash58, new Triple<>(peer, sig58, now));
if (ArbitraryDataFileManager.getInstance().arbitraryDataFileHashResponses.size() < MAX_FILE_HASH_RESPONSES) {
// Keep track of the hashes this peer reports to have access to
for (byte[] hash : hashes) {
String hash58 = Base58.encode(hash);
// Treat null request hops as 100 so that they can still be sorted (and are pushed to the end of the list)
int requestHops = arbitraryDataFileListMessage.getRequestHops() != null ? arbitraryDataFileListMessage.getRequestHops() : 100;
ArbitraryFileListResponseInfo responseInfo = new ArbitraryFileListResponseInfo(hash58, signature58,
peer, now, arbitraryDataFileListMessage.getRequestTime(), requestHops);
ArbitraryDataFileManager.getInstance().arbitraryDataFileHashResponses.add(responseInfo);
}
}
// Go and fetch the actual data, since this isn't a relay request
arbitraryDataFileManager.fetchArbitraryDataFiles(repository, peer, signature, arbitraryTransactionData, hashes);
// Keep track of the source peer, for direct connections
if (arbitraryDataFileListMessage.getPeerAddress() != null) {
ArbitraryDataFileManager.getInstance().addDirectConnectionInfoIfUnique(
new ArbitraryDirectConnectionInfo(signature, arbitraryDataFileListMessage.getPeerAddress(), hashes, now));
}
}
} catch (DataException e) {
@@ -523,7 +541,6 @@ public class ArbitraryDataFileListManager {
GetArbitraryDataFileListMessage getArbitraryDataFileListMessage = (GetArbitraryDataFileListMessage) message;
byte[] signature = getArbitraryDataFileListMessage.getSignature();
String signature58 = Base58.encode(signature);
List<byte[]> requestedHashes = getArbitraryDataFileListMessage.getHashes();
Long now = NTP.getTime();
Triple<String, Peer, Long> newEntry = new Triple<>(signature58, peer, now);
@@ -533,7 +550,16 @@ public class ArbitraryDataFileListManager {
return;
}
LOGGER.debug("Received hash list request from peer {} for signature {}", peer, signature58);
List<byte[]> requestedHashes = getArbitraryDataFileListMessage.getHashes();
int hashCount = requestedHashes != null ? requestedHashes.size() : 0;
String requestingPeer = getArbitraryDataFileListMessage.getRequestingPeer();
if (requestingPeer != null) {
LOGGER.debug("Received hash list request with {} hashes from peer {} (requesting peer {}) for signature {}", hashCount, peer, requestingPeer, signature58);
}
else {
LOGGER.debug("Received hash list request with {} hashes from peer {} for signature {}", hashCount, peer, signature58);
}
List<byte[]> hashes = new ArrayList<>();
ArbitraryTransactionData transactionData = null;
@@ -612,7 +638,7 @@ public class ArbitraryDataFileListManager {
arbitraryDataFileListRequests.put(message.getId(), newEntry);
}
String ourAddress = Network.getInstance().getOurExternalIpAddress();
String ourAddress = Network.getInstance().getOurExternalIpAddressAndPort();
ArbitraryDataFileListMessage arbitraryDataFileListMessage = new ArbitraryDataFileListMessage(signature,
hashes, NTP.getTime(), 0, ourAddress, true);
arbitraryDataFileListMessage.setId(message.getId());

View File

@@ -4,6 +4,8 @@ import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.arbitrary.ArbitraryDataFile;
import org.qortal.controller.Controller;
import org.qortal.data.arbitrary.ArbitraryDirectConnectionInfo;
import org.qortal.data.arbitrary.ArbitraryFileListResponseInfo;
import org.qortal.data.arbitrary.ArbitraryRelayInfo;
import org.qortal.data.network.ArbitraryPeerData;
import org.qortal.data.network.PeerData;
@@ -18,7 +20,6 @@ import org.qortal.settings.Settings;
import org.qortal.utils.ArbitraryTransactionUtils;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import org.qortal.utils.Triple;
import java.security.SecureRandom;
import java.util.*;
@@ -45,11 +46,17 @@ public class ArbitraryDataFileManager extends Thread {
public List<ArbitraryRelayInfo> arbitraryRelayMap = Collections.synchronizedList(new ArrayList<>());
/**
* Map to keep track of any arbitrary data file hash responses
* Key: string - the hash encoded in base58
* Value: Triple<respondingPeer, signature58, timeResponded>
* List to keep track of any arbitrary data file hash responses
*/
public Map<String, Triple<Peer, String, Long>> arbitraryDataFileHashResponses = Collections.synchronizedMap(new HashMap<>());
public final List<ArbitraryFileListResponseInfo> arbitraryDataFileHashResponses = Collections.synchronizedList(new ArrayList<>());
/**
* List to keep track of peers potentially available for direct connections, based on recent requests
*/
private List<ArbitraryDirectConnectionInfo> directConnectionInfo = Collections.synchronizedList(new ArrayList<>());
public static int MAX_FILE_HASH_RESPONSES = 1000;
private ArbitraryDataFileManager() {
@@ -98,7 +105,10 @@ public class ArbitraryDataFileManager extends Thread {
final long relayMinimumTimestamp = now - ArbitraryDataManager.getInstance().ARBITRARY_RELAY_TIMEOUT;
arbitraryRelayMap.removeIf(entry -> entry == null || entry.getTimestamp() == null || entry.getTimestamp() < relayMinimumTimestamp);
arbitraryDataFileHashResponses.entrySet().removeIf(entry -> entry.getValue().getC() == null || entry.getValue().getC() < relayMinimumTimestamp);
arbitraryDataFileHashResponses.removeIf(entry -> entry.getTimestamp() < relayMinimumTimestamp);
final long directConnectionInfoMinimumTimestamp = now - ArbitraryDataManager.getInstance().ARBITRARY_DIRECT_CONNECTION_INFO_TIMEOUT;
directConnectionInfo.removeIf(entry -> entry.getTimestamp() < directConnectionInfoMinimumTimestamp);
}
@@ -158,16 +168,6 @@ public class ArbitraryDataFileManager extends Thread {
}
if (receivedAtLeastOneFile) {
// Update our lookup table to indicate that this peer holds data for this signature
String peerAddress = peer.getPeerData().getAddress().toString();
ArbitraryPeerData arbitraryPeerData = new ArbitraryPeerData(signature, peer);
repository.discardChanges();
if (arbitraryPeerData.isPeerAddressValid()) {
LOGGER.debug("Adding arbitrary peer: {} for signature {}", peerAddress, Base58.encode(signature));
repository.getArbitraryRepository().save(arbitraryPeerData);
repository.saveChanges();
}
// Invalidate the hosted transactions cache as we are now hosting something new
ArbitraryDataStorageManager.getInstance().invalidateHostedTransactionsCache();
@@ -177,16 +177,7 @@ public class ArbitraryDataFileManager extends Thread {
// We have all the chunks for this transaction, so we should invalidate the transaction's name's
// data cache so that it is rebuilt the next time we serve it
ArbitraryDataManager.getInstance().invalidateCache(arbitraryTransactionData);
// We may also need to broadcast to the network that we are now hosting files for this transaction,
// but only if these files are in accordance with our storage policy
if (ArbitraryDataStorageManager.getInstance().canStoreData(arbitraryTransactionData)) {
// Use a null peer address to indicate our own
Message newArbitrarySignatureMessage = new ArbitrarySignaturesMessage(null, 0, Arrays.asList(signature));
Network.getInstance().broadcast(broadcastPeer -> newArbitrarySignatureMessage);
}
}
}
return receivedAtLeastOneFile;
@@ -296,89 +287,135 @@ public class ArbitraryDataFileManager extends Thread {
// Fetch data directly from peers
private List<ArbitraryDirectConnectionInfo> getDirectConnectionInfoForSignature(byte[] signature) {
synchronized (directConnectionInfo) {
return directConnectionInfo.stream().filter(i -> Arrays.equals(i.getSignature(), signature)).collect(Collectors.toList());
}
}
/**
* Add an ArbitraryDirectConnectionInfo item, but only if one with this peer-signature combination
* doesn't already exist.
* @param connectionInfo - the direct connection info to add
*/
public void addDirectConnectionInfoIfUnique(ArbitraryDirectConnectionInfo connectionInfo) {
boolean peerAlreadyExists;
synchronized (directConnectionInfo) {
peerAlreadyExists = directConnectionInfo.stream()
.anyMatch(i -> Arrays.equals(i.getSignature(), connectionInfo.getSignature())
&& Objects.equals(i.getPeerAddress(), connectionInfo.getPeerAddress()));
}
if (!peerAlreadyExists) {
directConnectionInfo.add(connectionInfo);
}
}
private void removeDirectConnectionInfo(ArbitraryDirectConnectionInfo connectionInfo) {
this.directConnectionInfo.remove(connectionInfo);
}
public boolean fetchDataFilesFromPeersForSignature(byte[] signature) {
String signature58 = Base58.encode(signature);
ArbitraryDataFileListManager.getInstance().addToSignatureRequests(signature58, false, true);
// Firstly fetch peers that claim to be hosting files for this signature
try (final Repository repository = RepositoryManager.getRepository()) {
boolean success = false;
List<ArbitraryPeerData> peers = repository.getArbitraryRepository().getArbitraryPeerDataForSignature(signature);
if (peers == null || peers.isEmpty()) {
LOGGER.debug("No peers found for signature {}", signature58);
return false;
}
LOGGER.debug("Attempting a direct peer connection for signature {}...", signature58);
// Peers found, so pick a random one and request data from it
int index = new SecureRandom().nextInt(peers.size());
ArbitraryPeerData arbitraryPeerData = peers.get(index);
String peerAddressString = arbitraryPeerData.getPeerAddress();
boolean success = Network.getInstance().requestDataFromPeer(peerAddressString, signature);
// Parse the peer address to find the host and port
String host = null;
int port = -1;
String[] parts = peerAddressString.split(":");
if (parts.length > 1) {
host = parts[0];
port = Integer.parseInt(parts[1]);
}
// If unsuccessful, and using a non-standard port, try a second connection with the default listen port,
// since almost all nodes use that. This is a workaround to account for any ephemeral ports that may
// have made it into the dataset.
if (!success) {
if (host != null && port > 0) {
int defaultPort = Settings.getInstance().getDefaultListenPort();
if (port != defaultPort) {
String newPeerAddressString = String.format("%s:%d", host, defaultPort);
success = Network.getInstance().requestDataFromPeer(newPeerAddressString, signature);
}
try {
while (!success) {
if (isStopping) {
return false;
}
}
Thread.sleep(500L);
// If _still_ unsuccessful, try matching the peer's IP address with some known peers, and then connect
// to each of those in turn until one succeeds.
if (!success) {
if (host != null) {
final String finalHost = host;
List<PeerData> knownPeers = Network.getInstance().getAllKnownPeers().stream()
.filter(knownPeerData -> knownPeerData.getAddress().getHost().equals(finalHost))
.collect(Collectors.toList());
// Loop through each match and attempt a connection
for (PeerData matchingPeer : knownPeers) {
String matchingPeerAddress = matchingPeer.getAddress().toString();
success = Network.getInstance().requestDataFromPeer(matchingPeerAddress, signature);
if (success) {
// Successfully connected, so stop making connections
break;
// Firstly fetch peers that claim to be hosting files for this signature
List<ArbitraryDirectConnectionInfo> connectionInfoList = getDirectConnectionInfoForSignature(signature);
if (connectionInfoList == null || connectionInfoList.isEmpty()) {
LOGGER.debug("No remaining direct connection peers found for signature {}", signature58);
return false;
}
LOGGER.debug("Attempting a direct peer connection for signature {}...", signature58);
// Peers found, so pick one with the highest number of chunks
Comparator<ArbitraryDirectConnectionInfo> highestChunkCountFirstComparator =
Comparator.comparingInt(ArbitraryDirectConnectionInfo::getHashCount).reversed();
ArbitraryDirectConnectionInfo directConnectionInfo = connectionInfoList.stream()
.sorted(highestChunkCountFirstComparator).findFirst().orElse(null);
if (directConnectionInfo == null) {
return false;
}
// Remove from the list so that a different peer is tried next time
removeDirectConnectionInfo(directConnectionInfo);
String peerAddressString = directConnectionInfo.getPeerAddress();
// Parse the peer address to find the host and port
String host = null;
int port = -1;
String[] parts = peerAddressString.split(":");
if (parts.length > 1) {
host = parts[0];
port = Integer.parseInt(parts[1]);
} else {
// Assume no port included
host = peerAddressString;
// Use default listen port
port = Settings.getInstance().getDefaultListenPort();
}
String peerAddressStringWithPort = String.format("%s:%d", host, port);
success = Network.getInstance().requestDataFromPeer(peerAddressStringWithPort, signature);
int defaultPort = Settings.getInstance().getDefaultListenPort();
// If unsuccessful, and using a non-standard port, try a second connection with the default listen port,
// since almost all nodes use that. This is a workaround to account for any ephemeral ports that may
// have made it into the dataset.
if (!success) {
if (host != null && port > 0) {
if (port != defaultPort) {
String newPeerAddressString = String.format("%s:%d", host, defaultPort);
success = Network.getInstance().requestDataFromPeer(newPeerAddressString, signature);
}
}
}
}
// Keep track of the success or failure
arbitraryPeerData.markAsAttempted();
if (success) {
arbitraryPeerData.markAsRetrieved();
arbitraryPeerData.incrementSuccesses();
}
else {
arbitraryPeerData.incrementFailures();
}
repository.discardChanges();
repository.getArbitraryRepository().save(arbitraryPeerData);
repository.saveChanges();
// If _still_ unsuccessful, try matching the peer's IP address with some known peers, and then connect
// to each of those in turn until one succeeds.
if (!success) {
if (host != null) {
final String finalHost = host;
List<PeerData> knownPeers = Network.getInstance().getAllKnownPeers().stream()
.filter(knownPeerData -> knownPeerData.getAddress().getHost().equals(finalHost))
.collect(Collectors.toList());
// Loop through each match and attempt a connection
for (PeerData matchingPeer : knownPeers) {
String matchingPeerAddress = matchingPeer.getAddress().toString();
int matchingPeerPort = matchingPeer.getAddress().getPort();
// Make sure that it's not a port we've already tried
if (matchingPeerPort != port && matchingPeerPort != defaultPort) {
success = Network.getInstance().requestDataFromPeer(matchingPeerAddress, signature);
if (success) {
// Successfully connected, so stop making connections
break;
}
}
}
}
}
return success;
if (success) {
// We were able to connect with a peer, so track the request
ArbitraryDataFileListManager.getInstance().addToSignatureRequests(signature58, false, true);
}
} catch (DataException e) {
LOGGER.debug("Unable to fetch peer list from repository");
}
} catch (InterruptedException e) {
// Do nothing
}
return false;
return success;
}
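The direct-connection code above tries the reported address first, then the default listen port, then any known peers sharing the same host. A rough sketch of that three-stage strategy as a single hypothetical helper (attemptDirectConnections is not a real method; requestDataFromPeer, getAllKnownPeers and getDefaultListenPort are the calls used above):

// Sketch of the three-stage connection strategy: exact address, then default listen port
// (in case the reported port was ephemeral), then other known peers on the same host.
private boolean attemptDirectConnections(String host, int port, byte[] signature) {
    Network network = Network.getInstance();
    int defaultPort = Settings.getInstance().getDefaultListenPort();

    // 1. Try the address exactly as reported
    if (network.requestDataFromPeer(String.format("%s:%d", host, port), signature))
        return true;

    // 2. Retry on the default listen port, since almost all nodes listen there
    if (port != defaultPort
            && network.requestDataFromPeer(String.format("%s:%d", host, defaultPort), signature))
        return true;

    // 3. Fall back to known peers with the same host, skipping ports already tried
    for (PeerData knownPeer : network.getAllKnownPeers()) {
        if (!knownPeer.getAddress().getHost().equals(host))
            continue;
        int knownPort = knownPeer.getAddress().getPort();
        if (knownPort == port || knownPort == defaultPort)
            continue;
        if (network.requestDataFromPeer(knownPeer.getAddress().toString(), signature))
            return true;
    }

    return false;
}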

View File

@@ -3,6 +3,7 @@ package org.qortal.controller.arbitrary;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.arbitrary.ArbitraryFileListResponseInfo;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.network.Peer;
import org.qortal.repository.DataException;
@@ -11,11 +12,9 @@ import org.qortal.repository.RepositoryManager;
import org.qortal.utils.ArbitraryTransactionUtils;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import org.qortal.utils.Triple;
import java.util.Arrays;
import java.util.Iterator;
import java.util.Map;
import java.util.*;
import java.util.stream.Collectors;
public class ArbitraryDataFileRequestThread implements Runnable {
@@ -51,45 +50,47 @@ public class ArbitraryDataFileRequestThread implements Runnable {
boolean shouldProcess = false;
synchronized (arbitraryDataFileManager.arbitraryDataFileHashResponses) {
Iterator iterator = arbitraryDataFileManager.arbitraryDataFileHashResponses.entrySet().iterator();
while (iterator.hasNext()) {
if (Controller.isStopping()) {
return;
}
if (!arbitraryDataFileManager.arbitraryDataFileHashResponses.isEmpty()) {
Map.Entry entry = (Map.Entry) iterator.next();
if (entry == null || entry.getKey() == null || entry.getValue() == null) {
// Sort by lowest number of node hops first
Comparator<ArbitraryFileListResponseInfo> lowestHopsFirstComparator =
Comparator.comparingInt(ArbitraryFileListResponseInfo::getRequestHops);
arbitraryDataFileManager.arbitraryDataFileHashResponses.sort(lowestHopsFirstComparator);
Iterator iterator = arbitraryDataFileManager.arbitraryDataFileHashResponses.iterator();
while (iterator.hasNext()) {
if (Controller.isStopping()) {
return;
}
ArbitraryFileListResponseInfo responseInfo = (ArbitraryFileListResponseInfo) iterator.next();
if (responseInfo == null) {
iterator.remove();
continue;
}
hash58 = responseInfo.getHash58();
peer = responseInfo.getPeer();
signature58 = responseInfo.getSignature58();
Long timestamp = responseInfo.getTimestamp();
if (now - timestamp >= ArbitraryDataManager.ARBITRARY_RELAY_TIMEOUT || signature58 == null || peer == null) {
// Ignore - to be deleted
iterator.remove();
continue;
}
// Skip if already requesting, but don't remove, as we might want to retry later
if (arbitraryDataFileManager.arbitraryDataFileRequests.containsKey(hash58)) {
// Already requesting - leave this attempt for later
continue;
}
// We want to process this file
shouldProcess = true;
iterator.remove();
continue;
break;
}
hash58 = (String) entry.getKey();
Triple<Peer, String, Long> value = (Triple<Peer, String, Long>) entry.getValue();
if (value == null) {
iterator.remove();
continue;
}
peer = value.getA();
signature58 = value.getB();
Long timestamp = value.getC();
if (now - timestamp >= ArbitraryDataManager.ARBITRARY_RELAY_TIMEOUT || signature58 == null || peer == null) {
// Ignore - to be deleted
iterator.remove();
continue;
}
// Skip if already requesting, but don't remove, as we might want to retry later
if (arbitraryDataFileManager.arbitraryDataFileRequests.containsKey(hash58)) {
// Already requesting - leave this attempt for later
continue;
}
// We want to process this file
shouldProcess = true;
iterator.remove();
break;
}
}
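The request thread above walks the response list and picks the freshest entry with the fewest relay hops that is not already being fetched. A condensed sketch of that selection step, assuming the ArbitraryFileListResponseInfo accessors shown in the diff (the method name selectNextResponse is hypothetical):

// Prune stale or incomplete entries, prefer the fewest relay hops, and skip hashes that
// already have an in-flight request without removing them (so they can be retried later).
private ArbitraryFileListResponseInfo selectNextResponse(List<ArbitraryFileListResponseInfo> responses, long now) {
    synchronized (responses) {
        responses.removeIf(r -> r == null
                || r.getPeer() == null
                || r.getSignature58() == null
                || now - r.getTimestamp() >= ArbitraryDataManager.ARBITRARY_RELAY_TIMEOUT);

        responses.sort(Comparator.comparingInt(ArbitraryFileListResponseInfo::getRequestHops));

        Iterator<ArbitraryFileListResponseInfo> iterator = responses.iterator();
        while (iterator.hasNext()) {
            ArbitraryFileListResponseInfo responseInfo = iterator.next();
            if (ArbitraryDataFileManager.getInstance().arbitraryDataFileRequests.containsKey(responseInfo.getHash58()))
                continue; // already requesting - leave this entry for a later pass
            iterator.remove();
            return responseInfo;
        }
    }
    return null;
}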

View File

@@ -1,8 +1,10 @@
package org.qortal.controller.arbitrary;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.*;
import java.util.stream.Collectors;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -12,13 +14,11 @@ import org.qortal.arbitrary.ArbitraryDataResource;
import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
import org.qortal.arbitrary.misc.Service;
import org.qortal.controller.Controller;
import org.qortal.data.network.ArbitraryPeerData;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.list.ResourceListManager;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.network.message.*;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
@@ -44,9 +44,18 @@ public class ArbitraryDataManager extends Thread {
/** Maximum time to hold information about an in-progress relay */
public static final long ARBITRARY_RELAY_TIMEOUT = 60 * 1000L; // ms
/** Maximum time to hold direct peer connection information */
public static final long ARBITRARY_DIRECT_CONNECTION_INFO_TIMEOUT = 2 * 60 * 1000L; // ms
/** Maximum number of hops that an arbitrary signatures request is allowed to make */
private static int ARBITRARY_SIGNATURES_REQUEST_MAX_HOPS = 3;
private long lastMetadataFetchTime = 0L;
private static long METADATA_FETCH_INTERVAL = 5 * 60 * 1000L;
private long lastDataFetchTime = 0L;
private static long DATA_FETCH_INTERVAL = 1 * 60 * 1000L;
private static ArbitraryDataManager instance;
private final Object peerDataLock = new Object();
@@ -80,6 +89,9 @@ public class ArbitraryDataManager extends Thread {
public void run() {
Thread.currentThread().setName("Arbitrary Data Manager");
// Create data directory in case it doesn't exist yet
this.createDataDirectory();
try {
// Wait for node to finish starting up and making connections
Thread.sleep(2 * 60 * 1000L);
@@ -93,7 +105,13 @@ public class ArbitraryDataManager extends Thread {
continue;
}
List<Peer> peers = Network.getInstance().getHandshakedPeers();
Long now = NTP.getTime();
if (now == null) {
continue;
}
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
// Disregard peers that have "misbehaved" recently
peers.removeIf(Controller.hasMisbehaved);
@@ -104,8 +122,19 @@ public class ArbitraryDataManager extends Thread {
}
// Fetch metadata
// Disabled for now. TODO: re-enable later.
// this.fetchAllMetadata();
if (NTP.getTime() - lastMetadataFetchTime >= METADATA_FETCH_INTERVAL) {
this.fetchAllMetadata();
lastMetadataFetchTime = NTP.getTime();
}
// Check if we need to fetch any data
if (NTP.getTime() - lastDataFetchTime < DATA_FETCH_INTERVAL) {
// Nothing to do yet
continue;
}
// In case the data directory has been deleted...
this.createDataDirectory();
// Fetch data according to storage policy
switch (Settings.getInstance().getStoragePolicy()) {
@@ -116,6 +145,7 @@ public class ArbitraryDataManager extends Thread {
case ALL:
this.processAll();
break;
case NONE:
case VIEWED:
@@ -124,6 +154,8 @@ public class ArbitraryDataManager extends Thread {
Thread.sleep(60000);
break;
}
lastDataFetchTime = NTP.getTime();
}
} catch (InterruptedException e) {
// Fall-through to exit thread...
@@ -135,7 +167,7 @@ public class ArbitraryDataManager extends Thread {
this.interrupt();
}
private void processNames() {
private void processNames() throws InterruptedException {
// Fetch latest list of followed names
List<String> followedNames = ResourceListManager.getInstance().getStringsInList("followedNames");
if (followedNames == null || followedNames.isEmpty()) {
@@ -148,11 +180,11 @@ public class ArbitraryDataManager extends Thread {
}
}
private void processAll() {
private void processAll() throws InterruptedException {
this.fetchAndProcessTransactions(null);
}
private void fetchAndProcessTransactions(String name) {
private void fetchAndProcessTransactions(String name) throws InterruptedException {
ArbitraryDataStorageManager storageManager = ArbitraryDataStorageManager.getInstance();
// Paginate queries when fetching arbitrary transactions
@@ -160,6 +192,7 @@ public class ArbitraryDataManager extends Thread {
int offset = 0;
while (!isStopping) {
Thread.sleep(1000L);
// Any arbitrary transactions we want to fetch data for?
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -174,6 +207,7 @@ public class ArbitraryDataManager extends Thread {
// Loop through signatures and remove ones we don't need to process
Iterator iterator = signatures.iterator();
while (iterator.hasNext()) {
Thread.sleep(25L); // Reduce CPU usage
byte[] signature = (byte[]) iterator.next();
ArbitraryTransaction arbitraryTransaction = fetchTransaction(repository, signature);
@@ -230,7 +264,7 @@ public class ArbitraryDataManager extends Thread {
}
}
private void fetchAllMetadata() {
private void fetchAllMetadata() throws InterruptedException {
ArbitraryDataStorageManager storageManager = ArbitraryDataStorageManager.getInstance();
// Paginate queries when fetching arbitrary transactions
@@ -238,6 +272,7 @@ public class ArbitraryDataManager extends Thread {
int offset = 0;
while (!isStopping) {
Thread.sleep(1000L);
// Any arbitrary transactions we want to fetch data for?
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -252,6 +287,7 @@ public class ArbitraryDataManager extends Thread {
// Loop through signatures and remove ones we don't need to process
Iterator iterator = signatures.iterator();
while (iterator.hasNext()) {
Thread.sleep(25L); // Reduce CPU usage
byte[] signature = (byte[]) iterator.next();
ArbitraryTransaction arbitraryTransaction = fetchTransaction(repository, signature);
@@ -335,6 +371,12 @@ public class ArbitraryDataManager extends Thread {
ArbitraryTransactionData arbitraryTransactionData = (ArbitraryTransactionData) arbitraryTransaction.getTransactionData();
byte[] signature = arbitraryTransactionData.getSignature();
byte[] metadataHash = arbitraryTransactionData.getMetadataHash();
if (metadataHash == null) {
// This transaction doesn't have metadata associated with it, so return true to indicate that we have everything
return true;
}
ArbitraryDataFile metadataFile = ArbitraryDataFile.fromHash(metadataHash, signature);
return metadataFile.exists();
@@ -476,95 +518,19 @@ public class ArbitraryDataManager extends Thread {
}
}
// Broadcast list of hosted signatures
public void broadcastHostedSignatureList() {
try (final Repository repository = RepositoryManager.getRepository()) {
List<ArbitraryTransactionData> hostedTransactions = ArbitraryDataStorageManager.getInstance().listAllHostedTransactions(repository, null, null);
List<byte[]> hostedSignatures = hostedTransactions.stream().map(ArbitraryTransactionData::getSignature).collect(Collectors.toList());
if (!hostedSignatures.isEmpty()) {
// Broadcast the list, using null to represent our peer address
LOGGER.info("Broadcasting list of hosted signatures...");
Message arbitrarySignatureMessage = new ArbitrarySignaturesMessage(null, 0, hostedSignatures);
Network.getInstance().broadcast(broadcastPeer -> arbitrarySignatureMessage);
}
} catch (DataException e) {
LOGGER.error("Repository issue when fetching arbitrary transaction data for broadcast", e);
private boolean createDataDirectory() {
// Create the data directory if it doesn't exist
String dataPath = Settings.getInstance().getDataPath();
Path dataDirectory = Paths.get(dataPath);
try {
Files.createDirectories(dataDirectory);
} catch (IOException e) {
LOGGER.error("Unable to create data directory");
return false;
}
return true;
}
// Handle incoming arbitrary signatures messages
public void onNetworkArbitrarySignaturesMessage(Peer peer, Message message) {
// Don't process if QDN is disabled
if (!Settings.getInstance().isQdnEnabled()) {
return;
}
LOGGER.debug("Received arbitrary signature list from peer {}", peer);
ArbitrarySignaturesMessage arbitrarySignaturesMessage = (ArbitrarySignaturesMessage) message;
List<byte[]> signatures = arbitrarySignaturesMessage.getSignatures();
String peerAddress = peer.getPeerData().getAddress().toString();
if (arbitrarySignaturesMessage.getPeerAddress() != null && !arbitrarySignaturesMessage.getPeerAddress().isEmpty()) {
// This message is about a different peer than the one that sent it
peerAddress = arbitrarySignaturesMessage.getPeerAddress();
}
boolean containsNewEntry = false;
// Synchronize peer data lookups to make this process thread safe. Otherwise we could broadcast
// the same data multiple times, due to more than one thread processing the same message from different peers
synchronized (this.peerDataLock) {
try (final Repository repository = RepositoryManager.getRepository()) {
for (byte[] signature : signatures) {
// Check if a record already exists for this hash/host combination
// The port is not checked here - only the host/ip - in order to avoid duplicates
// from filling up the db due to dynamic/ephemeral ports
ArbitraryPeerData existingEntry = repository.getArbitraryRepository()
.getArbitraryPeerDataForSignatureAndHost(signature, peer.getPeerData().getAddress().getHost());
if (existingEntry == null) {
// We haven't got a record of this mapping yet, so add it
ArbitraryPeerData arbitraryPeerData = new ArbitraryPeerData(signature, peerAddress);
repository.discardChanges();
if (arbitraryPeerData.isPeerAddressValid()) {
LOGGER.debug("Adding arbitrary peer: {} for signature {}", peerAddress, Base58.encode(signature));
repository.getArbitraryRepository().save(arbitraryPeerData);
repository.saveChanges();
// Remember that this data is new, so that it can be rebroadcast later
containsNewEntry = true;
}
}
}
// If at least one signature in this batch was new to us, we should rebroadcast the message to the
// network in case some peers haven't received it yet
if (containsNewEntry) {
int requestHops = arbitrarySignaturesMessage.getRequestHops();
arbitrarySignaturesMessage.setRequestHops(++requestHops);
if (requestHops < ARBITRARY_SIGNATURES_REQUEST_MAX_HOPS) {
LOGGER.debug("Rebroadcasting arbitrary signature list for peer {}. requestHops: {}", peerAddress, requestHops);
Network.getInstance().broadcast(broadcastPeer -> broadcastPeer == peer ? null : arbitrarySignaturesMessage);
}
} else {
// Don't rebroadcast as otherwise we could get into a loop
}
// If anything needed saving, it would already have called saveChanges() above
repository.discardChanges();
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while processing arbitrary transaction signature list from peer %s", peer), e);
}
}
}
public int getPowDifficulty() {
return this.powDifficulty;
}
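The main loop above now throttles metadata and data fetches with simple timestamp intervals instead of running them on every pass. A minimal sketch of that pattern, reusing the field names from the diff (maybeFetchMetadata is a hypothetical wrapper; NTP.getTime() is assumed to return the network-adjusted time in milliseconds, or null before the clock is synced):

// Interval throttling: only run the expensive fetch when enough time has passed since the last run.
private long lastMetadataFetchTime = 0L;
private static final long METADATA_FETCH_INTERVAL = 5 * 60 * 1000L; // ms

private void maybeFetchMetadata() throws InterruptedException {
    Long now = NTP.getTime();
    if (now == null)
        return; // clock not synced yet

    if (now - lastMetadataFetchTime < METADATA_FETCH_INTERVAL)
        return; // not due yet

    this.fetchAllMetadata();
    lastMetadataFetchTime = now;
}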

View File

@@ -139,7 +139,7 @@ public class ArbitraryMetadataManager {
}
this.addToSignatureRequests(signature58, true, false);
List<Peer> handshakedPeers = Network.getInstance().getHandshakedPeers();
List<Peer> handshakedPeers = Network.getInstance().getImmutableHandshakedPeers();
LOGGER.debug(String.format("Sending metadata request for signature %s to %d peers...", signature58, handshakedPeers.size()));
// Build request
@@ -183,7 +183,9 @@ public class ArbitraryMetadataManager {
try {
ArbitraryDataFile metadataFile = ArbitraryDataFile.fromHash(metadataHash, signature);
return metadataFile.getBytes();
if (metadataFile.exists()) {
return metadataFile.getBytes();
}
} catch (DataException e) {
// Do nothing
}

View File

@@ -29,6 +29,15 @@ public class NamesDatabaseIntegrityCheck {
private List<TransactionData> nameTransactions = new ArrayList<>();
public int rebuildName(String name, Repository repository) {
return this.rebuildName(name, repository, null);
}
public int rebuildName(String name, Repository repository, List<String> referenceNames) {
// "referenceNames" tracks the linked names that have already been rebuilt, to prevent circular dependencies
if (referenceNames == null) {
referenceNames = new ArrayList<>();
}
int modificationCount = 0;
try {
List<TransactionData> transactions = this.fetchAllTransactionsInvolvingName(name, repository);
@@ -56,7 +65,14 @@ public class NamesDatabaseIntegrityCheck {
if (Objects.equals(updateNameTransactionData.getNewName(), name) &&
!Objects.equals(updateNameTransactionData.getName(), updateNameTransactionData.getNewName())) {
// This renames an existing name, so we need to process that instead
this.rebuildName(updateNameTransactionData.getName(), repository);
if (!referenceNames.contains(name)) {
referenceNames.add(name);
this.rebuildName(updateNameTransactionData.getName(), repository, referenceNames);
}
else {
// We've already processed this name so there's nothing more to do
}
}
else {
Name nameObj = new Name(repository, name);
@@ -193,7 +209,12 @@ public class NamesDatabaseIntegrityCheck {
newName = registeredName;
}
NameData newNameData = repository.getNameRepository().fromName(newName);
if (!Objects.equals(creator.getAddress(), newNameData.getOwner())) {
if (newNameData == null) {
LOGGER.info("Error: registered name {} has no new name data. This is likely due to account {} " +
"being renamed another time, which is a scenario that is not yet checked automatically.",
updateNameTransactionData.getNewName(), creator.getAddress());
}
else if (!Objects.equals(creator.getAddress(), newNameData.getOwner())) {
LOGGER.info("Error: registered name {} is owned by {}, but it should be {}",
updateNameTransactionData.getNewName(), newNameData.getOwner(), creator.getAddress());
integrityCheckFailed = true;
@@ -313,6 +334,10 @@ public class NamesDatabaseIntegrityCheck {
transactions.add(transactionData);
}
}
// Sort by lowest timestamp first
transactions.sort(Comparator.comparingLong(TransactionData::getTimestamp));
return transactions;
}
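The new referenceNames parameter acts as a visited list that breaks rename cycles (for example a name changed to something else and later changed back). A stripped-down sketch of just that guard, using hypothetical names:

// Cycle guard: record every name already processed in this rename chain, so a chain that
// loops back on itself terminates instead of recursing forever.
private void rebuildNameChain(String name, List<String> visitedNames) {
    if (visitedNames == null)
        visitedNames = new ArrayList<>();

    if (visitedNames.contains(name)) {
        // Already rebuilt this name in the current chain - nothing more to do
        return;
    }
    visitedNames.add(name);

    // ... rebuild "name" here, then recurse into the name it was renamed from,
    // passing the same visitedNames list so the guard above can end the chain
}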

View File

@@ -0,0 +1,59 @@
package org.qortal.data.arbitrary;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
public class ArbitraryDirectConnectionInfo {
private final byte[] signature;
private final String peerAddress;
private final List<byte[]> hashes;
private final long timestamp;
public ArbitraryDirectConnectionInfo(byte[] signature, String peerAddress, List<byte[]> hashes, long timestamp) {
this.signature = signature;
this.peerAddress = peerAddress;
this.hashes = hashes;
this.timestamp = timestamp;
}
public byte[] getSignature() {
return this.signature;
}
public String getPeerAddress() {
return this.peerAddress;
}
public List<byte[]> getHashes() {
return this.hashes;
}
public long getTimestamp() {
return this.timestamp;
}
public int getHashCount() {
if (this.hashes == null) {
return 0;
}
return this.hashes.size();
}
@Override
public boolean equals(Object other) {
if (other == this)
return true;
if (!(other instanceof ArbitraryDirectConnectionInfo))
return false;
ArbitraryDirectConnectionInfo otherDirectConnectionInfo = (ArbitraryDirectConnectionInfo) other;
return Arrays.equals(this.signature, otherDirectConnectionInfo.getSignature())
&& Objects.equals(this.peerAddress, otherDirectConnectionInfo.getPeerAddress())
&& Objects.equals(this.hashes, otherDirectConnectionInfo.getHashes())
&& Objects.equals(this.timestamp, otherDirectConnectionInfo.getTimestamp());
}
}
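A short usage sketch of the class above, combining the dedup-on-add check from addDirectConnectionInfoIfUnique() with the highest-chunk-count-first selection used when choosing a peer to contact directly. The signature, hashes and peer address values below are illustrative placeholders:

// Record a reported peer once per signature/address pair, then prefer the entry
// advertising the most chunks when choosing who to connect to.
List<ArbitraryDirectConnectionInfo> infos = Collections.synchronizedList(new ArrayList<>());

ArbitraryDirectConnectionInfo candidate =
        new ArbitraryDirectConnectionInfo(signature, "203.0.113.10:12392", hashes, NTP.getTime());

boolean alreadyKnown = infos.stream()
        .anyMatch(i -> Arrays.equals(i.getSignature(), candidate.getSignature())
                && Objects.equals(i.getPeerAddress(), candidate.getPeerAddress()));
if (!alreadyKnown)
    infos.add(candidate);

ArbitraryDirectConnectionInfo best = infos.stream()
        .max(Comparator.comparingInt(ArbitraryDirectConnectionInfo::getHashCount))
        .orElse(null);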

View File

@@ -0,0 +1,11 @@
package org.qortal.data.arbitrary;
import org.qortal.network.Peer;
public class ArbitraryFileListResponseInfo extends ArbitraryRelayInfo {
public ArbitraryFileListResponseInfo(String hash58, String signature58, Peer peer, Long timestamp, Long requestTime, Integer requestHops) {
super(hash58, signature58, peer, timestamp, requestTime, requestHops);
}
}

View File

@@ -48,6 +48,7 @@ public class UpdateNameTransactionData extends TransactionData {
public void afterUnmarshal(Unmarshaller u, Object parent) {
this.creatorPublicKey = this.ownerPublicKey;
this.reducedNewName = this.newName != null ? Unicode.sanitize(this.newName) : null;
}
/** From repository */
@@ -62,7 +63,7 @@ public class UpdateNameTransactionData extends TransactionData {
this.nameReference = nameReference;
}
/** From network/API */
/** From network */
public UpdateNameTransactionData(BaseTransactionData baseTransactionData, String name, String newName, String newData) {
this(baseTransactionData, name, newName, newData, Unicode.sanitize(newName), null);
}

View File

@@ -100,7 +100,23 @@ public class Network {
private long nextDisconnectionCheck = 0L;
private final List<PeerData> allKnownPeers = new ArrayList<>();
private final List<Peer> connectedPeers = new ArrayList<>();
/**
* Maintain two lists for each subset of peers:
* - A synchronizedList, to be modified when peers are added/removed
* - An immutable List, which is rebuilt automatically to mirror the synchronized list, and is then served to consumers
* This allows for thread safety without having to synchronize every time a thread requests a peer list
*/
private final List<Peer> connectedPeers = Collections.synchronizedList(new ArrayList<>());
private List<Peer> immutableConnectedPeers = Collections.emptyList(); // always rebuilt from mutable, synced list above
private final List<Peer> handshakedPeers = Collections.synchronizedList(new ArrayList<>());
private List<Peer> immutableHandshakedPeers = Collections.emptyList(); // always rebuilt from mutable, synced list above
private final List<Peer> outboundHandshakedPeers = Collections.synchronizedList(new ArrayList<>());
private List<Peer> immutableOutboundHandshakedPeers = Collections.emptyList(); // always rebuilt from mutable, synced list above
private final List<PeerAddress> selfPeers = new ArrayList<>();
private final ExecuteProduceConsume networkEPC;
@@ -119,6 +135,7 @@ public class Network {
private List<String> ourExternalIpAddressHistory = new ArrayList<>();
private String ourExternalIpAddress = null;
private int ourExternalPort = Settings.getInstance().getListenPort();
// Constructors
@@ -236,10 +253,21 @@ public class Network {
}
}
public List<Peer> getConnectedPeers() {
synchronized (this.connectedPeers) {
return new ArrayList<>(this.connectedPeers);
}
public List<Peer> getImmutableConnectedPeers() {
return this.immutableConnectedPeers;
}
public void addConnectedPeer(Peer peer) {
this.connectedPeers.add(peer); // thread safe thanks to synchronized list
this.immutableConnectedPeers = List.copyOf(this.connectedPeers); // also thread safe thanks to synchronized collection's toArray() being fed to List.of(array)
}
public void removeConnectedPeer(Peer peer) {
// Firstly remove from handshaked peers
this.removeHandshakedPeer(peer);
this.connectedPeers.remove(peer); // thread safe thanks to synchronized list
this.immutableConnectedPeers = List.copyOf(this.connectedPeers); // also thread safe thanks to synchronized collection's toArray() being fed to List.of(array)
}
public List<PeerAddress> getSelfPeers() {
@@ -274,16 +302,14 @@ public class Network {
}
// Check if we're already connected to and handshaked with this peer
Peer connectedPeer = null;
synchronized (this.connectedPeers) {
connectedPeer = this.connectedPeers.stream()
Peer connectedPeer = this.getImmutableConnectedPeers().stream()
.filter(p -> p.getPeerData().getAddress().equals(peerAddress))
.findFirst()
.orElse(null);
}
boolean isConnected = (connectedPeer != null);
boolean isHandshaked = this.getHandshakedPeers().stream()
boolean isHandshaked = this.getImmutableHandshakedPeers().stream()
.anyMatch(p -> p.getPeerData().getAddress().equals(peerAddress));
if (isConnected && isHandshaked) {
@@ -327,35 +353,61 @@ public class Network {
/**
* Returns list of connected peers that have completed handshaking.
*/
public List<Peer> getHandshakedPeers() {
synchronized (this.connectedPeers) {
return this.connectedPeers.stream()
.filter(peer -> peer.getHandshakeStatus() == Handshake.COMPLETED)
.collect(Collectors.toList());
public List<Peer> getImmutableHandshakedPeers() {
return this.immutableHandshakedPeers;
}
public void addHandshakedPeer(Peer peer) {
this.handshakedPeers.add(peer); // thread safe thanks to synchronized list
this.immutableHandshakedPeers = List.copyOf(this.handshakedPeers); // also thread safe thanks to synchronized collection's toArray() being fed to List.of(array)
// Also add to outbound handshaked peers cache
if (peer.isOutbound()) {
this.addOutboundHandshakedPeer(peer);
}
}
public void removeHandshakedPeer(Peer peer) {
this.handshakedPeers.remove(peer); // thread safe thanks to synchronized list
this.immutableHandshakedPeers = List.copyOf(this.handshakedPeers); // also thread safe thanks to synchronized collection's toArray() being fed to List.of(array)
// Also remove from outbound handshaked peers cache
if (peer.isOutbound()) {
this.removeOutboundHandshakedPeer(peer);
}
}
/**
* Returns list of peers we connected to that have completed handshaking.
*/
public List<Peer> getOutboundHandshakedPeers() {
synchronized (this.connectedPeers) {
return this.connectedPeers.stream()
.filter(peer -> peer.isOutbound() && peer.getHandshakeStatus() == Handshake.COMPLETED)
.collect(Collectors.toList());
public List<Peer> getImmutableOutboundHandshakedPeers() {
return this.immutableOutboundHandshakedPeers;
}
public void addOutboundHandshakedPeer(Peer peer) {
if (!peer.isOutbound()) {
return;
}
this.outboundHandshakedPeers.add(peer); // thread safe thanks to synchronized list
this.immutableOutboundHandshakedPeers = List.copyOf(this.outboundHandshakedPeers); // also thread safe thanks to synchronized collection's toArray() being fed to List.of(array)
}
public void removeOutboundHandshakedPeer(Peer peer) {
if (!peer.isOutbound()) {
return;
}
this.outboundHandshakedPeers.remove(peer); // thread safe thanks to synchronized list
this.immutableOutboundHandshakedPeers = List.copyOf(this.outboundHandshakedPeers); // also thread safe thanks to synchronized collection's toArray() being fed to List.of(array)
}
/**
* Returns first peer that has completed handshaking and has matching public key.
*/
public Peer getHandshakedPeerWithPublicKey(byte[] publicKey) {
synchronized (this.connectedPeers) {
return this.connectedPeers.stream()
.filter(peer -> peer.getHandshakeStatus() == Handshake.COMPLETED
&& Arrays.equals(peer.getPeersPublicKey(), publicKey))
.findFirst().orElse(null);
}
return this.getImmutableConnectedPeers().stream()
.filter(peer -> peer.getHandshakeStatus() == Handshake.COMPLETED
&& Arrays.equals(peer.getPeersPublicKey(), publicKey))
.findFirst().orElse(null);
}
// Peer list filters
@@ -368,21 +420,15 @@ public class Network {
return this.selfPeers.stream().anyMatch(selfPeer -> selfPeer.equals(peerAddress));
};
/**
* Must be inside <tt>synchronized (this.connectedPeers) {...}</tt>
*/
private final Predicate<PeerData> isConnectedPeer = peerData -> {
PeerAddress peerAddress = peerData.getAddress();
return this.connectedPeers.stream().anyMatch(peer -> peer.getPeerData().getAddress().equals(peerAddress));
return this.getImmutableConnectedPeers().stream().anyMatch(peer -> peer.getPeerData().getAddress().equals(peerAddress));
};
/**
* Must be inside <tt>synchronized (this.connectedPeers) {...}</tt>
*/
private final Predicate<PeerData> isResolvedAsConnectedPeer = peerData -> {
try {
InetSocketAddress resolvedSocketAddress = peerData.getAddress().toSocketAddress();
return this.connectedPeers.stream()
return this.getImmutableConnectedPeers().stream()
.anyMatch(peer -> peer.getResolvedAddress().equals(resolvedSocketAddress));
} catch (UnknownHostException e) {
// Can't resolve - no point even trying to connect
@@ -448,7 +494,7 @@ public class Network {
}
private Task maybeProducePeerMessageTask() {
for (Peer peer : getConnectedPeers()) {
for (Peer peer : getImmutableConnectedPeers()) {
Task peerTask = peer.getMessageTask();
if (peerTask != null) {
return peerTask;
@@ -460,7 +506,7 @@ public class Network {
private Task maybeProducePeerPingTask(Long now) {
// Ask connected peers whether they need a ping
for (Peer peer : getHandshakedPeers()) {
for (Peer peer : getImmutableHandshakedPeers()) {
Task peerTask = peer.getPingTask(now);
if (peerTask != null) {
return peerTask;
@@ -488,7 +534,7 @@ public class Network {
return null;
}
if (getOutboundHandshakedPeers().size() >= minOutboundPeers) {
if (getImmutableOutboundHandshakedPeers().size() >= minOutboundPeers) {
return null;
}
@@ -641,19 +687,18 @@ public class Network {
return;
}
synchronized (this.connectedPeers) {
if (connectedPeers.size() >= maxPeers) {
// We have enough peers
LOGGER.debug("Connection discarded from peer {} because the server is full", address);
socketChannel.close();
return;
}
LOGGER.debug("Connection accepted from peer {}", address);
newPeer = new Peer(socketChannel, channelSelector);
this.connectedPeers.add(newPeer);
if (getImmutableConnectedPeers().size() >= maxPeers) {
// We have enough peers
LOGGER.debug("Connection discarded from peer {} because the server is full", address);
socketChannel.close();
return;
}
LOGGER.debug("Connection accepted from peer {}", address);
newPeer = new Peer(socketChannel, channelSelector);
this.addConnectedPeer(newPeer);
} catch (IOException e) {
if (socketChannel.isOpen()) {
try {
@@ -701,16 +746,14 @@ public class Network {
peers.removeIf(isSelfPeer);
}
synchronized (this.connectedPeers) {
// Don't consider already connected peers (simple address match)
peers.removeIf(isConnectedPeer);
// Don't consider already connected peers (simple address match)
peers.removeIf(isConnectedPeer);
// Don't consider already connected peers (resolved address match)
// XXX This might be too slow if we end up waiting a long time for hostnames to resolve via DNS
peers.removeIf(isResolvedAsConnectedPeer);
// Don't consider already connected peers (resolved address match)
// XXX This might be too slow if we end up waiting a long time for hostnames to resolve via DNS
peers.removeIf(isResolvedAsConnectedPeer);
this.checkLongestConnection(now);
}
this.checkLongestConnection(now);
// Any left?
if (peers.isEmpty()) {
@@ -748,21 +791,16 @@ public class Network {
return false;
}
synchronized (this.connectedPeers) {
this.connectedPeers.add(newPeer);
}
this.addConnectedPeer(newPeer);
this.onPeerReady(newPeer);
return true;
}
private Peer getPeerFromChannel(SocketChannel socketChannel) {
synchronized (this.connectedPeers) {
for (Peer peer : this.connectedPeers) {
if (peer.getSocketChannel() == socketChannel) {
return peer;
}
for (Peer peer : this.getImmutableConnectedPeers()) {
if (peer.getSocketChannel() == socketChannel) {
return peer;
}
}
@@ -775,7 +813,7 @@ public class Network {
}
// Find peers that have reached their maximum connection age, and disconnect them
List<Peer> peersToDisconnect = this.connectedPeers.stream()
List<Peer> peersToDisconnect = this.getImmutableConnectedPeers().stream()
.filter(peer -> !peer.isSyncInProgress())
.filter(peer -> peer.hasReachedMaxConnectionAge())
.collect(Collectors.toList());
@@ -826,9 +864,7 @@ public class Network {
LOGGER.debug("[{}] Failed to connect to peer {}", peer.getPeerConnectionId(), peer);
}
synchronized (this.connectedPeers) {
this.connectedPeers.remove(peer);
}
this.removeConnectedPeer(peer);
}
public void peerMisbehaved(Peer peer) {
@@ -989,6 +1025,9 @@ public class Network {
return;
}
// Add to handshaked peers cache
this.addHandshakedPeer(peer);
// Make a note that we've successfully completed handshake (and when)
peer.getPeerData().setLastConnected(NTP.getTime());
@@ -1128,6 +1167,7 @@ public class Network {
return;
}
String host = parts[0];
try {
InetAddress addr = InetAddress.getByName(host);
if (addr.isAnyLocalAddress() || addr.isSiteLocalAddress()) {
@@ -1138,6 +1178,9 @@ public class Network {
return;
}
// Keep track of the port
this.ourExternalPort = Integer.parseInt(parts[1]);
// Add to the list
this.ourExternalIpAddressHistory.add(host);
@@ -1191,8 +1234,6 @@ public class Network {
public void onExternalIpUpdate(String ipAddress) {
LOGGER.info("External IP address updated to {}", ipAddress);
//ArbitraryDataManager.getInstance().broadcastHostedSignatureList();
}
public String getOurExternalIpAddress() {
@@ -1200,6 +1241,14 @@ public class Network {
return this.ourExternalIpAddress;
}
public String getOurExternalIpAddressAndPort() {
String ipAddress = this.getOurExternalIpAddress();
if (ipAddress == null) {
return null;
}
return String.format("%s:%d", ipAddress, this.ourExternalPort);
}
// Peer-management calls
@@ -1241,7 +1290,7 @@ public class Network {
}
}
for (Peer peer : this.getConnectedPeers()) {
for (Peer peer : this.getImmutableConnectedPeers()) {
peer.disconnect("to be forgotten");
}
@@ -1253,7 +1302,7 @@ public class Network {
try {
InetSocketAddress knownAddress = peerAddress.toSocketAddress();
List<Peer> peers = this.getConnectedPeers();
List<Peer> peers = this.getImmutableConnectedPeers();
peers.removeIf(peer -> !Peer.addressEquals(knownAddress, peer.getResolvedAddress()));
for (Peer peer : peers) {
@@ -1273,7 +1322,8 @@ public class Network {
}
// Disconnect peers that are stuck during handshake
List<Peer> handshakePeers = this.getConnectedPeers();
// Needs a mutable copy of the unmodifiableList
List<Peer> handshakePeers = new ArrayList<>(this.getImmutableConnectedPeers());
// Disregard peers that have completed handshake or only connected recently
handshakePeers.removeIf(peer -> peer.getHandshakeStatus() == Handshake.COMPLETED
@@ -1315,9 +1365,7 @@ public class Network {
peers.removeIf(isNotOldPeer);
// Don't consider already connected peers (simple address match)
synchronized (this.connectedPeers) {
peers.removeIf(isConnectedPeer);
}
peers.removeIf(isConnectedPeer);
for (PeerData peerData : peers) {
LOGGER.debug("Deleting old peer {} from repository", peerData.getAddress().toString());
@@ -1452,7 +1500,7 @@ public class Network {
}
try {
broadcastExecutor.execute(new Broadcaster(this.getHandshakedPeers(), peerMessageBuilder));
broadcastExecutor.execute(new Broadcaster(this.getImmutableHandshakedPeers(), peerMessageBuilder));
} catch (RejectedExecutionException e) {
// Can't execute - probably because we're shutting down, so ignore
}
@@ -1490,7 +1538,7 @@ public class Network {
}
// Close all peer connections
for (Peer peer : this.getConnectedPeers()) {
for (Peer peer : this.getImmutableConnectedPeers()) {
peer.shutdown();
}
}
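Every peer-list change above follows the same pattern: mutate a synchronized list, then republish an immutable snapshot for readers. A generic sketch of that pattern, not tied to the Peer class (SnapshotList is an illustrative name, not a class in the codebase):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Writers mutate the synchronized list and refresh the snapshot; readers only ever see
// the immutable snapshot, so they never need to synchronize or defensively copy.
class SnapshotList<T> {
    private final List<T> mutable = Collections.synchronizedList(new ArrayList<>());
    private volatile List<T> snapshot = Collections.emptyList();

    void add(T item) {
        mutable.add(item);               // thread safe via the synchronized list
        snapshot = List.copyOf(mutable); // copy is built from the list's synchronized toArray()
    }

    void remove(T item) {
        mutable.remove(item);
        snapshot = List.copyOf(mutable);
    }

    List<T> get() {
        return snapshot;                 // cheap, lock-free read
    }
}

The trade-off is a full copy on every write, which is acceptable here because peer connects and disconnects are rare compared with the many threads that read the peer lists.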

View File

@@ -2,8 +2,11 @@ package org.qortal.network.message;
import com.google.common.primitives.Ints;
import com.google.common.primitives.Longs;
import org.qortal.data.network.PeerData;
import org.qortal.transform.TransformationException;
import org.qortal.transform.Transformer;
import org.qortal.transform.transaction.TransactionTransformer;
import org.qortal.utils.Serialization;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
@@ -19,23 +22,26 @@ public class GetArbitraryDataFileListMessage extends Message {
private static final int SIGNATURE_LENGTH = Transformer.SIGNATURE_LENGTH;
private static final int HASH_LENGTH = TransactionTransformer.SHA256_LENGTH;
private static final int MAX_PEER_ADDRESS_LENGTH = PeerData.MAX_PEER_ADDRESS_SIZE;
private final byte[] signature;
private List<byte[]> hashes;
private final long requestTime;
private int requestHops;
private String requestingPeer;
public GetArbitraryDataFileListMessage(byte[] signature, List<byte[]> hashes, long requestTime, int requestHops) {
this(-1, signature, hashes, requestTime, requestHops);
public GetArbitraryDataFileListMessage(byte[] signature, List<byte[]> hashes, long requestTime, int requestHops, String requestingPeer) {
this(-1, signature, hashes, requestTime, requestHops, requestingPeer);
}
private GetArbitraryDataFileListMessage(int id, byte[] signature, List<byte[]> hashes, long requestTime, int requestHops) {
private GetArbitraryDataFileListMessage(int id, byte[] signature, List<byte[]> hashes, long requestTime, int requestHops, String requestingPeer) {
super(id, MessageType.GET_ARBITRARY_DATA_FILE_LIST);
this.signature = signature;
this.hashes = hashes;
this.requestTime = requestTime;
this.requestHops = requestHops;
this.requestingPeer = requestingPeer;
}
public byte[] getSignature() {
@@ -46,7 +52,7 @@ public class GetArbitraryDataFileListMessage extends Message {
return this.hashes;
}
public static Message fromByteBuffer(int id, ByteBuffer bytes) throws UnsupportedEncodingException {
public static Message fromByteBuffer(int id, ByteBuffer bytes) throws UnsupportedEncodingException, TransformationException {
byte[] signature = new byte[SIGNATURE_LENGTH];
bytes.get(signature);
@@ -59,10 +65,6 @@ public class GetArbitraryDataFileListMessage extends Message {
if (bytes.hasRemaining()) {
int hashCount = bytes.getInt();
if (bytes.remaining() != hashCount * HASH_LENGTH) {
return null;
}
hashes = new ArrayList<>();
for (int i = 0; i < hashCount; ++i) {
byte[] hash = new byte[HASH_LENGTH];
@@ -71,7 +73,12 @@ public class GetArbitraryDataFileListMessage extends Message {
}
}
return new GetArbitraryDataFileListMessage(id, signature, hashes, requestTime, requestHops);
String requestingPeer = null;
if (bytes.hasRemaining()) {
requestingPeer = Serialization.deserializeSizedStringV2(bytes, MAX_PEER_ADDRESS_LENGTH);
}
return new GetArbitraryDataFileListMessage(id, signature, hashes, requestTime, requestHops, requestingPeer);
}
@Override
@@ -92,6 +99,13 @@ public class GetArbitraryDataFileListMessage extends Message {
bytes.write(hash);
}
}
else {
bytes.write(Ints.toByteArray(0));
}
if (this.requestingPeer != null) {
Serialization.serializeSizedStringV2(bytes, this.requestingPeer);
}
return bytes.toByteArray();
} catch (IOException e) {
@@ -110,4 +124,8 @@ public class GetArbitraryDataFileListMessage extends Message {
this.requestHops = requestHops;
}
public String getRequestingPeer() {
return this.requestingPeer;
}
}
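The message above gains an optional trailing requestingPeer field while staying wire-compatible with older peers, which simply stop reading before the new field. An illustrative sketch of how a requester could build and send the updated message once enough of the network supports it (the code above currently passes null, per its FUTURE comment):

// Build a file list request: the hashes we're missing, the current time, zero hops
// (we are the origin), and optionally our own address so holders can connect back directly.
String requestingPeer = Network.getInstance().getOurExternalIpAddressAndPort(); // may be null
Message request = new GetArbitraryDataFileListMessage(signature, missingHashes, NTP.getTime(), 0, requestingPeer);
if (!peer.sendMessage(request))
    peer.disconnect("failed to request data file list");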

View File

@@ -149,6 +149,8 @@ public interface AccountRepository {
public RewardShareData getRewardShare(byte[] rewardSharePublicKey) throws DataException;
public List<byte[]> getRewardSharePublicKeys() throws DataException;
public boolean isRewardSharePublicKey(byte[] publicKey) throws DataException;
/** Returns number of active reward-shares involving passed public key as the minting account only. */

View File

@@ -30,17 +30,4 @@ public interface ArbitraryRepository {
public List<ArbitraryResourceNameInfo> getArbitraryResourceCreatorNames(Service service, String identifier, boolean defaultResource, Integer limit, Integer offset, Boolean reverse) throws DataException;
public List<ArbitraryPeerData> getArbitraryPeerDataForSignature(byte[] signature) throws DataException;
public ArbitraryPeerData getArbitraryPeerDataForSignatureAndPeer(byte[] signature, String peerAddress) throws DataException;
public ArbitraryPeerData getArbitraryPeerDataForSignatureAndHost(byte[] signature, String host) throws DataException;
public void save(ArbitraryPeerData arbitraryPeerData) throws DataException;
public void delete(ArbitraryPeerData arbitraryPeerData) throws DataException;
public void deleteArbitraryPeersWithSignature(byte[] signature) throws DataException;
}

View File

@@ -23,7 +23,7 @@ import java.util.*;
public class BlockArchiveReader {
private static BlockArchiveReader instance;
private Map<String, Triple<Integer, Integer, Integer>> fileListCache = Collections.synchronizedMap(new HashMap<>());
private Map<String, Triple<Integer, Integer, Integer>> fileListCache;
private static final Logger LOGGER = LogManager.getLogger(BlockArchiveReader.class);
@@ -63,11 +63,11 @@ public class BlockArchiveReader {
map.put(filename, new Triple(startHeight, endHeight, range));
}
}
this.fileListCache = map;
this.fileListCache = Map.copyOf(map);
}
public Triple<BlockData, List<TransactionData>, List<ATStateData>> fetchBlockAtHeight(int height) {
if (this.fileListCache.isEmpty()) {
if (this.fileListCache == null) {
this.fetchFileList();
}
@@ -94,7 +94,7 @@ public class BlockArchiveReader {
public Triple<BlockData, List<TransactionData>, List<ATStateData>> fetchBlockWithSignature(
byte[] signature, Repository repository) {
if (this.fileListCache.isEmpty()) {
if (this.fileListCache == null) {
this.fetchFileList();
}
@@ -145,22 +145,24 @@ public class BlockArchiveReader {
}
private String getFilenameForHeight(int height) {
synchronized (this.fileListCache) {
Iterator it = this.fileListCache.entrySet().iterator();
while (it.hasNext()) {
Map.Entry pair = (Map.Entry) it.next();
if (pair == null || pair.getKey() == null || pair.getValue() == null) {
continue;
}
Triple<Integer, Integer, Integer> heightInfo = (Triple<Integer, Integer, Integer>) pair.getValue();
Integer startHeight = heightInfo.getA();
Integer endHeight = heightInfo.getB();
if (this.fileListCache == null) {
this.fetchFileList();
}
if (height >= startHeight && height <= endHeight) {
// Found the correct file
String filename = (String) pair.getKey();
return filename;
}
Iterator it = this.fileListCache.entrySet().iterator();
while (it.hasNext()) {
Map.Entry pair = (Map.Entry) it.next();
if (pair == null || pair.getKey() == null || pair.getValue() == null) {
continue;
}
Triple<Integer, Integer, Integer> heightInfo = (Triple<Integer, Integer, Integer>) pair.getValue();
Integer startHeight = heightInfo.getA();
Integer endHeight = heightInfo.getB();
if (height >= startHeight && height <= endHeight) {
// Found the correct file
String filename = (String) pair.getKey();
return filename;
}
}
@@ -168,8 +170,7 @@ public class BlockArchiveReader {
}
public byte[] fetchSerializedBlockBytesForSignature(byte[] signature, boolean includeHeightPrefix, Repository repository) {
if (this.fileListCache.isEmpty()) {
if (this.fileListCache == null) {
this.fetchFileList();
}
@@ -280,7 +281,7 @@ public class BlockArchiveReader {
}
public void invalidateFileListCache() {
this.fileListCache.clear();
this.fileListCache = null;
}
}
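The change above replaces an always-present mutable map with a nullable immutable snapshot: invalidation now sets the cache to null, and the next read rebuilds it via Map.copyOf. A minimal sketch of that pattern, with placeholder names (LazyFileListCache, the int[] value type) rather than the real BlockArchiveReader members, and with thread-safety simplified:
import java.util.HashMap;
import java.util.Map;
// Illustrative only - not the real BlockArchiveReader.
public class LazyFileListCache {
    private Map<String, int[]> fileListCache; // null means "needs rebuilding"
    private synchronized Map<String, int[]> getFileList() {
        if (this.fileListCache == null) {
            Map<String, int[]> map = new HashMap<>();
            // ... populate map by scanning the archive directory ...
            this.fileListCache = Map.copyOf(map); // immutable snapshot (Java 10+)
        }
        return this.fileListCache;
    }
    public synchronized void invalidate() {
        this.fileListCache = null; // next access triggers a rebuild
    }
}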

View File

@@ -633,6 +633,27 @@ public class HSQLDBAccountRepository implements AccountRepository {
}
}
@Override
public List<byte[]> getRewardSharePublicKeys() throws DataException {
String sql = "SELECT reward_share_public_key FROM RewardShares ORDER BY reward_share_public_key";
List<byte[]> rewardSharePublicKeys = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
if (resultSet == null)
return null;
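// Note: checkedExecute is assumed to position the ResultSet on its first row
// (and to return null when there are no rows), hence the do-while loop below.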
do {
byte[] rewardSharePublicKey = resultSet.getBytes(1);
rewardSharePublicKeys.add(rewardSharePublicKey);
} while (resultSet.next());
return rewardSharePublicKeys;
} catch (SQLException e) {
throw new DataException("Unable to fetch reward-share public keys from repository", e);
}
}
@Override
public boolean isRewardSharePublicKey(byte[] publicKey) throws DataException {
try {

View File

@@ -499,149 +499,4 @@ public class HSQLDBArbitraryRepository implements ArbitraryRepository {
}
}
// Peer file tracking
/**
* Fetch a list of peers that have reported to be holding chunks related to
* supplied transaction signature.
* @param signature
* @return a list of ArbitraryPeerData objects, or null if none found
* @throws DataException
*/
@Override
public List<ArbitraryPeerData> getArbitraryPeerDataForSignature(byte[] signature) throws DataException {
// Hash the signature so it fits within 32 bytes
byte[] hashedSignature = Crypto.digest(signature);
String sql = "SELECT hash, peer_address, successes, failures, last_attempted, last_retrieved " +
"FROM ArbitraryPeers " +
"WHERE hash = ?";
List<ArbitraryPeerData> arbitraryPeerData = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(sql, hashedSignature)) {
if (resultSet == null)
return null;
do {
byte[] hash = resultSet.getBytes(1);
String peerAddr = resultSet.getString(2);
Integer successes = resultSet.getInt(3);
Integer failures = resultSet.getInt(4);
Long lastAttempted = resultSet.getLong(5);
Long lastRetrieved = resultSet.getLong(6);
ArbitraryPeerData peerData = new ArbitraryPeerData(hash, peerAddr, successes, failures,
lastAttempted, lastRetrieved);
arbitraryPeerData.add(peerData);
} while (resultSet.next());
return arbitraryPeerData;
} catch (SQLException e) {
throw new DataException("Unable to fetch arbitrary peer data from repository", e);
}
}
public ArbitraryPeerData getArbitraryPeerDataForSignatureAndPeer(byte[] signature, String peerAddress) throws DataException {
// Hash the signature so it fits within 32 bytes
byte[] hashedSignature = Crypto.digest(signature);
String sql = "SELECT hash, peer_address, successes, failures, last_attempted, last_retrieved " +
"FROM ArbitraryPeers " +
"WHERE hash = ? AND peer_address = ?";
try (ResultSet resultSet = this.repository.checkedExecute(sql, hashedSignature, peerAddress)) {
if (resultSet == null)
return null;
byte[] hash = resultSet.getBytes(1);
String peerAddr = resultSet.getString(2);
Integer successes = resultSet.getInt(3);
Integer failures = resultSet.getInt(4);
Long lastAttempted = resultSet.getLong(5);
Long lastRetrieved = resultSet.getLong(6);
ArbitraryPeerData arbitraryPeerData = new ArbitraryPeerData(hash, peerAddr, successes, failures,
lastAttempted, lastRetrieved);
return arbitraryPeerData;
} catch (SQLException e) {
throw new DataException("Unable to fetch arbitrary peer data from repository", e);
}
}
public ArbitraryPeerData getArbitraryPeerDataForSignatureAndHost(byte[] signature, String host) throws DataException {
// Hash the signature so it fits within 32 bytes
byte[] hashedSignature = Crypto.digest(signature);
// Create a host wildcard string which allows any port
String hostWildcard = String.format("%s:%%", host);
String sql = "SELECT hash, peer_address, successes, failures, last_attempted, last_retrieved " +
"FROM ArbitraryPeers " +
"WHERE hash = ? AND peer_address LIKE ?";
try (ResultSet resultSet = this.repository.checkedExecute(sql, hashedSignature, hostWildcard)) {
if (resultSet == null)
return null;
byte[] hash = resultSet.getBytes(1);
String peerAddr = resultSet.getString(2);
Integer successes = resultSet.getInt(3);
Integer failures = resultSet.getInt(4);
Long lastAttempted = resultSet.getLong(5);
Long lastRetrieved = resultSet.getLong(6);
ArbitraryPeerData arbitraryPeerData = new ArbitraryPeerData(hash, peerAddr, successes, failures,
lastAttempted, lastRetrieved);
return arbitraryPeerData;
} catch (SQLException e) {
throw new DataException("Unable to fetch arbitrary peer data from repository", e);
}
}
@Override
public void save(ArbitraryPeerData arbitraryPeerData) throws DataException {
HSQLDBSaver saveHelper = new HSQLDBSaver("ArbitraryPeers");
saveHelper.bind("hash", arbitraryPeerData.getHash())
.bind("peer_address", arbitraryPeerData.getPeerAddress())
.bind("successes", arbitraryPeerData.getSuccesses())
.bind("failures", arbitraryPeerData.getFailures())
.bind("last_attempted", arbitraryPeerData.getLastAttempted())
.bind("last_retrieved", arbitraryPeerData.getLastRetrieved());
try {
saveHelper.execute(this.repository);
} catch (SQLException e) {
throw new DataException("Unable to save ArbitraryPeerData into repository", e);
}
}
@Override
public void delete(ArbitraryPeerData arbitraryPeerData) throws DataException {
try {
// Remove peer/hash combination
this.repository.delete("ArbitraryPeers", "hash = ? AND peer_address = ?",
arbitraryPeerData.getHash(), arbitraryPeerData.getPeerAddress());
} catch (SQLException e) {
throw new DataException("Unable to delete arbitrary peer data from repository", e);
}
}
@Override
public void deleteArbitraryPeersWithSignature(byte[] signature) throws DataException {
byte[] hash = Crypto.digest(signature);
try {
// Remove all records of peers hosting supplied signature
this.repository.delete("ArbitraryPeers", "hash = ?", hash);
} catch (SQLException e) {
throw new DataException("Unable to delete arbitrary peer data from repository", e);
}
}
}

View File

@@ -40,8 +40,8 @@ public class HSQLDBDatabaseArchiving {
return false;
}
LOGGER.info("Building block archive - this process could take a while... (approx. 15 mins on high spec)");
SplashFrame.getInstance().updateStatus("Building block archive (takes 60+ mins)...");
LOGGER.info("Building block archive - this process could take a while...");
SplashFrame.getInstance().updateStatus("Building block archive...");
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
int startHeight = 0;

View File

@@ -959,6 +959,11 @@ public class HSQLDBDatabaseUpdates {
stmt.execute("CREATE INDEX SellNameNameIndex ON SellNameTransactions (name)");
break;
case 41:
// Drop the ArbitraryPeers table as it's no longer needed
stmt.execute("DROP TABLE ArbitraryPeers");
break;
default:
// nothing to do
return false;

View File

@@ -190,7 +190,7 @@ public class Settings {
/** Maximum number of peer connections we allow. */
private int maxPeers = 32;
/** Maximum number of threads for network engine. */
private int maxNetworkThreadPoolSize = 20;
private int maxNetworkThreadPoolSize = 32;
/** Maximum number of threads for network proof-of-work compute, used during handshaking. */
private int networkPoWComputePoolSize = 2;
/** Maximum number of retry attempts if a peer fails to respond with the requested data */
@@ -245,7 +245,6 @@ public class Settings {
private String[] bootstrapHosts = new String[] {
"http://bootstrap.qortal.org",
"http://bootstrap2.qortal.org",
"http://81.169.136.59",
"http://62.171.190.193"
};

View File

@@ -14,9 +14,6 @@ import org.qortal.data.PaymentData;
import org.qortal.data.naming.NameData;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.network.Network;
import org.qortal.network.message.ArbitrarySignaturesMessage;
import org.qortal.network.message.Message;
import org.qortal.payment.Payment;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
@@ -222,15 +219,6 @@ public class ArbitraryTransaction extends Transaction {
if (arbitraryTransactionData.getName() != null) {
ArbitraryDataManager.getInstance().invalidateCache(arbitraryTransactionData);
}
// We also need to broadcast to the network that we are now hosting files for this transaction,
// but only if these files are in accordance with our storage policy
if (ArbitraryDataStorageManager.getInstance().canStoreData(arbitraryTransactionData)) {
// Use a null peer address to indicate our own
byte[] signature = arbitraryTransactionData.getSignature();
Message arbitrarySignatureMessage = new ArbitrarySignaturesMessage(null, 0, Arrays.asList(signature));
Network.getInstance().broadcast(broadcastPeer -> arbitrarySignatureMessage);
}
}
}

View File

@@ -13,6 +13,7 @@ import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.account.Account;
import org.qortal.controller.Controller;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.controller.tradebot.TradeBot;
import org.qortal.crosschain.ACCT;
import org.qortal.crosschain.SupportedBlockchain;
@@ -48,7 +49,7 @@ public class PresenceTransaction extends Transaction {
REWARD_SHARE(0) {
@Override
public long getLifetime() {
return Controller.ONLINE_TIMESTAMP_MODULUS;
return OnlineAccountsManager.ONLINE_TIMESTAMP_MODULUS;
}
},
TRADE_BOT(1) {
@@ -209,6 +210,9 @@ public class PresenceTransaction extends Transaction {
@Override
public boolean isSignatureValid() {
return false;
/*
byte[] signature = this.transactionData.getSignature();
if (signature == null)
return false;
@@ -231,6 +235,7 @@ public class PresenceTransaction extends Transaction {
// Check nonce
return MemoryPoW.verify2(transactionBytes, POW_BUFFER_SIZE, POW_DIFFICULTY, nonce);
*/
}
/**

View File

@@ -118,10 +118,13 @@ public class UpdateNameTransaction extends Transaction {
if (!owner.getAddress().equals(nameData.getOwner()))
return ValidationResult.INVALID_NAME_OWNER;
// Check new name isn't already taken, unless it is the same name (this allows for case-adjusting renames)
NameData newNameData = this.repository.getNameRepository().fromReducedName(this.updateNameTransactionData.getReducedNewName());
if (newNameData != null && !newNameData.getName().equals(nameData.getName()))
return ValidationResult.NAME_ALREADY_REGISTERED;
// Additional checks if transaction intends to change name
if (!this.updateNameTransactionData.getNewName().isEmpty()) {
// Check new name isn't already taken, unless it is the same name (this allows for case-adjusting renames)
NameData newNameData = this.repository.getNameRepository().fromReducedName(this.updateNameTransactionData.getReducedNewName());
if (newNameData != null && !newNameData.getName().equals(nameData.getName()))
return ValidationResult.NAME_ALREADY_REGISTERED;
}
return ValidationResult.OK;
}
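To make the new guard concrete, here is a minimal test-style sketch of a data-only update (newName left empty), reusing the helpers that appear in the naming tests later in this changeset. The class name, test name and final assertion are illustrative assumptions rather than part of this diff.
import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.data.transaction.*;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.test.common.Common;
import org.qortal.test.common.TransactionUtils;
import org.qortal.test.common.transaction.TestTransaction;
import org.qortal.transaction.RegisterNameTransaction;
import static org.junit.Assert.assertEquals;
// Illustrative sketch only - not part of this changeset.
public class DataOnlyUpdateNameSketch extends Common {
    @Before
    public void beforeTest() throws DataException {
        Common.useDefaultSettings();
    }
    @Test
    public void testDataOnlyUpdateSkipsCollisionCheck() throws DataException {
        try (final Repository repository = RepositoryManager.getRepository()) {
            PrivateKeyAccount alice = Common.getTestAccount(repository, "alice");
            // Register a name first
            String name = "data-only-name";
            RegisterNameTransactionData registerData = new RegisterNameTransactionData(TestTransaction.generateBase(alice), name, "{}");
            registerData.setFee(new RegisterNameTransaction(null, null).getUnitFee(registerData.getTimestamp()));
            TransactionUtils.signAndMint(repository, registerData, alice);
            // Data-only update: newName is empty, so the NAME_ALREADY_REGISTERED check above is skipped
            TransactionData updateData = new UpdateNameTransactionData(TestTransaction.generateBase(alice), name, "", "{\"age\":30}");
            TransactionUtils.signAndMint(repository, updateData, alice);
            // The original name should remain registered, now carrying the new data
            assertEquals("{\"age\":30}", repository.getNameRepository().fromName(name).getData());
        }
    }
}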

View File

@@ -114,8 +114,10 @@ public abstract class ExecuteProduceConsume implements Runnable {
if (this.activeThreadCount > this.greatestActiveThreadCount)
this.greatestActiveThreadCount = this.activeThreadCount;
this.logger.trace(() -> String.format("[%d] started, hasThreadPending was: %b, activeThreadCount now: %d",
Thread.currentThread().getId(), this.hasThreadPending, this.activeThreadCount));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] started, hasThreadPending was: %b, activeThreadCount now: %d",
Thread.currentThread().getId(), this.hasThreadPending, this.activeThreadCount));
}
// Defer clearing hasThreadPending to prevent unnecessary threads waiting to produce...
wasThreadPending = this.hasThreadPending;
@@ -128,7 +130,9 @@ public abstract class ExecuteProduceConsume implements Runnable {
while (!Thread.currentThread().isInterrupted()) {
Task task = null;
this.logger.trace(() -> String.format("[%d] waiting to produce...", Thread.currentThread().getId()));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] waiting to produce...", Thread.currentThread().getId()));
}
synchronized (this) {
if (wasThreadPending) {
@@ -138,8 +142,10 @@ public abstract class ExecuteProduceConsume implements Runnable {
}
final boolean lambdaCanIdle = canBlock;
this.logger.trace(() -> String.format("[%d] producing, activeThreadCount: %d, consumerCount: %d, canBlock is %b...",
Thread.currentThread().getId(), this.activeThreadCount, this.consumerCount, lambdaCanIdle));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] producing, activeThreadCount: %d, consumerCount: %d, canBlock is %b...",
Thread.currentThread().getId(), this.activeThreadCount, this.consumerCount, lambdaCanIdle));
}
final long beforeProduce = isLoggerTraceEnabled ? System.currentTimeMillis() : 0;
@@ -152,18 +158,24 @@ public abstract class ExecuteProduceConsume implements Runnable {
this.logger.warn(() -> String.format("[%d] exception while trying to produce task", Thread.currentThread().getId()), e);
}
this.logger.trace(() -> String.format("[%d] producing took %dms", Thread.currentThread().getId(), System.currentTimeMillis() - beforeProduce));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] producing took %dms", Thread.currentThread().getId(), System.currentTimeMillis() - beforeProduce));
}
}
if (task == null)
synchronized (this) {
this.logger.trace(() -> String.format("[%d] no task, activeThreadCount: %d, consumerCount: %d",
Thread.currentThread().getId(), this.activeThreadCount, this.consumerCount));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] no task, activeThreadCount: %d, consumerCount: %d",
Thread.currentThread().getId(), this.activeThreadCount, this.consumerCount));
}
if (this.activeThreadCount > this.consumerCount + 1) {
--this.activeThreadCount;
this.logger.trace(() -> String.format("[%d] ending, activeThreadCount now: %d",
Thread.currentThread().getId(), this.activeThreadCount));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] ending, activeThreadCount now: %d",
Thread.currentThread().getId(), this.activeThreadCount));
}
return;
}
@@ -180,12 +192,16 @@ public abstract class ExecuteProduceConsume implements Runnable {
++this.tasksProduced;
++this.consumerCount;
this.logger.trace(() -> String.format("[%d] hasThreadPending: %b, activeThreadCount: %d, consumerCount now: %d",
Thread.currentThread().getId(), this.hasThreadPending, this.activeThreadCount, this.consumerCount));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] hasThreadPending: %b, activeThreadCount: %d, consumerCount now: %d",
Thread.currentThread().getId(), this.hasThreadPending, this.activeThreadCount, this.consumerCount));
}
// If we have no thread pending and no excess of threads then we should spawn a fresh thread
if (!this.hasThreadPending && this.activeThreadCount <= this.consumerCount + 1) {
this.logger.trace(() -> String.format("[%d] spawning another thread", Thread.currentThread().getId()));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] spawning another thread", Thread.currentThread().getId()));
}
this.hasThreadPending = true;
try {
@@ -193,15 +209,21 @@ public abstract class ExecuteProduceConsume implements Runnable {
} catch (RejectedExecutionException e) {
++this.spawnFailures;
this.hasThreadPending = false;
this.logger.trace(() -> String.format("[%d] failed to spawn another thread", Thread.currentThread().getId()));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] failed to spawn another thread", Thread.currentThread().getId()));
}
this.onSpawnFailure();
}
} else {
this.logger.trace(() -> String.format("[%d] NOT spawning another thread", Thread.currentThread().getId()));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] NOT spawning another thread", Thread.currentThread().getId()));
}
}
}
this.logger.trace(() -> String.format("[%d] performing task...", Thread.currentThread().getId()));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] performing task...", Thread.currentThread().getId()));
}
try {
task.perform(); // This can block for a while
@@ -212,14 +234,18 @@ public abstract class ExecuteProduceConsume implements Runnable {
this.logger.warn(() -> String.format("[%d] exception while performing task", Thread.currentThread().getId()), e);
}
this.logger.trace(() -> String.format("[%d] finished task", Thread.currentThread().getId()));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] finished task", Thread.currentThread().getId()));
}
synchronized (this) {
++this.tasksConsumed;
--this.consumerCount;
this.logger.trace(() -> String.format("[%d] consumerCount now: %d",
Thread.currentThread().getId(), this.consumerCount));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] consumerCount now: %d",
Thread.currentThread().getId(), this.consumerCount));
}
// Quicker, non-blocking produce next round
canBlock = false;
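The repeated change above wraps every trace call in a cached isLoggerTraceEnabled flag: the Supplier already defers String.format, but the guard also avoids allocating the capturing lambda and re-querying the log level on every hot-loop iteration. A minimal sketch of the pattern; class and method names are illustrative, not the real ExecuteProduceConsume members.
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
// Illustrative only.
public class HotLoopExample {
    private final Logger logger = LogManager.getLogger(HotLoopExample.class);
    // Cached once, as in the diff above; a TRACE level change after construction won't be picked up.
    private final boolean isLoggerTraceEnabled = this.logger.isTraceEnabled();
    public void run(int iterations) {
        for (int i = 0; i < iterations; i++) {
            if (this.isLoggerTraceEnabled) {
                final int iteration = i;
                // The guard skips lambda creation and the level check when TRACE is off
                this.logger.trace(() -> String.format("[%d] iteration %d",
                        Thread.currentThread().getId(), iteration));
            }
            // ... perform the actual work here ...
        }
    }
}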

View File

@@ -18,6 +18,9 @@ import java.util.TreeMap;
import com.google.common.base.CharMatcher;
import com.ibm.icu.text.CaseMap;
import com.ibm.icu.text.Normalizer2;
import com.ibm.icu.text.UnicodeSet;
import net.codebox.homoglyph.HomoglyphBuilder;
public abstract class Unicode {
@@ -31,6 +34,8 @@ public abstract class Unicode {
public static final String ZERO_WIDTH_NO_BREAK_SPACE = "\ufeff";
public static final CharMatcher ZERO_WIDTH_CHAR_MATCHER = CharMatcher.anyOf(ZERO_WIDTH_SPACE + ZERO_WIDTH_NON_JOINER + ZERO_WIDTH_JOINER + WORD_JOINER + ZERO_WIDTH_NO_BREAK_SPACE);
private static final UnicodeSet removableUniset = new UnicodeSet("[[:Mark:][:Other:]]").freeze();
private static int[] homoglyphCodePoints;
private static int[] reducedCodePoints;
@@ -59,7 +64,7 @@ public abstract class Unicode {
public static String normalize(String input) {
String output;
// Normalize
// Normalize using NFKC to recompose in canonical form
output = Normalizer.normalize(input, Form.NFKC);
// Remove zero-width code-points, used for rendering
@@ -91,8 +96,8 @@ public abstract class Unicode {
public static String sanitize(String input) {
String output;
// Normalize
output = Normalizer.normalize(input, Form.NFKD);
// Normalize using NFKD to decompose into individual combining code points
output = Normalizer2.getNFKDInstance().normalize(input);
// Remove zero-width code-points, used for rendering
output = removeZeroWidth(output);
@@ -100,11 +105,11 @@ public abstract class Unicode {
// Normalize whitespace
output = CharMatcher.whitespace().trimAndCollapseFrom(output, ' ');
// Remove accents, combining marks
output = output.replaceAll("[\\p{M}\\p{C}]", "");
// Remove accents, combining marks - see https://www.unicode.org/reports/tr44/#GC_Values_Table
output = removableUniset.stripFrom(output, true);
// Convert to lowercase
output = output.toLowerCase(Locale.ROOT);
output = CaseMap.toLower().apply(Locale.ROOT, output);
// Reduce homoglyphs
output = reduceHomoglyphs(output);
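Taken together, the sanitize() changes above move the pipeline onto ICU4J: NFKD decomposition, a frozen UnicodeSet to strip Mark/Other code points, and CaseMap for locale-independent lowercasing. A minimal standalone sketch of just those three steps (omitting the zero-width and homoglyph handling), assuming ICU4J is on the classpath; the class name is illustrative.
import java.util.Locale;
import com.ibm.icu.text.CaseMap;
import com.ibm.icu.text.Normalizer2;
import com.ibm.icu.text.UnicodeSet;
// Illustrative only - not the real Unicode class.
public class SanitizeSketch {
    // General_Category Mark (combining marks) and Other (control, format, surrogate, private use, unassigned)
    private static final UnicodeSet REMOVABLE = new UnicodeSet("[[:Mark:][:Other:]]").freeze();
    public static String sanitize(String input) {
        // NFKD decomposes characters so accents become separate combining code points
        String output = Normalizer2.getNFKDInstance().normalize(input);
        // Strip the removable categories in one pass
        output = REMOVABLE.stripFrom(output, true);
        // Locale-independent lowercasing via ICU
        return CaseMap.toLower().apply(Locale.ROOT, output);
    }
}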

View File

@@ -128,6 +128,8 @@ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLI
<canvas id="c"></canvas>
<!-- partial -->
<script>
var theme = "%%THEME%%";
var w = c.width = window.innerWidth,
h = c.height = window.innerHeight,
ctx = c.getContext( '2d' ),
@@ -165,7 +167,12 @@ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLI
baseRad = Math.PI * 2 / 6;
ctx.fillStyle = 'white';
if (theme === "dark") {
ctx.fillStyle = 'black';
}
else {
ctx.fillStyle = 'white';
}
ctx.fillRect( 0, 0, w, h );
function loop() {
@@ -176,9 +183,17 @@ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLI
ctx.globalCompositeOperation = 'source-over';
ctx.shadowBlur = 0;
ctx.fillStyle = 'rgba(230,230,230,alp)'.replace( 'alp', opts.repaintAlpha );
ctx.fillRect( 0, 0, w, h );
ctx.globalCompositeOperation = 'darker';
if (theme === "dark") {
ctx.fillStyle = 'rgba(0,0,0,alp)'.replace('alp', opts.repaintAlpha);
ctx.fillRect( 0, 0, w, h );
ctx.globalCompositeOperation = 'lighter';
}
else {
ctx.fillStyle = 'rgba(230,230,230,alp)'.replace('alp', opts.repaintAlpha);
ctx.fillRect( 0, 0, w, h );
ctx.globalCompositeOperation = 'darker';
}
if( lines.length < opts.count && Math.random() < opts.spawnChance )
lines.push( new Line );

View File

@@ -17,6 +17,8 @@ import org.qortal.utils.NTP;
import java.io.IOException;
import java.io.PrintWriter;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
@@ -206,6 +208,37 @@ public class BootstrapTests extends Common {
assertEquals(uniqueHosts.size(), Arrays.asList(bootstrapHosts).size());
}
@Test
public void testBootstrapHosts() throws IOException {
String[] bootstrapHosts = Settings.getInstance().getBootstrapHosts();
String[] bootstrapTypes = { "archive", "toponly" };
for (String host : bootstrapHosts) {
for (String type : bootstrapTypes) {
String bootstrapFilename = String.format("bootstrap-%s.7z", type);
String bootstrapUrl = String.format("%s/%s", host, bootstrapFilename);
// Make a HEAD request to check the status of each bootstrap file
URL url = new URL(bootstrapUrl);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("HEAD");
connection.connect();
long fileSize = connection.getContentLengthLong();
long lastModified = connection.getLastModified();
connection.disconnect();
// Ensure the bootstrap exists and has a size greater than 100MiB
System.out.println(String.format("%s %s size is %d bytes", host, type, fileSize));
assertTrue("Bootstrap size must be at least 100MiB", fileSize > 100*1024*1024L);
// Ensure the bootstrap has been published recently (in the last 3 days)
long minimumLastModifiedTimestamp = NTP.getTime() - (3 * 24 * 60 * 60 * 1000L);
System.out.println(String.format("%s %s last modified timestamp is %d", host, type, lastModified));
assertTrue("Bootstrap last modified date must be in the last 3 days", lastModified > minimumLastMofifiedTimestamp);
}
}
}
private void deleteBootstraps() throws IOException {
try {
Path archivePath = Paths.get(String.format("%s%s", Settings.getInstance().getBootstrapFilenamePrefix(), "bootstrap-archive.7z"));

View File

@@ -1,133 +0,0 @@
package org.qortal.test;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.asset.Asset;
import org.qortal.crosschain.BitcoinACCTv1;
import org.qortal.data.transaction.BaseTransactionData;
import org.qortal.data.transaction.DeployAtTransactionData;
import org.qortal.data.transaction.PresenceTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.group.Group;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.test.common.BlockUtils;
import org.qortal.test.common.Common;
import org.qortal.test.common.TransactionUtils;
import org.qortal.transaction.DeployAtTransaction;
import org.qortal.transaction.PresenceTransaction;
import org.qortal.transaction.PresenceTransaction.PresenceType;
import org.qortal.transaction.Transaction;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.utils.NTP;
import com.google.common.primitives.Longs;
import static org.junit.Assert.*;
public class PresenceTests extends Common {
private static final byte[] BITCOIN_PKH = new byte[20];
private static final byte[] HASH_OF_SECRET_B = new byte[32];
private PrivateKeyAccount signer;
private Repository repository;
@Before
public void beforeTest() throws DataException {
Common.useDefaultSettings();
this.repository = RepositoryManager.getRepository();
this.signer = Common.getTestAccount(this.repository, "bob");
// We need to create corresponding test trade offer
byte[] creationBytes = BitcoinACCTv1.buildQortalAT(this.signer.getAddress(), BITCOIN_PKH, HASH_OF_SECRET_B,
0L, 0L,
7 * 24 * 60 * 60);
long txTimestamp = NTP.getTime();
byte[] lastReference = this.signer.getLastReference();
long fee = 0;
String name = "QORT-BTC cross-chain trade";
String description = "Qortal-Bitcoin cross-chain trade";
String atType = "ACCT";
String tags = "QORT-BTC ACCT";
BaseTransactionData baseTransactionData = new BaseTransactionData(txTimestamp, Group.NO_GROUP, lastReference, this.signer.getPublicKey(), fee, null);
TransactionData deployAtTransactionData = new DeployAtTransactionData(baseTransactionData, name, description, atType, tags, creationBytes, 1L, Asset.QORT);
Transaction deployAtTransaction = new DeployAtTransaction(repository, deployAtTransactionData);
fee = deployAtTransaction.calcRecommendedFee();
deployAtTransactionData.setFee(fee);
TransactionUtils.signAndImportValid(this.repository, deployAtTransactionData, this.signer);
BlockUtils.mintBlock(this.repository);
}
@After
public void afterTest() throws DataException {
if (this.repository != null)
this.repository.close();
this.repository = null;
}
@Test
public void validityTests() throws DataException {
long timestamp = System.currentTimeMillis();
byte[] timestampBytes = Longs.toByteArray(timestamp);
byte[] timestampSignature = this.signer.sign(timestampBytes);
assertTrue(isValid(Group.NO_GROUP, this.signer, timestamp, timestampSignature));
PrivateKeyAccount nonTrader = Common.getTestAccount(repository, "alice");
assertFalse(isValid(Group.NO_GROUP, nonTrader, timestamp, timestampSignature));
}
@Test
public void newestOnlyTests() throws DataException {
long OLDER_TIMESTAMP = System.currentTimeMillis() - 2000L;
long NEWER_TIMESTAMP = OLDER_TIMESTAMP + 1000L;
PresenceTransaction older = buildPresenceTransaction(Group.NO_GROUP, this.signer, OLDER_TIMESTAMP, null);
older.computeNonce();
TransactionUtils.signAndImportValid(repository, older.getTransactionData(), this.signer);
assertTrue(this.repository.getTransactionRepository().exists(older.getTransactionData().getSignature()));
PresenceTransaction newer = buildPresenceTransaction(Group.NO_GROUP, this.signer, NEWER_TIMESTAMP, null);
newer.computeNonce();
TransactionUtils.signAndImportValid(repository, newer.getTransactionData(), this.signer);
assertTrue(this.repository.getTransactionRepository().exists(newer.getTransactionData().getSignature()));
assertFalse(this.repository.getTransactionRepository().exists(older.getTransactionData().getSignature()));
}
private boolean isValid(int txGroupId, PrivateKeyAccount signer, long timestamp, byte[] timestampSignature) throws DataException {
Transaction transaction = buildPresenceTransaction(txGroupId, signer, timestamp, timestampSignature);
return transaction.isValidUnconfirmed() == ValidationResult.OK;
}
private PresenceTransaction buildPresenceTransaction(int txGroupId, PrivateKeyAccount signer, long timestamp, byte[] timestampSignature) throws DataException {
int nonce = 0;
byte[] reference = signer.getLastReference();
byte[] creatorPublicKey = signer.getPublicKey();
long fee = 0L;
if (timestampSignature == null)
timestampSignature = this.signer.sign(Longs.toByteArray(timestamp));
BaseTransactionData baseTransactionData = new BaseTransactionData(timestamp, txGroupId, reference, creatorPublicKey, fee, null);
PresenceTransactionData transactionData = new PresenceTransactionData(baseTransactionData, nonce, PresenceType.TRADE_BOT, timestampSignature);
return new PresenceTransaction(this.repository, transactionData);
}
}

View File

@@ -35,4 +35,41 @@ public class UnicodeTests {
assertEquals("strings should match", Unicode.sanitize(input1), Unicode.sanitize(input2));
}
@Test
public void testEmojis() {
/*
* Emojis shouldn't reduce down to empty strings.
*
* 🥳 Face with Party Horn and Party Hat Emoji U+1F973
*/
String emojis = "\uD83E\uDD73";
assertFalse(Unicode.sanitize(emojis).isBlank());
}
@Test
public void testSanitize() {
/*
* Check various code points that should be stripped out when sanitizing / reducing
*/
String enclosingCombiningMark = "\u1100\u1161\u20DD"; // \u20DD is an enclosing combining mark and should be removed
String spacingMark = "\u0A39\u0A3f"; // \u0A3f is spacing combining mark and should be removed
String nonspacingMark = "c\u0302"; // \u0302 is a non-spacing combining mark and should be removed
assertNotSame(enclosingCombiningMark, Unicode.sanitize(enclosingCombiningMark));
assertNotSame(spacingMark, Unicode.sanitize(spacingMark));
assertNotSame(nonspacingMark, Unicode.sanitize(nonspacingMark));
String control = "\u001B\u009E"; // \u001B and \u009E are control codes
String format = "\u202A\u2062"; // \u202A and \u2062 are zero-width formatting codes
String surrogate = "\uD800\uDFFF"; // surrogates
String privateUse = "\uE1E0"; // \uE000 - \uF8FF is private use area
String unassigned = "\uFAFA"; // \uFAFA is currently unassigned
assertTrue(Unicode.sanitize(control).isBlank());
assertTrue(Unicode.sanitize(format).isBlank());
assertTrue(Unicode.sanitize(surrogate).isBlank());
assertTrue(Unicode.sanitize(privateUse).isBlank());
assertTrue(Unicode.sanitize(unassigned).isBlank());
}
}

View File

@@ -1,155 +0,0 @@
package org.qortal.test.arbitrary;
import org.junit.Before;
import org.junit.Test;
import org.qortal.crypto.Crypto;
import org.qortal.data.network.ArbitraryPeerData;
import org.qortal.data.network.PeerData;
import org.qortal.network.Peer;
import org.qortal.network.PeerAddress;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.test.common.Common;
import org.qortal.utils.NTP;
import java.util.Random;
import static org.junit.Assert.*;
public class ArbitraryPeerTests extends Common {
@Before
public void beforeTest() throws DataException {
Common.useDefaultSettings();
}
@Test
public void testSaveArbitraryPeerData() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
String peerAddress = "123.124.125.126:12392";
String host = peerAddress.split(":")[0];
// Create random bytes to represent a signature
byte[] signature = new byte[64];
new Random().nextBytes(signature);
// Make sure we don't have an entry for this hash/peer combination
assertNull(repository.getArbitraryRepository().getArbitraryPeerDataForSignatureAndHost(signature, host));
// Now add this mapping to the db
Peer peer = new Peer(new PeerData(PeerAddress.fromString(peerAddress)));
ArbitraryPeerData arbitraryPeerData = new ArbitraryPeerData(signature, peer);
assertTrue(arbitraryPeerData.isPeerAddressValid());
repository.getArbitraryRepository().save(arbitraryPeerData);
// We should now have an entry for this hash/peer combination
ArbitraryPeerData retrievedArbitraryPeerData = repository.getArbitraryRepository()
.getArbitraryPeerDataForSignatureAndHost(signature, host);
assertNotNull(retrievedArbitraryPeerData);
// .. and its data should match what was saved
assertArrayEquals(Crypto.digest(signature), retrievedArbitraryPeerData.getHash());
assertEquals(peerAddress, retrievedArbitraryPeerData.getPeerAddress());
}
}
@Test
public void testUpdateArbitraryPeerData() throws DataException, InterruptedException {
try (final Repository repository = RepositoryManager.getRepository()) {
String peerAddress = "123.124.125.126:12392";
String host = peerAddress.split(":")[0];
// Create random bytes to represent a signature
byte[] signature = new byte[64];
new Random().nextBytes(signature);
// Make sure we don't have an entry for this hash/peer combination
assertNull(repository.getArbitraryRepository().getArbitraryPeerDataForSignatureAndHost(signature, host));
// Now add this mapping to the db
Peer peer = new Peer(new PeerData(PeerAddress.fromString(peerAddress)));
ArbitraryPeerData arbitraryPeerData = new ArbitraryPeerData(signature, peer);
assertTrue(arbitraryPeerData.isPeerAddressValid());
repository.getArbitraryRepository().save(arbitraryPeerData);
// We should now have an entry for this hash/peer combination
ArbitraryPeerData retrievedArbitraryPeerData = repository.getArbitraryRepository()
.getArbitraryPeerDataForSignatureAndHost(signature, host);
assertNotNull(retrievedArbitraryPeerData);
// .. and its data should match what was saved
assertArrayEquals(Crypto.digest(signature), retrievedArbitraryPeerData.getHash());
assertEquals(peerAddress, retrievedArbitraryPeerData.getPeerAddress());
// All stats should be zero
assertEquals(Integer.valueOf(0), retrievedArbitraryPeerData.getSuccesses());
assertEquals(Integer.valueOf(0), retrievedArbitraryPeerData.getFailures());
assertEquals(Long.valueOf(0), retrievedArbitraryPeerData.getLastAttempted());
assertEquals(Long.valueOf(0), retrievedArbitraryPeerData.getLastRetrieved());
// Now modify some values and re-save
retrievedArbitraryPeerData.incrementSuccesses(); retrievedArbitraryPeerData.incrementSuccesses(); // Twice
retrievedArbitraryPeerData.incrementFailures(); // Once
retrievedArbitraryPeerData.markAsAttempted();
Thread.sleep(100);
retrievedArbitraryPeerData.markAsRetrieved();
assertTrue(arbitraryPeerData.isPeerAddressValid());
repository.getArbitraryRepository().save(retrievedArbitraryPeerData);
// Retrieve data once again
ArbitraryPeerData updatedArbitraryPeerData = repository.getArbitraryRepository()
.getArbitraryPeerDataForSignatureAndHost(signature, host);
assertNotNull(updatedArbitraryPeerData);
// Check the values
assertArrayEquals(Crypto.digest(signature), updatedArbitraryPeerData.getHash());
assertEquals(peerAddress, updatedArbitraryPeerData.getPeerAddress());
assertEquals(Integer.valueOf(2), updatedArbitraryPeerData.getSuccesses());
assertEquals(Integer.valueOf(1), updatedArbitraryPeerData.getFailures());
assertTrue(updatedArbitraryPeerData.getLastRetrieved().longValue() > 0L);
assertTrue(updatedArbitraryPeerData.getLastAttempted().longValue() > 0L);
assertTrue(updatedArbitraryPeerData.getLastRetrieved() > updatedArbitraryPeerData.getLastAttempted());
assertTrue(NTP.getTime() - updatedArbitraryPeerData.getLastRetrieved() < 1000);
assertTrue(NTP.getTime() - updatedArbitraryPeerData.getLastAttempted() < 1000);
}
}
@Test
public void testDuplicatePeerHost() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
String peerAddress1 = "123.124.125.126:12392";
String peerAddress2 = "123.124.125.126:62392";
String host1 = peerAddress1.split(":")[0];
String host2 = peerAddress2.split(":")[0];
// Create random bytes to represent a signature
byte[] signature = new byte[64];
new Random().nextBytes(signature);
// Make sure we don't have an entry for these hash/peer combinations
assertNull(repository.getArbitraryRepository().getArbitraryPeerDataForSignatureAndHost(signature, host1));
assertNull(repository.getArbitraryRepository().getArbitraryPeerDataForSignatureAndHost(signature, host2));
// Now add this mapping to the db
Peer peer = new Peer(new PeerData(PeerAddress.fromString(peerAddress1)));
ArbitraryPeerData arbitraryPeerData = new ArbitraryPeerData(signature, peer);
assertTrue(arbitraryPeerData.isPeerAddressValid());
repository.getArbitraryRepository().save(arbitraryPeerData);
// We should now have an entry for this hash/peer combination
ArbitraryPeerData retrievedArbitraryPeerData = repository.getArbitraryRepository()
.getArbitraryPeerDataForSignatureAndHost(signature, host1);
assertNotNull(retrievedArbitraryPeerData);
// And we should also have an entry for the similar peerAddress string with a matching host
ArbitraryPeerData retrievedArbitraryPeerData2 = repository.getArbitraryRepository()
.getArbitraryPeerDataForSignatureAndHost(signature, host2);
assertNotNull(retrievedArbitraryPeerData2);
}
}
}

View File

@@ -8,7 +8,7 @@ import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.controller.BlockMinter;
import org.qortal.controller.Controller;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.data.account.RewardShareData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
@@ -77,7 +77,7 @@ public class BlocksMintedCountTests extends Common {
assertNotNull(testRewardShareData);
// Create signed timestamps
Controller.getInstance().ensureTestingAccountsOnline(mintingAccount, testRewardShareAccount);
OnlineAccountsManager.getInstance().ensureTestingAccountsOnline(mintingAccount, testRewardShareAccount);
// Even though Alice features in two online reward-shares, she should only gain +1 blocksMinted
// Bob only features in one online reward-share, so should also only gain +1 blocksMinted
@@ -87,7 +87,7 @@ public class BlocksMintedCountTests extends Common {
private void testRewardShare(Repository repository, PrivateKeyAccount testRewardShareAccount, int aliceDelta, int bobDelta) throws DataException {
// Create signed timestamps
Controller.getInstance().ensureTestingAccountsOnline(testRewardShareAccount);
OnlineAccountsManager.getInstance().ensureTestingAccountsOnline(testRewardShareAccount);
testRewardShareRetainingTimestamps(repository, testRewardShareAccount, aliceDelta, bobDelta);
}

View File

@@ -11,7 +11,7 @@ import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.controller.BlockMinter;
import org.qortal.controller.Controller;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.data.account.RewardShareData;
import org.qortal.data.block.BlockData;
import org.qortal.data.transaction.TransactionData;
@@ -73,7 +73,7 @@ public class DisagreementTests extends Common {
assertNotNull(testRewardShareData);
// Create signed timestamps
Controller.getInstance().ensureTestingAccountsOnline(mintingAccount, testRewardShareAccount);
OnlineAccountsManager.getInstance().ensureTestingAccountsOnline(mintingAccount, testRewardShareAccount);
// Mint another block
BlockMinter.mintTestingBlockRetainingTimestamps(repository, mintingAccount);

View File

@@ -1,20 +1,26 @@
package org.qortal.test.naming;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.controller.repository.NamesDatabaseIntegrityCheck;
import org.qortal.data.naming.NameData;
import org.qortal.data.transaction.*;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryFactory;
import org.qortal.repository.RepositoryManager;
import org.qortal.repository.hsqldb.HSQLDBRepositoryFactory;
import org.qortal.settings.Settings;
import org.qortal.test.common.Common;
import org.qortal.test.common.TransactionUtils;
import org.qortal.test.common.transaction.TestTransaction;
import org.qortal.transaction.RegisterNameTransaction;
import org.qortal.transaction.Transaction;
import org.qortal.utils.NTP;
import org.qortal.utils.Unicode;
import java.io.File;
import java.util.List;
import static org.junit.Assert.*;
@@ -50,31 +56,83 @@ public class IntegrityTests extends Common {
}
}
// Test integrity check after renaming to something else and then back again
// This was originally confusing the rebuildName() code and creating a loop
@Test
public void testBlankReducedName() throws DataException {
public void testUpdateNameLoop() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Register-name
PrivateKeyAccount alice = Common.getTestAccount(repository, "alice");
String name = "\uD83E\uDD73"; // Translates to a reducedName of ""
String data = "\uD83E\uDD73";
String initialName = "initial-name";
String initialData = "{\"age\":30}";
String initialReducedName = "initia1-name";
RegisterNameTransactionData transactionData = new RegisterNameTransactionData(TestTransaction.generateBase(alice), name, data);
transactionData.setFee(new RegisterNameTransaction(null, null).getUnitFee(transactionData.getTimestamp()));
TransactionUtils.signAndMint(repository, transactionData, alice);
TransactionData initialTransactionData = new RegisterNameTransactionData(TestTransaction.generateBase(alice), initialName, initialData);
initialTransactionData.setFee(new RegisterNameTransaction(null, null).getUnitFee(initialTransactionData.getTimestamp()));
TransactionUtils.signAndMint(repository, initialTransactionData, alice);
// Ensure the name exists and the data is correct
assertEquals(data, repository.getNameRepository().fromName(name).getData());
// Check initial name exists
assertTrue(repository.getNameRepository().nameExists(initialName));
assertNotNull(repository.getNameRepository().fromReducedName(initialReducedName));
// Ensure the reducedName is blank
assertEquals("", repository.getNameRepository().fromName(name).getReducedName());
// Update the name to something new
String newName = "new-name";
String newData = "";
String newReducedName = "new-name";
TransactionData updateTransactionData = new UpdateNameTransactionData(TestTransaction.generateBase(alice), initialName, newName, newData);
TransactionUtils.signAndMint(repository, updateTransactionData, alice);
// Run the database integrity check for this name
// Check old name no longer exists
assertFalse(repository.getNameRepository().nameExists(initialName));
assertNull(repository.getNameRepository().fromReducedName(initialReducedName));
// Check new name exists
assertTrue(repository.getNameRepository().nameExists(newName));
assertNotNull(repository.getNameRepository().fromReducedName(newReducedName));
// Check updated timestamp is correct
assertEquals((Long) updateTransactionData.getTimestamp(), repository.getNameRepository().fromName(newName).getUpdated());
// Update the name to another new name
String newName2 = "new-name-2";
String newData2 = "";
String newReducedName2 = "new-name-2";
TransactionData updateTransactionData2 = new UpdateNameTransactionData(TestTransaction.generateBase(alice), newName, newName2, newData2);
TransactionUtils.signAndMint(repository, updateTransactionData2, alice);
// Check old name no longer exists
assertFalse(repository.getNameRepository().nameExists(newName));
assertNull(repository.getNameRepository().fromReducedName(newReducedName));
// Check new name exists
assertTrue(repository.getNameRepository().nameExists(newName2));
assertNotNull(repository.getNameRepository().fromReducedName(newReducedName2));
// Check updated timestamp is correct
assertEquals((Long) updateTransactionData2.getTimestamp(), repository.getNameRepository().fromName(newName2).getUpdated());
// Update the name back to the initial name
TransactionData updateTransactionData3 = new UpdateNameTransactionData(TestTransaction.generateBase(alice), newName2, initialName, initialData);
TransactionUtils.signAndMint(repository, updateTransactionData3, alice);
// Check previous name no longer exists
assertFalse(repository.getNameRepository().nameExists(newName2));
assertNull(repository.getNameRepository().fromReducedName(newReducedName2));
// Check initial name exists again
assertTrue(repository.getNameRepository().nameExists(initialName));
assertNotNull(repository.getNameRepository().fromReducedName(initialReducedName));
// Check updated timestamp is correct
assertEquals((Long) updateTransactionData3.getTimestamp(), repository.getNameRepository().fromName(initialName).getUpdated());
// Run the database integrity check for the initial name, to ensure it doesn't get into a loop
NamesDatabaseIntegrityCheck integrityCheck = new NamesDatabaseIntegrityCheck();
assertEquals(1, integrityCheck.rebuildName(name, repository));
assertEquals(2, integrityCheck.rebuildName(initialName, repository));
// Ensure the name still exists and the data is still correct
assertEquals(data, repository.getNameRepository().fromName(name).getData());
assertEquals("", repository.getNameRepository().fromName(name).getReducedName());
// Ensure the new name still exists and the data is still correct
assertTrue(repository.getNameRepository().nameExists(initialName));
assertEquals(initialData, repository.getNameRepository().fromName(initialName).getData());
}
}
@@ -448,4 +506,46 @@ public class IntegrityTests extends Common {
}
}
@Ignore("Checks 'live' repository")
@Test
public void testRepository() throws DataException {
Settings.fileInstance("settings.json"); // use 'live' settings
String repositoryUrlTemplate = "jdbc:hsqldb:file:%s" + File.separator + "blockchain;create=false;hsqldb.full_log_replay=true";
String connectionUrl = String.format(repositoryUrlTemplate, Settings.getInstance().getRepositoryPath());
RepositoryFactory repositoryFactory = new HSQLDBRepositoryFactory(connectionUrl);
RepositoryManager.setRepositoryFactory(repositoryFactory);
try (final Repository repository = RepositoryManager.getRepository()) {
List<NameData> names = repository.getNameRepository().getAllNames();
for (NameData nameData : names) {
String reReduced = Unicode.sanitize(nameData.getName());
if (reReduced.isBlank()) {
System.err.println(String.format("Name '%s' reduced to blank",
nameData.getName()
));
}
if (!nameData.getReducedName().equals(reReduced)) {
System.out.println(String.format("Name '%s' reduced form was '%s' but is now '%s'",
nameData.getName(),
nameData.getReducedName(),
reReduced
));
// ...but does another name already have this reduced form?
names.stream()
.filter(tmpNameData -> tmpNameData.getReducedName().equals(reReduced))
.forEach(tmpNameData ->
System.err.println(String.format("Name '%s' new reduced form also matches name '%s'",
nameData.getName(),
tmpNameData.getName()
))
);
}
}
}
}
}

View File

@@ -95,7 +95,6 @@ cat <<__JAR__
### [${project}.jar](${git_url}/releases/download/${git_tag}/${project}.jar)
If built using OpenJDK 11:
__JAR__
3hash target/${project}*.jar