Compare commits

...

187 Commits

Author SHA1 Message Date
CalDescent
6bb9227159 Removed RavencoinACCTv1
Also removed CrossChainRavencoinACCTv1Resource - same as Digibyte.
2022-05-01 16:17:00 +01:00
QuickMythril
0f52ccb433 add Ravencoin ACCTs 2022-04-26 13:51:19 -04:00
CalDescent
f9972f50e0 Updated altcoinj 2022-04-24 15:08:43 +01:00
QuickMythril
05d9a7e820 Switched to Qortal fork of altcoinj, using RavencoinMainNetParams 2022-04-23 08:28:12 -04:00
QuickMythril
390b359761 add RVN wallet 2022-04-21 11:38:49 -04:00
CalDescent
311f41c610 Attempt to fix core startup problems on some systems (GNOME Desktop?) by adding defensiveness to GUI elements. 2022-04-20 08:41:37 +01:00
CalDescent
0a156c76a2 Fix for NPE observed on the EPC-fixes branch (but putting the fix on master in case unrelated) 2022-04-20 08:38:59 +01:00
CalDescent
337b03aa68 Catch java.util.ServiceConfigurationError in Gui.loadImage() 2022-04-17 17:59:29 +01:00
CalDescent
3d99f86630 Improved logging 2022-04-16 20:50:00 +01:00
CalDescent
8d1a58ec06 POW_DIFFICULTY_NO_QORT reduced from 14 to 12 (around 4x faster) 2022-04-16 12:36:32 +01:00
CalDescent
2e5a7cb5a1 Adapted Blockchain.java to use a lookup table for name registration fees, to more easily support fee adjustments.
This is currently for name registration transactions only, but can be adapted (or duplicated) for other transaction types when needed.

Note: this switches from a greater-than (>) to a greater-than-or-equal (>=) timestamp comparison, as it makes more sense this way. It shouldn't affect the previous transition since there were no REGISTER_NAME transactions at that exact timestamp.
2022-04-16 12:20:03 +01:00
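The lookup-table approach described above can be sketched as follows; the class, field names and fee values are illustrative, not the actual Blockchain.java code:

```java
import java.util.List;

// Minimal sketch of a timestamp-keyed fee lookup using a >= comparison.
// Class, field names and values are illustrative, not the actual Blockchain.java structure.
public class NameRegistrationFeeTable {

    public static class FeeEntry {
        public final long timestamp; // fee applies from this timestamp onwards (inclusive)
        public final long fee;       // fee in atomic units

        public FeeEntry(long timestamp, long fee) {
            this.timestamp = timestamp;
            this.fee = fee;
        }
    }

    private final List<FeeEntry> entries; // must be sorted by ascending timestamp

    public NameRegistrationFeeTable(List<FeeEntry> entries) {
        this.entries = entries;
    }

    /** Returns the fee in effect at the given timestamp. The >= comparison means a new fee
     *  applies from its exact activation timestamp, matching the behaviour described above. */
    public long feeAtTimestamp(long timestamp) {
        long fee = 0;
        for (FeeEntry entry : this.entries) {
            if (timestamp >= entry.timestamp)
                fee = entry.fee;
            else
                break;
        }
        return fee;
    }
}
```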
CalDescent
895f02f178 Remove peers with unknown height, lower height or same height and same block signature (unless we don't have their block signature)
Adapted from code originally written by catbref from before genesis, and essentially prevents syncing backwards. This needs significant testing on testnet.
2022-04-16 11:30:07 +01:00
CalDescent
c59869982b Fix for system-wide QDN issues occurring when the metadata file has an empty chunks array.
It is quite likely that existing resources with both metadata and an empty chunks array will need to be republished, because this bug may have led to incorrect file deletions.
2022-04-16 11:25:44 +01:00
CalDescent
3b3368f950 Merge pull request #85 from QuickMythril/member-count
Add member count to each group returned by GET /member/{address}
2022-04-16 11:00:35 +01:00
QuickMythril
3f02c760c2 Add member count to each group returned by GET /member/{address} 2022-04-15 06:23:10 -04:00
CalDescent
fee603e500 Add member count to each group returned by GET /groups (expanded on code written by QuickMythril) 2022-04-15 10:19:43 +01:00
QuickMythril
ad31d8014d get memberCount with Group Data
works for lookup by groupId
2022-04-14 22:08:52 -04:00
CalDescent
58a0ac74d2 Merge pull request #84 from catbref/ByteArray
Improvements to ByteArray to leverage Java 11 'native' Arrays methods
2022-04-14 21:30:59 +01:00
QuickMythril
8388aa9c23 update Russian translation
credit: Alexander45 & malina
2022-04-10 15:50:29 -04:00
catbref
c1894d8c00 Improvements to ByteArray to leverage Java 11 'native' Arrays.hashCode and Arrays.compareUnsigned for speed.
Also modified ambiguous ByteArray::new and ByteArray::of to ByteArray::wrap and ByteArray::copyOf.
Modifications to other classes that use ByteArray.
2022-04-10 16:38:02 +01:00
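A minimal sketch of such a wrapper, assuming only the wrap/copyOf naming from the commit message (everything else is illustrative), might look like this:

```java
import java.util.Arrays;

/** Illustrative byte[] wrapper leveraging Java 11's Arrays helpers. */
public final class ByteArraySketch implements Comparable<ByteArraySketch> {

    private final byte[] value;
    private int hash; // cached lazily; 0 means "not computed yet"

    private ByteArraySketch(byte[] value) {
        this.value = value;
    }

    /** Wraps the given array without copying; callers must not modify it afterwards. */
    public static ByteArraySketch wrap(byte[] value) {
        return new ByteArraySketch(value);
    }

    /** Defensive copy for callers that might reuse their array. */
    public static ByteArraySketch copyOf(byte[] value) {
        return new ByteArraySketch(Arrays.copyOf(value, value.length));
    }

    @Override
    public int hashCode() {
        int h = this.hash;
        if (h == 0)
            this.hash = h = Arrays.hashCode(this.value); // replaces a manual hashing loop
        return h;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other)
            return true;
        if (!(other instanceof ByteArraySketch))
            return false;
        return Arrays.equals(this.value, ((ByteArraySketch) other).value);
    }

    @Override
    public int compareTo(ByteArraySketch other) {
        // Unsigned lexicographic comparison, available since Java 9
        return Arrays.compareUnsigned(this.value, other.value);
    }
}
```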
QuickMythril
f7f9cdc518 Merge pull request #83 from aldum/feature/hungarian_translation
fixup grammar; add missing translations
2022-04-09 00:10:37 -04:00
QuickMythril
850d7f8220 add/update translations
credit: johnnyfg (sv), schizo (it), IsBe (nl), Eduardo9999 (es)
2022-04-08 23:57:54 -04:00
aldum
051043283c fixup grammar; add missing translations 2022-04-06 23:21:49 +02:00
QuickMythril
15bc69de01 Merge pull request #82 from JaymenChou/patch-8
Update SysTray_zh_CN.properties
2022-04-05 13:38:13 -04:00
QuickMythril
ee3cfa4d6d fix typo 2022-04-05 13:26:02 -04:00
QuickMythril
df1f3079a5 Merge pull request #81 from JaymenChou/patch-7
Update SysTray_zh_TW.properties
2022-04-05 13:25:06 -04:00
QuickMythril
d9ae8a5552 Merge branch 'master' into patch-8 2022-04-05 13:23:19 -04:00
QuickMythril
2326c31ee7 Merge branch 'master' into patch-7 2022-04-05 13:11:14 -04:00
QuickMythril
91cb0f30dd Updated TransactionValidity translations
added some missing entries, and sorted alphabetically.
2022-04-05 12:51:49 -04:00
QuickMythril
c0307c352c Updated ApiError translations
removed some duplicate entries, and standardized the order
2022-04-05 11:46:32 -04:00
QuickMythril
8fd7c1b313 formatting fix 2022-04-05 11:09:30 -04:00
QuickMythril
b8147659b1 Updated SysTray translations
added some missing entries, and sorted alphabetically.
2022-04-05 10:48:43 -04:00
JaymenChou
7a1bac682f Update SysTray_zh_TW.properties
Add the missing term "PERFORMING_DB_MAINTENANCE" and translate it to Traditional Chinese
2022-04-04 20:36:48 +08:00
JaymenChou
9fdb7c977f Update SysTray_zh_CN.properties
Translate remaining terms to Simplified Chinese
2022-04-04 20:33:59 +08:00
JaymenChou
4f3948323b Update SysTray_zh_TW.properties
Translate the remaining terms to Traditional Chinese
2022-04-04 20:31:19 +08:00
QuickMythril
70fcc1f712 Merge pull request #78 from JaymenChou/patch-4
Create ApiError_zh_CN.properties
2022-04-04 02:49:00 -04:00
JaymenChou
f20fe9199f Update ApiError_zh_CN.properties 2022-04-04 14:36:55 +08:00
QuickMythril
91dee4a3b8 Merge pull request #80 from JaymenChou/patch-6
Create TransactionValidity_zh_CN.properties
2022-04-04 02:17:35 -04:00
QuickMythril
0b89b8084e Merge pull request #79 from JaymenChou/patch-5
Create TransactionValidity_zh_TW.properties
2022-04-04 02:17:24 -04:00
QuickMythril
a5a80302b2 Merge pull request #77 from JaymenChou/patch-3
Create ApiError_zh_TW.properties
2022-04-04 02:17:02 -04:00
QuickMythril
e61a24ee7b removed electrum-ltc.bysh.me
this server is often falsely flagged as phishing by some antivirus software.
2022-04-03 22:32:57 -04:00
JaymenChou
55ed342b59 Create TransactionValidity_zh_CN.properties
Add Simplified Chinese for better understanding of logs
2022-04-03 13:27:52 +08:00
JaymenChou
3c6f79eec0 Create TransactionValidity_zh_TW.properties
Add Traditional Chinese for TransactionValidity logs.
2022-04-03 13:25:32 +08:00
JaymenChou
590800ac1d Create ApiError_zh_CN.properties
Add Simplified Chinese support for API error messages.
Hope it helps in understanding the API!
2022-04-03 12:43:18 +08:00
JaymenChou
95c412b946 Create ApiError_zh_TW.properties
Add Traditional Chinese support to API Responses
2022-04-03 12:40:27 +08:00
CalDescent
a232395750 Merge branch 'master' of github.com:Qortal/qortal 2022-04-01 11:24:56 +01:00
QuickMythril
6edbc8b6a5 add decimal precision to download progress 2022-03-31 13:46:40 -04:00
QuickMythril
f8ffeed302 updated BTC electrumx servers
added new servers, and removed TCP-only servers, closed servers, and versions older than 1.16.0
2022-03-31 11:32:55 -04:00
QuickMythril
e2ee68427c removed TCP electrumx servers 2022-03-31 11:29:54 -04:00
QuickMythril
74ff23239d removed TCP electrumx servers 2022-03-31 11:27:56 -04:00
QuickMythril
f1fa2ba2f6 added SSL electrumx servers 2022-03-31 10:02:31 -04:00
QuickMythril
e1522cec94 updated LTC electrumx servers 2022-03-31 09:58:53 -04:00
QuickMythril
8841b3cbb1 add spanish translations 2022-03-31 08:44:33 -04:00
CalDescent
94260bd93f Decreased the number of retries for missing metadata, to reduce broadcast spam. 2022-03-30 08:23:22 +01:00
CalDescent
15ff8af7ac Don't process trade bots or broadcast presence timestamps if our chain is more than 30 minutes old 2022-03-30 08:11:02 +01:00
CalDescent
d420033b36 Revert "Revert "Add Qortal AT FunctionCodes for getting account level / blocks minted + tests""
This reverts commit 59025b8f47.
2022-03-30 08:07:07 +01:00
CalDescent
bda63f0310 Removed hardcoded "qortal-backup/TradeBotStates.json" from POST /admin/repository/data API, as it's no longer needed now that API keys are required. 2022-03-30 08:06:09 +01:00
QuickMythril
54add26ccb fixed typo 2022-03-25 23:39:41 -04:00
CalDescent
089b068362 Updated AdvancedInstaller project for v3.2.3 2022-03-19 22:38:58 +00:00
CalDescent
fe474b4507 Bump version to 3.2.3 2022-03-19 20:44:41 +00:00
CalDescent
bbe15b563c Added unit test to simulate recent issue.
This fails with the 3.2.2 code but now passes when using the latest fixes.
2022-03-19 20:41:38 +00:00
CalDescent
59025b8f47 Revert "Add Qortal AT FunctionCodes for getting account level / blocks minted + tests"
This reverts commit eb9b94b9c6.
2022-03-19 19:52:14 +00:00
CalDescent
1b42c5edb1 Fixed NPE in runIntegrityCheck()
This feature is disabled by default so can be tidied up later. For now, the unhandled scenario is logged and checking continues.

One name's transactions are too complex for the current integrity check code to verify (MangoSalsa), but that name has been verified manually. All other names pass the automated test.
2022-03-19 19:22:16 +00:00
CalDescent
362335913d Fixed infinite loop in name rebuilding.
If an account is renamed and then at some point renamed back to one of the original names, it confused the names rebuilding code. The current solution is to track the linked names that have already been rebuilt, and then break out of the loop once a name is encountered a second time.
2022-03-19 18:55:19 +00:00
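The cycle-breaking idea can be sketched as below; fetchPreviousName() and the surrounding class are hypothetical stand-ins for the real repository calls:

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of breaking a rename cycle (A -> B -> A) by tracking names already visited.
public class NameChainWalker {

    public interface NameHistory {
        /** Returns the name this one was renamed from, or null if it was registered directly. */
        String fetchPreviousName(String name);
    }

    /** Walks the rename chain backwards, stopping if a name is seen a second time. */
    public static Set<String> collectLinkedNames(String latestName, NameHistory history) {
        Set<String> visited = new HashSet<>();
        String current = latestName;
        while (current != null) {
            if (!visited.add(current))
                break; // already processed this name - a rename cycle, so stop here
            current = history.fetchPreviousName(current);
        }
        return visited;
    }
}
```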
CalDescent
4340dac595 Fixed recently introduced issue in name rebuilding code causing transactions to be unordered.
This is the likely cause of inconsistent name entries across different nodes, as we can't guarantee that every environment will return the same transaction order from the SQL queries.
2022-03-19 18:44:16 +00:00
CalDescent
f3e1fc884c Merge pull request #63 from catbref/master
Add Qortal AT FunctionCodes for getting account level / blocks minted
2022-03-19 11:32:39 +00:00
CalDescent
39c06d8817 Merge pull request #75 from catbref/name-unicode
Unicode / NAME updates.
2022-03-19 11:32:22 +00:00
CalDescent
91cee36c21 Catch and log all exceptions in addStatusToResources()
Some users are seeing 500 errors deriving from this code. This should hopefully allow more info to be obtained, as well as causing it to omit the status for resources that encounter problems.
2022-03-19 11:08:42 +00:00
CalDescent
6bef883942 Removed OpenJDK 11 reference in build-release.sh, as it seems that checksums will not match by default due to timestamps and file orderings.
See: https://dzone.com/articles/reproducible-builds-in-java
2022-03-19 11:05:51 +00:00
CalDescent
25ba2406c0 Updated AdvancedInstaller project for v3.2.2 2022-03-16 19:53:22 +00:00
CalDescent
e4dc8f85a7 Bump version to 3.2.2 2022-03-15 19:57:02 +00:00
CalDescent
12a4a260c8 Handle new sync result case. 2022-03-14 22:04:11 +00:00
CalDescent
268f02b5c3 Added automated test to ensure that the core's default bootstrap hosts are functional.
Whilst not strictly a unit test, this should allow issues with the core's bootstrap servers to be caught quickly.
2022-03-14 21:52:54 +00:00
CalDescent
13eff43b87 Fixed synchronizer issues which caused large re-orgs
Peers without a recent block are removed at the start of the sync process, however, due to the time lag involved in fetching block summaries and comparing the list of peers, some of these could subsequently drop back to a non-recent block and still be chosen as the next peer to sync with. The end result being that nodes could unnecessarily orphan as many as 20 blocks due to syncing with a peer that doesn't have a recent block (but has a couple of high weight blocks after the common block).

This commit adds some additional filtering to avoid this situation.

1) Peers without a recent block are removed as candidates in comparePeers(), allowing for alternate peers to be chosen.
2) After comparePeers() completes, the list is filtered a second time to make sure that all are still recent.
3) Finally, the peer's state is checked one last time in syncToPeerChain(), just before any orphaning takes place.

Whilst just one of the above would probably have been sufficient, the consequences of this bug are so severe that it makes sense to be very thorough.

The only exception to the above is when the node is in "recovery mode", in which case peers without recent blocks are allowed to be included. Items 1 and 3 above do not apply in recovery mode. Item 2 does apply, since the entire comparePeers() functionality is already skipped in a recovery situation due to our chain being out of date.
2022-03-14 21:47:37 +00:00
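A minimal sketch of the "recent block" filtering follows, with an assumed one-hour threshold and illustrative peer accessors (not the actual Synchronizer code):

```java
import java.util.List;
import java.util.stream.Collectors;

// Minimal sketch of filtering sync candidates down to peers with a recent chain tip.
public class RecentPeerFilter {

    public static final long RECENT_BLOCK_THRESHOLD_MS = 60 * 60 * 1000L; // assumed 1 hour

    public interface PeerInfo {
        Long getLastBlockTimestamp();
    }

    public static boolean hasRecentBlock(PeerInfo peer, long now) {
        Long timestamp = peer.getLastBlockTimestamp();
        return timestamp != null && timestamp >= now - RECENT_BLOCK_THRESHOLD_MS;
    }

    /** Re-applied after comparePeers() and again just before orphaning, since a peer's
     *  reported tip can go stale while block summaries are being fetched and compared. */
    public static <P extends PeerInfo> List<P> filterRecent(List<P> peers, long now) {
        return peers.stream()
                .filter(peer -> hasRecentBlock(peer, now))
                .collect(Collectors.toList());
    }
}
```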
catbref
e604a19bce Unicode / NAME updates.
Fix UPDATE_NAME not processing empty 'newName' transactions correctly.
Fix some emoji code-points not being processed correctly.
Updated tests.
Now included ICU4J v70.1 - WARNING: this could add around 10MB to JAR size!
Bumped homoglyph to v1.2.1.
2022-03-14 08:45:32 +00:00
CalDescent
e63e39fe9a Updated AdvancedInstaller project for v3.2.1 2022-03-13 19:39:58 +00:00
CalDescent
584c951824 Bump version to 3.2.1 2022-03-13 18:53:54 +00:00
CalDescent
f0d9982ee4 Made arbitraryDataFileHashResponses final, and use .sort() rather than .stream() to avoid new instance creation. 2022-03-12 09:43:56 +00:00
CalDescent
c65de74d13 Revert "Synchronize arbitrary data list removals, as it seems that SynchronizedList and SynchronizedMap aren't overriding removeIf() with a thread-safe version."
This reverts commit e5f88fe2f4.
2022-03-12 09:40:13 +00:00
CalDescent
df0a9701ba Improved logging in onNetworkGetArbitraryDataFileListMessage() 2022-03-11 16:51:19 +00:00
CalDescent
4ec7b1ff1e Removed time estimations that are no longer correct or relevant. 2022-03-11 16:50:34 +00:00
CalDescent
7d3a465386 Including the number of hashes (even if zero) is now required in GetArbitraryDataFileListMessage, to allow for additional fields. Enough peers should have updated by now for this to be ok. 2022-03-11 16:50:11 +00:00
CalDescent
30347900d9 Tidied up one last place that was accessing immutableConnectedPeers directly. This makes no difference, but helps with code consistency. 2022-03-11 15:28:54 +00:00
CalDescent
e5f88fe2f4 Synchronize arbitrary data list removals, as it seems that SynchronizedList and SynchronizedMap aren't overriding removeIf() with a thread-safe version. 2022-03-11 15:22:34 +00:00
CalDescent
0d0ccfd0ac Small refactor for code simplicity. 2022-03-11 15:11:07 +00:00
CalDescent
9013d11d24 Report as 100% synced if the latest block is within the last 30 mins
This should hopefully reduce confusion due to APIs reporting 99% synced even when fully up to date. The systray should never show this since it already treats blocks in the last 30 mins as synced.
2022-03-11 14:53:10 +00:00
CalDescent
fc5672a161 Use a more tolerant latest block timestamp in the isUpToDate() calls below to reduce misleading systray statuses.
Any block in the last 30 minutes is considered "up to date" for the purposes of displaying statuses.
2022-03-11 14:49:02 +00:00
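The tolerance idea might be sketched like this, assuming the 30-minute window described above; the class and method names are illustrative:

```java
// Minimal sketch of treating any block within the last 30 minutes as "up to date"
// for status-display purposes.
public class SyncStatusSketch {

    private static final long UP_TO_DATE_TOLERANCE_MS = 30 * 60 * 1000L;

    /** Returns true if our latest block timestamp is no older than the tolerance window. */
    public static boolean isUpToDate(long latestBlockTimestamp, long now) {
        return latestBlockTimestamp >= now - UP_TO_DATE_TOLERANCE_MS;
    }

    public static int displayedSyncPercent(long latestBlockTimestamp, long now, int rawPercent) {
        // Report 100% rather than 99% when the chain tip is recent enough,
        // so APIs and the systray agree on "synced".
        return isUpToDate(latestBlockTimestamp, now) ? 100 : rawPercent;
    }
}
```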
CalDescent
221c3629e4 Don't refetch the file list if the fileListCache is empty, since an empty list now means that there are likely to be no files available on disk. 2022-03-11 13:08:37 +00:00
CalDescent
76fc56f1c9 Fetch the file list in getFilenameForHeight() if needed. 2022-03-11 13:07:16 +00:00
CalDescent
8e59aa2885 Peer getter methods renamed to include "immutable", for consistency with underlying lists and also to make it clearer to the callers. 2022-03-11 13:00:47 +00:00
CalDescent
0738dbd613 Avoid direct access to this.connectedPeers, as we need to use the immutable copy. 2022-03-11 12:58:11 +00:00
CalDescent
196ecffaf3 Skip calls to this.logger.trace() in ExecuteProduceConsume.run() if trace logging isn't enabled.
This could very slightly reduce load due to skipping the internal filtering inside log4j. Given that this method is causing major problems with CPU at times, I'm trying to make it as optimized as possible.
2022-03-11 11:59:18 +00:00
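The guard being described is the standard isTraceEnabled() check, sketched here with illustrative method names:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Minimal sketch of guarding trace-level logging so the call into log4j
// (and its internal level/filter checks) is skipped entirely when TRACE is off.
public class TraceGuardExample {

    private static final Logger LOGGER = LogManager.getLogger(TraceGuardExample.class);

    public void onTaskProduced(String taskName, int activeThreadCount) {
        if (LOGGER.isTraceEnabled())
            LOGGER.trace("Produced task [{}] with {} active threads", taskName, activeThreadCount);
    }
}
```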
CalDescent
a0fedbd4b0 Implemented suggestions from catbref to avoid potential thread safety issue in peer arrays. 2022-03-11 11:27:13 +00:00
CalDescent
7c47e22000 Set fileListCache to null when invalidating. 2022-03-11 11:01:29 +00:00
CalDescent
6aad6a1618 fileListCache is now an immutable Map, which is thread safe. Thanks to catbref for this idea. 2022-03-11 10:59:07 +00:00
CalDescent
b764172500 Revert "Hopeful fix for ConcurrentModificationException in BlockArchiveReader.getFilenameForHeight()"
This reverts commit a12ae8ad24.
2022-03-11 10:55:22 +00:00
CalDescent
c185d79672 Loop through all available direct peer connections and try each one in turn.
Also added some extra conditionals to avoid repeated attempts with the same port.
2022-03-09 20:55:27 +00:00
CalDescent
76b8ba91dd Only add an entry to directConnectionInfo if one with this peer-signature combination doesn't already exist. 2022-03-09 20:50:03 +00:00
CalDescent
0418c831e6 Direct connections with peers now prefer those with the highest number of chunks for a resource. Once a connection has been attempted with a peer, remove it from the list so that it isn't attempted again in the same round. 2022-03-09 20:15:26 +00:00
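A sketch of the chunk-count ordering and one-attempt-per-round removal; the classes and connector callback are hypothetical placeholders:

```java
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;

// Minimal sketch of trying direct connections in order of how many chunks each peer holds,
// removing each candidate after one attempt so it isn't retried in the same round.
public class DirectConnectionStrategy {

    public static class PeerChunkInfo {
        public final String peerAddress;
        public final int chunkCount;

        public PeerChunkInfo(String peerAddress, int chunkCount) {
            this.peerAddress = peerAddress;
            this.chunkCount = chunkCount;
        }
    }

    public interface Connector {
        boolean attemptConnection(PeerChunkInfo peer);
    }

    /** Returns true as soon as one direct connection succeeds. Expects a mutable list. */
    public static boolean tryDirectConnections(List<PeerChunkInfo> candidates, Connector connector) {
        // Highest chunk count first - those peers are most likely to serve the whole resource.
        candidates.sort(Comparator.comparingInt((PeerChunkInfo p) -> p.chunkCount).reversed());

        Iterator<PeerChunkInfo> iterator = candidates.iterator();
        while (iterator.hasNext()) {
            PeerChunkInfo peer = iterator.next();
            iterator.remove(); // don't attempt this peer again in the same round
            if (connector.attemptConnection(peer))
                return true;
        }
        return false;
    }
}
```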
CalDescent
4078f94caa Modified GetArbitraryDataFileListMessage to allow requesting peer's address to be optionally included.
This can ultimately be used to notify the serving peer to expect a direct connection from the requesting peer (to allow it to temporarily bypass maxConnections for long enough for the files to be retrieved). Or it could even possibly be used to trigger a reverse connection (from the serving peer to the requesting peer).
2022-03-09 19:58:02 +00:00
CalDescent
a12ae8ad24 Hopeful fix for ConcurrentModificationException in BlockArchiveReader.getFilenameForHeight() 2022-03-09 19:46:50 +00:00
CalDescent
498ca29aab Wait until a successful connection with a peer before tracking the direct request. 2022-03-08 23:07:08 +00:00
CalDescent
ba70e457b6 Default chunk size reduced from 1MB to 0.5MB 2022-03-08 22:44:43 +00:00
CalDescent
d62808fe1d Don't attempt to create the data directory every time an ArbitraryDataFile instance is instantiated. This was using excessive amounts of CPU and disk I/O. 2022-03-08 22:42:07 +00:00
CalDescent
6c14b79dfb Removed bootstrap host that is no longer functional. 2022-03-08 22:30:01 +00:00
CalDescent
631a253bcc Added support for dark theme in loading screen. 2022-03-08 22:29:37 +00:00
CalDescent
4cb63100d3 Drop the ArbitraryPeers table as it's no longer needed 2022-03-06 13:01:09 +00:00
CalDescent
42fcee0cfd Removed all code that interfaced with the ArbitraryPeers table 2022-03-06 13:00:11 +00:00
CalDescent
829a2e937b Removed all arbitrary signature broadcasts 2022-03-06 12:58:01 +00:00
CalDescent
5d7e5e8e59 Dropped support of ARBITRARY_SIGNATURES message handling, as this feature has been superseded by the peerAddress in file list requests. 2022-03-06 12:46:06 +00:00
CalDescent
6f0a0ef324 Small refactor 2022-03-06 12:42:19 +00:00
CalDescent
f7fe91abeb sendOurOnlineAccountsInfo() moved to its own thread, in preparation for mempow 2022-03-06 12:41:54 +00:00
CalDescent
7252e8d160 Deleted presence tests, as they are no longer relevant, and aren't easily adaptable to the new approach. 2022-03-06 12:03:18 +00:00
CalDescent
2630c35f8c Chunk validation now uses MAX_CHUNK_SIZE rather than CHUNK_SIZE, to allow for a smaller CHUNK_SIZE value to be optionally used, without failing the validation of existing resources. 2022-03-06 11:43:28 +00:00
CalDescent
49f466c073 Added missing break; 2022-03-06 11:21:55 +00:00
CalDescent
c198f785e6 Added significant CPU optimizations to ArbitraryDataManager
- Slow down loops that query the db
- Check for new metadata every 5 minutes instead of constantly
- Check for new data every 1 minute instead of constantly

This could be further improved in the future by having block.process() notify the ArbitraryDataManager that there is new data to process. This would avoid the need for the frequent checks/loops, and only a single complete sweep would be needed on node startup (as long as failures are then retried). But I will avoid this additional complexity for now.
2022-03-06 11:21:39 +00:00
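The interval-based throttling might look like the following sketch; the interval constants mirror the commit message, everything else is illustrative:

```java
// Minimal sketch of throttling background checks to fixed intervals instead of
// querying the database on every loop iteration.
public class ThrottledArbitraryChecks {

    private static final long METADATA_CHECK_INTERVAL_MS = 5 * 60 * 1000L; // every 5 minutes
    private static final long DATA_CHECK_INTERVAL_MS = 60 * 1000L;         // every 1 minute

    private long lastMetadataCheck = 0L;
    private long lastDataCheck = 0L;

    public void processOnce(Runnable metadataCheck, Runnable dataCheck) {
        long now = System.currentTimeMillis();

        if (now - lastMetadataCheck >= METADATA_CHECK_INTERVAL_MS) {
            lastMetadataCheck = now;
            metadataCheck.run();
        }

        if (now - lastDataCheck >= DATA_CHECK_INTERVAL_MS) {
            lastDataCheck = now;
            dataCheck.run();
        }
    }
}
```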
CalDescent
5be093dafc Fix for "Synchronizing null%" systray bug introduced in 3.2.0 2022-03-06 11:00:53 +00:00
CalDescent
2c33d5256c Added code accidentally missed out of commit 1b036b7 2022-03-05 20:44:01 +00:00
CalDescent
4448e2b5df Handle case when metadata isn't returned. 2022-03-05 17:39:13 +00:00
CalDescent
146d234dec Additional defensiveness in ArbitraryDataFile.fromHash() to avoid similar future bugs. 2022-03-05 17:25:48 +00:00
CalDescent
18d5c924e6 Fixed bug caused by fetchAllMetadata() 2022-03-05 17:25:14 +00:00
CalDescent
b520838195 Increased default maxNetworkThreadPoolSize from 20 to 32
This will hopefully offset some of the additional network demands from arbitrary data requests.
2022-03-05 17:24:55 +00:00
CalDescent
1b036b763c Major CPU optimization to block minter
Load sorted list of reward share public keys into memory, so that the indexes can be obtained. This is around 100x faster than querying each index separately (and the savings will increase as more keys are added).

For 4150 reward share keys, it was taking around 5000ms to query individually, vs 50ms using this approach.

The main trade off is that these 4150 keys require around 130kB of additional memory when minting (and this will increase proportionally with more minters). However, this one query was often accounting for 50% of the entire core's CPU usage, so the additional memory usage seems insignificant by comparison.

To gain confidence, I ran both old and new approaches side by side, and confirmed that the indexes matched exactly.
2022-03-05 16:10:43 +00:00
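One way to model the single-query index lookup is shown below; the class and key wrapper are illustrative rather than the actual BlockMinter code:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of resolving reward-share key indexes from a single in-memory snapshot
// instead of one repository query per key.
public class RewardShareIndexLookup {

    private final Map<Key, Integer> indexByPublicKey = new HashMap<>();

    /** Build once per minting round from the repository's sorted key list. */
    public RewardShareIndexLookup(List<byte[]> sortedPublicKeys) {
        for (int i = 0; i < sortedPublicKeys.size(); i++)
            indexByPublicKey.put(new Key(sortedPublicKeys.get(i)), i);
    }

    /** O(1) lookup, replacing a per-key database query. Returns null if unknown. */
    public Integer indexOf(byte[] publicKey) {
        return indexByPublicKey.get(new Key(publicKey));
    }

    // byte[] can't be a HashMap key directly, so wrap it with value-based equals/hashCode.
    private static final class Key {
        private final byte[] value;
        private Key(byte[] value) { this.value = value; }

        @Override public boolean equals(Object o) {
            return o instanceof Key && Arrays.equals(this.value, ((Key) o).value);
        }
        @Override public int hashCode() { return Arrays.hashCode(this.value); }
    }
}
```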
CalDescent
8545a8bf0d Automatically fetch metadata for all resources that have it. 2022-03-05 13:00:49 +00:00
CalDescent
f0136a5018 Include the external port when responding ArbitraryDataFileListRequests 2022-03-05 13:00:17 +00:00
CalDescent
6697b3376b Direct peer connections now use the on-demand data retrieved from file list requests, rather than the stale and incomplete ArbitraryPeerData. 2022-03-05 12:59:13 +00:00
CalDescent
ea785f79b8 Removed unnecessary synchronization 2022-03-04 19:02:30 +00:00
CalDescent
0352a09de7 New online accounts are now verified on the OnlineAccountsManager thread rather than on network threads. This is an attempt to reduce the amount of blocked network threads due to signature verification, and is necessary for the upcoming mempow addition. 2022-03-04 17:58:06 +00:00
CalDescent
5b4f15ab2e Transaction importing code moved to TransactionImporter controller class
As with online accounts, no logic changes other than moving transaction queue processing from the controller thread to its own dedicated thread.
2022-03-04 16:47:21 +00:00
CalDescent
fd37c2b76b Moved all online accounts code to a new OnlineAccountsManager controller class
There are no logic changes here other than moving performOnlineAccountsTasks() onto its own thread, so that it's not subject to anything that might be slowing down the main controller thread.
2022-03-04 16:24:04 +00:00
CalDescent
924aa05681 Optimized peer lists
- Removed synchronization from connectedPeers, and replaced it with an unmodifiableList.
- Added additional immutable caches: handshakedPeers and outboundHandshakedPeers

This should greatly reduce the amount of time spent waiting around for access to the connectedPeers array, since it is now immediately accessible without needing to obtain a lock. It also removes calls to stream() which were consuming large amounts of CPU to constantly filter the connected peers down to a list of handshaked peers.

Thanks to @catbref for these great suggestions.
2022-03-04 15:14:12 +00:00
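The immutable-snapshot pattern described here can be sketched as follows; the generic Peer type and method names are placeholders, not the actual Network class:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal sketch: writers rebuild an unmodifiable copy under a lock, readers get a
// stable reference with no locking and no per-call stream() filtering.
public class PeerListSnapshot<Peer> {

    private final List<Peer> connectedPeers = new ArrayList<>();       // guarded by synchronized methods
    private volatile List<Peer> immutableConnectedPeers = Collections.emptyList();

    public synchronized void addPeer(Peer peer) {
        this.connectedPeers.add(peer);
        rebuildSnapshot();
    }

    public synchronized void removePeer(Peer peer) {
        this.connectedPeers.remove(peer);
        rebuildSnapshot();
    }

    private void rebuildSnapshot() {
        // New unmodifiable copy published via the volatile field; readers never block.
        this.immutableConnectedPeers = Collections.unmodifiableList(new ArrayList<>(this.connectedPeers));
    }

    /** Readers use this directly - no lock needed, and derived caches (e.g. handshaked
     *  peers) can be maintained the same way. */
    public List<Peer> getImmutableConnectedPeers() {
        return this.immutableConnectedPeers;
    }
}
```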
CalDescent
84b42210f1 Use ArbitraryDataFileRequestThreads only - instead of reusing file list response threads. 2022-03-04 13:34:16 +00:00
CalDescent
941080c395 Rework of arbitraryDataFileHashResponses to use a list (limited to 1000 items) rather than a map. Sort the list by routes with the lowest number of peer hops first, to try to prioritize those which are easiest and quickest to reach. 2022-03-04 13:33:17 +00:00
CalDescent
35d9a10cf4 Avoid logging if there are no remaining transaction signatures to validate. There was too much log spam, none of which was particularly useful. 2022-03-04 12:03:58 +00:00
CalDescent
7c181379b4 Added more granularity to logging, to differentiate between signature validation and general processing/importing, as well as showing counts of the transactions being processed in each round. 2022-03-04 11:12:23 +00:00
CalDescent
f9576d8afb Further optimizations to Controller.processIncomingTransactionsQueue()
- Signature validation is now able to run concurrently with synchronization, to reduce the chances of the queue building up, and to speed up the propagation of new transactions. There's no need to break out of the loop - or avoid looping in the first place - since signatures can be validated without holding the blockchain lock.
- A blockchain lock isn't even attempted if a sync request is pending.
2022-03-04 11:05:58 +00:00
CalDescent
6a8a113fa1 Merge pull request #74 from catbref/presence-txns-removal
PRESENCE transactions changed to always fail signature validation
2022-03-04 10:33:11 +00:00
CalDescent
ef59c34165 Added missing "break" which was causing additional unnecessary debug logging. Originally introduced due to a merge conflict with the metadata branch. 2022-03-04 10:28:44 +00:00
CalDescent
a19e1f06c0 Merge pull request #73 from catbref/incoming-txns-rework
Reworking of Controller.processIncomingTransactionsQueue()
2022-03-04 09:45:29 +00:00
catbref
a9371f0a90 In Controller.processIncomingTransactionsQueue(), don't bother with 2nd-phase of locking blockchain and importing if there are no valid signature transactions to actually import 2022-03-03 20:32:27 +00:00
catbref
a7a94e49e8 PRESENCE transactions changed to always fail signature validation 2022-03-03 20:25:58 +00:00
catbref
affd100298 Reworking of Controller.processIncomingTransactionsQueue()
Main changes are:
* Check transaction signature validity in initial round, without blockchain lock
* Convert List of incoming transactions to a Map so we can record whether we have already validated a transaction's signature, saving rechecking effort
* Add invalid signature transactions to invalidUnconfirmedTransactions map with INVALID_TRANSACTION_RECHECK_INTERVAL expiry (~60min)
* Other minor changes related to List->Map change and Java object synchronization
2022-03-03 20:21:04 +00:00
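A minimal sketch of this two-phase flow, assuming the blockchain lock is a ReentrantLock; the Transaction type parameter and Handler callbacks are placeholders for the real Qortal classes:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Signatures are checked without the blockchain lock; only transactions that passed
// are imported under the lock, and the lock isn't taken at all if nothing is importable.
public class IncomingTransactionQueueSketch<Transaction> {

    private static final long INVALID_RECHECK_INTERVAL_MS = 60 * 60 * 1000L; // ~60 minutes

    // Value records whether the signature has already been validated, to avoid rechecking.
    private final Map<Transaction, Boolean> incomingTransactions = new LinkedHashMap<>();
    // Transactions with invalid signatures, remembered until their recheck expiry.
    private final Map<Transaction, Long> invalidUnconfirmedTransactions = new ConcurrentHashMap<>();

    public interface Handler<T> {
        boolean isSignatureValid(T transaction);
        void importTransaction(T transaction); // called while holding the blockchain lock
    }

    public synchronized void add(Transaction transaction) {
        this.incomingTransactions.putIfAbsent(transaction, Boolean.FALSE);
    }

    public void processQueue(Handler<Transaction> handler, ReentrantLock blockchainLock) {
        long now = System.currentTimeMillis();

        // Phase 1: signature validation, no blockchain lock required.
        synchronized (this) {
            this.incomingTransactions.entrySet().removeIf(entry -> {
                if (Boolean.TRUE.equals(entry.getValue()))
                    return false; // signature already validated in an earlier round
                if (!handler.isSignatureValid(entry.getKey())) {
                    this.invalidUnconfirmedTransactions.put(entry.getKey(), now + INVALID_RECHECK_INTERVAL_MS);
                    return true;  // drop invalid-signature transactions from the queue
                }
                entry.setValue(Boolean.TRUE);
                return false;
            });

            if (this.incomingTransactions.isEmpty())
                return; // nothing valid to import, so don't bother taking the lock at all
        }

        // Phase 2: import under the blockchain lock; skip this round if the lock is busy.
        if (!blockchainLock.tryLock())
            return;
        try {
            synchronized (this) {
                this.incomingTransactions.keySet().forEach(handler::importTransaction);
                this.incomingTransactions.clear();
            }
        } finally {
            blockchainLock.unlock();
        }
    }
}
```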
CalDescent
fd6ec301a4 Updated AdvancedInstaller project for v3.2.0 2022-03-03 20:02:30 +00:00
CalDescent
5666e6084b Bump version to 3.2.0 2022-03-02 20:04:49 +00:00
CalDescent
69309c437e Tightened up the content security policy for non HTML files. 2022-03-01 20:36:34 +00:00
CalDescent
e392e4d344 Allow eval(), setTimeout(), etc, to enable various QDN sites to function correctly. The existing sandboxing should be locking this down enough already. Limited to .html and .htm files only. 2022-03-01 20:35:56 +00:00
CalDescent
bd53856927 Disabled auto fetching of metadata. To be re-enabled at a later date. 2022-03-01 20:26:09 +00:00
CalDescent
cbd1018ecf Allow identical data to be published if the metadata differs. 2022-03-01 20:22:47 +00:00
CalDescent
46606152eb /arbitrary/metadata/* endpoint now returns ArbitraryResourceMetadata rather than a raw JSON string. 2022-03-01 20:22:20 +00:00
CalDescent
e6f93e0a08 Added categoryName to ArbitraryResourceMetadata, along with the existing category ID 2022-03-01 20:19:08 +00:00
CalDescent
8d81f1822f Merge branch 'master' into qdn-metadata
# Conflicts:
#	src/main/java/org/qortal/controller/Controller.java
#	src/main/java/org/qortal/network/message/Message.java
2022-02-28 20:10:39 +00:00
CalDescent
5903607363 Merge pull request #72 from catbref/presence-v2
Presence v2
2022-02-27 22:01:59 +00:00
catbref
590a8f52db Remove future work comment from Controller 2022-02-27 16:57:26 +00:00
catbref
ecac47d1bc Also notify TradePresenceWebsocket (using TradePresenceEvent) when bridging old PRESENCE txns 2022-02-27 16:56:17 +00:00
catbref
3b477ef637 Fix JAXB marshalling error (duplicate tradeAddress) in TradePresenceWebSocket. No need to send signature. Make sure publicKey is sent in Base58, not Base64. 2022-02-27 16:56:17 +00:00
catbref
e2ef5b2ef3 Missed change from last commit: incorrect logic in TradePresenceWebSocket 2022-02-27 16:56:17 +00:00
catbref
1d59feeb72 Created /websockets/crosschain/tradepresence to replace /websockets/presence 2022-02-27 16:55:30 +00:00
catbref
c53dd31765 Tidy up of trade presence timestamp generation & checking. Added tests. Renamed "online trades" to "trade presences" 2022-02-27 16:54:42 +00:00
catbref
4c02081992 Tidy up TradeBot presence logging. Decorate API endpoints /crosschain/tradeoffers and /crosschain/trade with presence expiry timestamps 2022-02-27 16:54:42 +00:00
catbref
cb57af3c53 Bugfixes to online trade sigs + bridging from PRESENCE transactions 2022-02-27 16:54:42 +00:00
catbref
01d810fc00 Initial effort at migrating PRESENCE transactions to dedicated network messages 2022-02-27 16:54:42 +00:00
CalDescent
8c2a9279ee Return metadata in various /arbitrary APIs if the "includemetadata" parameter is included.
This is very inefficient and will soon be replaced with dedicated ArbitraryResources / ArbitraryMetadata tables. But this is acceptable in the short term, especially if limit and offset are used.
2022-02-27 09:09:18 +00:00
CalDescent
0d65448f3d Request all metadata automatically. 2022-02-27 08:20:39 +00:00
CalDescent
9da2b3c11a Don't respond to file list requests with just the metadata file.
We have the separate metadata protocol for this now.
2022-02-27 07:28:11 +00:00
CalDescent
95400da977 Fixed typo in various tests (copy and paste error) 2022-02-26 22:10:55 +00:00
CalDescent
dc41dc4c69 Tags now use an array of strings, rather than a single string. 2022-02-26 22:09:07 +00:00
CalDescent
a5c11d4c23 Reduced "Ignoring hash list request" logs from DEBUG to TRACE 2022-02-26 16:10:44 +00:00
CalDescent
878394535e Improvements relating to fetching metadata
- Rate limiter is disabled when using the API
- fetchArbitraryMetadata() returns the actual metadata content rather than a boolean
- Exceptions are thrown on certain errors, rather than returning null
2022-02-26 16:10:26 +00:00
CalDescent
35dba27a55 Fixed issue due to not updating arbitraryMetadataRequests when receiving the metadata file. 2022-02-26 16:07:06 +00:00
CalDescent
f22ad13fa9 Merge branch 'master' into qdn-metadata
This involved a slight rewrite to remove the "includeMetadataOnly" boolean. Metadata is now always excluded, otherwise it complicates the caching too much.

# Conflicts:
#	src/main/java/org/qortal/api/resource/ArbitraryResource.java
#	src/main/java/org/qortal/controller/arbitrary/ArbitraryDataStorageManager.java
2022-02-26 14:39:20 +00:00
CalDescent
aa2e5cb87b Merge branch 'hosted-resources-search' 2022-02-26 14:05:52 +00:00
CalDescent
7740f3da7e Small formatting tweaks, for consistency with existing code. 2022-02-26 14:05:28 +00:00
CalDescent
badb576991 Fixed exception when identifier is null. Also handling null names as this may be a future scenario. 2022-02-26 14:04:35 +00:00
CalDescent
c65a63fc7e Fixed "query" parameter error in swagger documentation 2022-02-26 13:59:53 +00:00
proto
782904a971 Improvements to the search on hosted resources
1) use the cached version instead of rescanning all the files
2) separate the loading logic (which includes file scanning) from the listing logic
2022-02-22 17:54:08 +01:00
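A cache-backed search of this kind might be sketched as follows; HostedTransaction, its fields and the matching rules are illustrative, not the actual ArbitraryDataStorageManager types:

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

// Minimal sketch of searching an already-built cache of hosted transactions instead of
// rescanning the file system on every request.
public class HostedResourceSearch {

    public static class HostedTransaction {
        public final String name;
        public final String identifier;

        public HostedTransaction(String name, String identifier) {
            this.name = name;
            this.identifier = identifier;
        }
    }

    /** Case-insensitive match against name or identifier, with simple limit/offset paging. */
    public static List<HostedTransaction> search(List<HostedTransaction> cached, String query,
                                                 Integer limit, Integer offset) {
        String needle = query.toLowerCase(Locale.ROOT);
        return cached.stream()
                .filter(tx -> (tx.name != null && tx.name.toLowerCase(Locale.ROOT).contains(needle))
                        || (tx.identifier != null && tx.identifier.toLowerCase(Locale.ROOT).contains(needle)))
                .skip(offset != null ? offset : 0)
                .limit(limit != null ? limit : Long.MAX_VALUE)
                .collect(Collectors.toList());
    }
}
```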
proto
a3753c01bc Add search functionality to hosted resources 2022-02-22 15:50:46 +01:00
CalDescent
acddf36467 Handle missing includeMetadata parameter. 2022-02-13 19:27:12 +00:00
CalDescent
166d32032a Fixed message IDs. 2022-02-13 19:22:20 +00:00
CalDescent
e4238a62c9 Excluded metadata-only transactions from the data management page (but added an API parameter to allow them to optionally be included).
This ensures that the list will only show resources where there is at least 1 chunk.
2022-02-13 19:21:16 +00:00
CalDescent
ad9c466712 Fall back to UNCATEGORIZED if the parsed category doesn't match any available categories.
This allows for deletion of categories, as the resources will just move into UNCATEGORIZED until they are next updated.
2022-02-13 18:10:56 +00:00
CalDescent
a3d31bbaf1 Category updates based on feedback so far. 2022-02-13 17:56:47 +00:00
CalDescent
4821139501 Merge branch 'master' into qdn-metadata
# Conflicts:
#	src/main/java/org/qortal/controller/arbitrary/ArbitraryDataFileListManager.java
2022-02-13 15:50:12 +00:00
CalDescent
dedf65bd4b Added initial protocol methods for metadata requests and forwarding. Not tested yet. 2022-01-22 20:24:37 +00:00
CalDescent
a79ed02ccf Added initial (unfinished) category list, as well as the GET /arbitrary/categories API, and converted the category field from a string to an enum 2022-01-22 12:11:16 +00:00
CalDescent
79f87babdf Limit the metadata string lengths 2022-01-21 22:52:31 +00:00
CalDescent
f296d5138b Allow metadata to optionally be included with any arbitrary resource. 2022-01-21 21:14:28 +00:00
catbref
eb9b94b9c6 Add Qortal AT FunctionCodes for getting account level / blocks minted + tests 2021-12-04 16:36:05 +00:00
153 changed files with 10812 additions and 2970 deletions

View File

@@ -17,10 +17,10 @@
<ROW Property="Manufacturer" Value="Qortal"/>
<ROW Property="MsiLogging" MultiBuildValue="DefaultBuild:vp"/>
<ROW Property="NTP_GOOD" Value="false"/>
<ROW Property="ProductCode" Value="1033:{5FC8DCC3-BF9C-4D72-8C6D-940340ACD1B8} 1049:{1DEF14AB-2397-4517-B3C8-13221B921753} 2052:{B9E3C1DF-C92D-440A-9A21-869582F8585F} 2057:{91D69E7B-CA7D-4449-8E8A-F22DCEA546FC} " Type="16"/>
<ROW Property="ProductCode" Value="1033:{DEA09B3D-AFFA-409F-B208-E148E9A9005D} 1049:{79180B3D-8A6B-4DED-BD60-1A58F941E1DE} 2052:{90F65B96-22CD-41FA-82B0-E65183EA1EF9} 2057:{AB4872AC-E794-42BD-9305-8DFD06243A88} " Type="16"/>
<ROW Property="ProductLanguage" Value="2057"/>
<ROW Property="ProductName" Value="Qortal"/>
<ROW Property="ProductVersion" Value="3.1.1" Type="32"/>
<ROW Property="ProductVersion" Value="3.2.3" Type="32"/>
<ROW Property="RECONFIG_NTP" Value="true"/>
<ROW Property="REMOVE_BLOCKCHAIN" Value="YES" Type="4"/>
<ROW Property="REPAIR_BLOCKCHAIN" Value="YES" Type="4"/>
@@ -212,7 +212,7 @@
<ROW Component="ADDITIONAL_LICENSE_INFO_71" ComponentId="{12A3ADBE-BB7A-496C-8869-410681E6232F}" Directory_="jdk.zipfs_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_71" Type="0"/>
<ROW Component="ADDITIONAL_LICENSE_INFO_8" ComponentId="{D53AD95E-CF96-4999-80FC-5812277A7456}" Directory_="java.naming_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_8" Type="0"/>
<ROW Component="ADDITIONAL_LICENSE_INFO_9" ComponentId="{6B7EA9B0-5D17-47A8-B78C-FACE86D15E01}" Directory_="java.net.http_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_9" Type="0"/>
<ROW Component="AI_CustomARPName" ComponentId="{42F5EC19-E46F-4299-B9F7-6E1112F6E4FB}" Directory_="APPDIR" Attributes="260" KeyPath="DisplayName" Options="1"/>
<ROW Component="AI_CustomARPName" ComponentId="{63FD92A7-4AE2-46A0-9B83-EB27DA636C65}" Directory_="APPDIR" Attributes="260" KeyPath="DisplayName" Options="1"/>
<ROW Component="AI_ExePath" ComponentId="{3644948D-AE0B-41BB-9FAF-A79E70490A08}" Directory_="APPDIR" Attributes="260" KeyPath="AI_ExePath"/>
<ROW Component="APPDIR" ComponentId="{680DFDDE-3FB4-47A5-8FF5-934F576C6F91}" Directory_="APPDIR" Attributes="0"/>
<ROW Component="AccessBridgeCallbacks.h" ComponentId="{288055D1-1062-47A3-AA44-5601B4E38AED}" Directory_="bridge_Dir" Attributes="0" KeyPath="AccessBridgeCallbacks.h" Type="0"/>

pom.xml
View File

@@ -3,11 +3,11 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.qortal</groupId>
<artifactId>qortal</artifactId>
<version>3.1.1</version>
<version>3.2.3</version>
<packaging>jar</packaging>
<properties>
<skipTests>true</skipTests>
<altcoinj.version>bf9fb80</altcoinj.version>
<altcoinj.version>6628cfd</altcoinj.version>
<bitcoinj.version>0.15.10</bitcoinj.version>
<bouncycastle.version>1.64</bouncycastle.version>
<build.timestamp>${maven.build.timestamp}</build.timestamp>
@@ -21,6 +21,8 @@
<dagger.version>1.2.2</dagger.version>
<guava.version>28.1-jre</guava.version>
<hsqldb.version>2.5.1</hsqldb.version>
<homoglyph.version>1.2.1</homoglyph.version>
<icu4j.version>70.1</icu4j.version>
<upnp.version>1.1</upnp.version>
<jersey.version>2.29.1</jersey.version>
<jetty.version>9.4.29.v20200521</jetty.version>
@@ -442,7 +444,7 @@
</dependency>
<!-- For Litecoin, etc. support, requires bitcoinj -->
<dependency>
<groupId>com.github.jjos2372</groupId>
<groupId>com.github.qortal</groupId>
<artifactId>altcoinj</artifactId>
<version>${altcoinj.version}</version>
</dependency>
@@ -568,7 +570,18 @@
<dependency>
<groupId>net.codebox</groupId>
<artifactId>homoglyph</artifactId>
<version>1.2.0</version>
<version>${homoglyph.version}</version>
</dependency>
<!-- Unicode support -->
<dependency>
<groupId>com.ibm.icu</groupId>
<artifactId>icu4j</artifactId>
<version>${icu4j.version}</version>
</dependency>
<dependency>
<groupId>com.ibm.icu</groupId>
<artifactId>icu4j-charset</artifactId>
<version>${icu4j.version}</version>
</dependency>
<!-- Jetty -->
<dependency>

View File

@@ -205,6 +205,12 @@ public class Account {
return false;
}
/** Returns account's blockMinted (0+) or null if account not found in repository. */
public Integer getBlocksMinted() throws DataException {
return this.repository.getAccountRepository().getMintedBlockCount(this.address);
}
/** Returns whether account can build reward-shares.
* <p>
* To be able to create reward-shares, the account needs to pass at least one of these tests:<br>

View File

@@ -40,13 +40,7 @@ import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.servlet.ServletContainer;
import org.qortal.api.resource.AnnotationPostProcessor;
import org.qortal.api.resource.ApiDefinition;
import org.qortal.api.websocket.ActiveChatsWebSocket;
import org.qortal.api.websocket.AdminStatusWebSocket;
import org.qortal.api.websocket.BlocksWebSocket;
import org.qortal.api.websocket.ChatMessagesWebSocket;
import org.qortal.api.websocket.PresenceWebSocket;
import org.qortal.api.websocket.TradeBotWebSocket;
import org.qortal.api.websocket.TradeOffersWebSocket;
import org.qortal.api.websocket.*;
import org.qortal.settings.Settings;
public class ApiService {
@@ -212,6 +206,9 @@ public class ApiService {
context.addServlet(ChatMessagesWebSocket.class, "/websockets/chat/messages");
context.addServlet(TradeOffersWebSocket.class, "/websockets/crosschain/tradeoffers");
context.addServlet(TradeBotWebSocket.class, "/websockets/crosschain/tradebot");
context.addServlet(TradePresenceWebSocket.class, "/websockets/crosschain/tradepresence");
// Deprecated
context.addServlet(PresenceWebSocket.class, "/websockets/presence");
// Start server

View File

@@ -24,9 +24,9 @@ public class NodeStatus {
this.isMintingPossible = Controller.getInstance().isMintingPossible();
this.syncPercent = Synchronizer.getInstance().getSyncPercent();
this.isSynchronizing = this.syncPercent != null;
this.isSynchronizing = Synchronizer.getInstance().isSynchronizing();
this.numberOfConnections = Network.getInstance().getHandshakedPeers().size();
this.numberOfConnections = Network.getInstance().getImmutableHandshakedPeers().size();
this.height = Controller.getInstance().getChainHeight();
}

View File

@@ -0,0 +1,29 @@
package org.qortal.api.model.crosschain;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
import io.swagger.v3.oas.annotations.media.Schema;
@XmlAccessorType(XmlAccessType.FIELD)
public class RavencoinSendRequest {
@Schema(description = "Ravencoin BIP32 extended private key", example = "tprv___________________________________________________________________________________________________________")
public String xprv58;
@Schema(description = "Recipient's Ravencoin address ('legacy' P2PKH only)", example = "1RvnCoinEaterAddressDontSendf59kuE")
public String receivingAddress;
@Schema(description = "Amount of RVN to send", type = "number")
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
public long ravencoinAmount;
@Schema(description = "Transaction fee per byte (optional). Default is 0.00000100 RVN (100 sats) per byte", example = "0.00000100", type = "number")
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
public Long feePerByte;
public RavencoinSendRequest() {
}
}

View File

@@ -30,7 +30,7 @@ import org.qortal.api.Security;
import org.qortal.api.model.ApiOnlineAccount;
import org.qortal.api.model.RewardShareKeyRequest;
import org.qortal.asset.Asset;
import org.qortal.controller.Controller;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.AccountData;
import org.qortal.data.account.RewardShareData;
@@ -156,7 +156,7 @@ public class AddressesResource {
)
@ApiErrors({ApiError.PUBLIC_KEY_NOT_FOUND, ApiError.REPOSITORY_ISSUE})
public List<ApiOnlineAccount> getOnlineAccounts() {
List<OnlineAccountData> onlineAccounts = Controller.getInstance().getOnlineAccounts();
List<OnlineAccountData> onlineAccounts = OnlineAccountsManager.getInstance().getOnlineAccounts();
// Map OnlineAccountData entries to OnlineAccount via reward-share data
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -191,7 +191,7 @@ public class AddressesResource {
)
@ApiErrors({ApiError.PUBLIC_KEY_NOT_FOUND, ApiError.REPOSITORY_ISSUE})
public List<OnlineAccountLevel> getOnlineAccountsByLevel() {
List<OnlineAccountData> onlineAccounts = Controller.getInstance().getOnlineAccounts();
List<OnlineAccountData> onlineAccounts = OnlineAccountsManager.getInstance().getOnlineAccounts();
try (final Repository repository = RepositoryManager.getRepository()) {
List<OnlineAccountLevel> onlineAccountLevels = new ArrayList<>();

View File

@@ -35,7 +35,6 @@ import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.appender.RollingFileAppender;
import org.checkerframework.checker.units.qual.A;
import org.qortal.account.Account;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.api.*;
@@ -514,7 +513,7 @@ public class AdminResource {
PeerAddress peerAddress = PeerAddress.fromString(targetPeerAddress);
InetSocketAddress resolvedAddress = peerAddress.toSocketAddress();
List<Peer> peers = Network.getInstance().getHandshakedPeers();
List<Peer> peers = Network.getInstance().getImmutableHandshakedPeers();
Peer targetPeer = peers.stream().filter(peer -> peer.getResolvedAddress().equals(resolvedAddress)).findFirst().orElse(null);
if (targetPeer == null)
@@ -589,10 +588,6 @@ public class AdminResource {
public String importRepository(@HeaderParam(Security.API_KEY_HEADER) String apiKey, String filename) {
Security.checkApiCallAllowed(request);
// Hard-coded because it's too dangerous to allow user-supplied filenames in weaker security contexts
if (Settings.getInstance().getApiKey() == null)
filename = "qortal-backup/TradeBotStates.json";
try (final Repository repository = RepositoryManager.getRepository()) {
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();

View File

@@ -12,6 +12,7 @@ import io.swagger.v3.oas.annotations.security.SecurityRequirement;
import io.swagger.v3.oas.annotations.tags.Tag;
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
@@ -33,13 +34,14 @@ import org.qortal.api.resource.TransactionsResource.ConfirmationStatus;
import org.qortal.arbitrary.*;
import org.qortal.arbitrary.ArbitraryDataFile.ResourceIdType;
import org.qortal.arbitrary.exception.MissingDataException;
import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
import org.qortal.arbitrary.misc.Category;
import org.qortal.arbitrary.misc.Service;
import org.qortal.controller.Controller;
import org.qortal.controller.arbitrary.ArbitraryDataStorageManager;
import org.qortal.controller.arbitrary.ArbitraryMetadataManager;
import org.qortal.data.account.AccountData;
import org.qortal.data.arbitrary.ArbitraryResourceInfo;
import org.qortal.data.arbitrary.ArbitraryResourceNameInfo;
import org.qortal.data.arbitrary.ArbitraryResourceStatus;
import org.qortal.data.arbitrary.*;
import org.qortal.data.naming.NameData;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
@@ -88,7 +90,8 @@ public class ArbitraryResource {
@Parameter(ref = "limit") @QueryParam("limit") Integer limit,
@Parameter(ref = "offset") @QueryParam("offset") Integer offset,
@Parameter(ref = "reverse") @QueryParam("reverse") Boolean reverse,
@Parameter(description = "Include status") @QueryParam("includestatus") Boolean includeStatus) {
@Parameter(description = "Include status") @QueryParam("includestatus") Boolean includeStatus,
@Parameter(description = "Include metadata") @QueryParam("includemetadata") Boolean includeMetadata) {
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -110,9 +113,12 @@ public class ArbitraryResource {
return new ArrayList<>();
}
if (includeStatus != null && includeStatus == true) {
if (includeStatus != null && includeStatus) {
resources = this.addStatusToResources(resources);
}
if (includeMetadata != null && includeMetadata) {
resources = this.addMetadataToResources(resources);
}
return resources;
@@ -140,7 +146,8 @@ public class ArbitraryResource {
@Parameter(ref = "limit") @QueryParam("limit") Integer limit,
@Parameter(ref = "offset") @QueryParam("offset") Integer offset,
@Parameter(ref = "reverse") @QueryParam("reverse") Boolean reverse,
@Parameter(description = "Include status") @QueryParam("includestatus") Boolean includeStatus) {
@Parameter(description = "Include status") @QueryParam("includestatus") Boolean includeStatus,
@Parameter(description = "Include metadata") @QueryParam("includemetadata") Boolean includeMetadata) {
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -153,9 +160,12 @@ public class ArbitraryResource {
return new ArrayList<>();
}
if (includeStatus != null && includeStatus == true) {
if (includeStatus != null && includeStatus) {
resources = this.addStatusToResources(resources);
}
if (includeMetadata != null && includeMetadata) {
resources = this.addMetadataToResources(resources);
}
return resources;
@@ -182,7 +192,8 @@ public class ArbitraryResource {
@Parameter(ref = "limit") @QueryParam("limit") Integer limit,
@Parameter(ref = "offset") @QueryParam("offset") Integer offset,
@Parameter(ref = "reverse") @QueryParam("reverse") Boolean reverse,
@Parameter(description = "Include status") @QueryParam("includestatus") Boolean includeStatus) {
@Parameter(description = "Include status") @QueryParam("includestatus") Boolean includeStatus,
@Parameter(description = "Include metadata") @QueryParam("includemetadata") Boolean includeMetadata) {
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -206,9 +217,13 @@ public class ArbitraryResource {
List<ArbitraryResourceInfo> resources = repository.getArbitraryRepository()
.getArbitraryResources(service, identifier, name, defaultRes, null, null, reverse);
if (includeStatus != null && includeStatus == true) {
if (includeStatus != null && includeStatus) {
resources = this.addStatusToResources(resources);
}
if (includeMetadata != null && includeMetadata) {
resources = this.addMetadataToResources(resources);
}
creatorName.resources = resources;
}
}
@@ -390,6 +405,28 @@ public class ArbitraryResource {
return Settings.getInstance().isRelayModeEnabled();
}
@GET
@Path("/categories")
@Operation(
summary = "List arbitrary transaction categories",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.APPLICATION_JSON, schema = @Schema(implementation = ArbitraryCategoryInfo.class))
)
}
)
@ApiErrors({ApiError.REPOSITORY_ISSUE})
public List<ArbitraryCategoryInfo> getCategories() {
List<ArbitraryCategoryInfo> categories = new ArrayList<>();
for (Category category : Category.values()) {
ArbitraryCategoryInfo arbitraryCategory = new ArbitraryCategoryInfo();
arbitraryCategory.id = category.toString();
arbitraryCategory.name = category.getName();
categories.add(arbitraryCategory);
}
return categories;
}
@GET
@Path("/hosted/transactions")
@Operation(
@@ -431,15 +468,24 @@ public class ArbitraryResource {
public List<ArbitraryResourceInfo> getHostedResources(
@HeaderParam(Security.API_KEY_HEADER) String apiKey,
@Parameter(description = "Include status") @QueryParam("includestatus") Boolean includeStatus,
@Parameter(description = "Include metadata") @QueryParam("includemetadata") Boolean includeMetadata,
@Parameter(ref = "limit") @QueryParam("limit") Integer limit,
@Parameter(ref = "offset") @QueryParam("offset") Integer offset) {
@Parameter(ref = "offset") @QueryParam("offset") Integer offset,
@QueryParam("query") String query) {
Security.checkApiCallAllowed(request);
List<ArbitraryResourceInfo> resources = new ArrayList<>();
try (final Repository repository = RepositoryManager.getRepository()) {
List<ArbitraryTransactionData> transactionDataList;
if (query == null || query.equals("")) {
transactionDataList = ArbitraryDataStorageManager.getInstance().listAllHostedTransactions(repository, limit, offset);
} else {
transactionDataList = ArbitraryDataStorageManager.getInstance().searchHostedTransactions(repository,query, limit, offset);
}
List<ArbitraryTransactionData> transactionDataList = ArbitraryDataStorageManager.getInstance().listAllHostedTransactions(repository, limit, offset);
for (ArbitraryTransactionData transactionData : transactionDataList) {
ArbitraryResourceInfo arbitraryResourceInfo = new ArbitraryResourceInfo();
arbitraryResourceInfo.name = transactionData.getName();
@@ -450,9 +496,12 @@ public class ArbitraryResource {
}
}
if (includeStatus != null && includeStatus == true) {
if (includeStatus != null && includeStatus) {
resources = this.addStatusToResources(resources);
}
if (includeMetadata != null && includeMetadata) {
resources = this.addMetadataToResources(resources);
}
return resources;
@@ -461,6 +510,8 @@ public class ArbitraryResource {
}
}
@DELETE
@Path("/resource/{service}/{name}/{identifier}")
@Operation(
@@ -624,6 +675,54 @@ public class ArbitraryResource {
}
// Metadata
@GET
@Path("/metadata/{service}/{name}/{identifier}")
@Operation(
summary = "Fetch raw metadata from resource with supplied service, name, identifier, and relative path",
responses = {
@ApiResponse(
description = "Path to file structure containing requested data",
content = @Content(
mediaType = MediaType.APPLICATION_JSON,
schema = @Schema(
implementation = ArbitraryDataTransactionMetadata.class
)
)
)
}
)
@SecurityRequirement(name = "apiKey")
public ArbitraryResourceMetadata getMetadata(@HeaderParam(Security.API_KEY_HEADER) String apiKey,
@PathParam("service") Service service,
@PathParam("name") String name,
@PathParam("identifier") String identifier) {
Security.checkApiCallAllowed(request);
ArbitraryDataResource resource = new ArbitraryDataResource(name, ResourceIdType.NAME, service, identifier);
try {
ArbitraryDataTransactionMetadata transactionMetadata = ArbitraryMetadataManager.getInstance().fetchMetadata(resource, false);
if (transactionMetadata != null) {
ArbitraryResourceMetadata resourceMetadata = ArbitraryResourceMetadata.fromTransactionMetadata(transactionMetadata);
if (resourceMetadata != null) {
return resourceMetadata;
}
else {
// The metadata file doesn't contain title, description, category, or tags
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FILE_NOT_FOUND);
}
}
} catch (IllegalArgumentException e) {
// No metadata exists for this resource
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.FILE_NOT_FOUND, e.getMessage());
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA);
}
// Upload data at supplied path
@@ -656,6 +755,10 @@ public class ArbitraryResource {
public String post(@HeaderParam(Security.API_KEY_HEADER) String apiKey,
@PathParam("service") String serviceString,
@PathParam("name") String name,
@QueryParam("title") String title,
@QueryParam("description") String description,
@QueryParam("tags") List<String> tags,
@QueryParam("category") Category category,
String path) {
Security.checkApiCallAllowed(request);
@@ -663,7 +766,8 @@ public class ArbitraryResource {
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.INVALID_CRITERIA, "Path not supplied");
}
return this.upload(Service.valueOf(serviceString), name, null, path, null, null, false);
return this.upload(Service.valueOf(serviceString), name, null, path, null, null, false,
title, description, tags, category);
}
@POST
@@ -696,6 +800,10 @@ public class ArbitraryResource {
@PathParam("service") String serviceString,
@PathParam("name") String name,
@PathParam("identifier") String identifier,
@QueryParam("title") String title,
@QueryParam("description") String description,
@QueryParam("tags") List<String> tags,
@QueryParam("category") Category category,
String path) {
Security.checkApiCallAllowed(request);
@@ -703,7 +811,8 @@ public class ArbitraryResource {
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.INVALID_CRITERIA, "Path not supplied");
}
return this.upload(Service.valueOf(serviceString), name, identifier, path, null, null, false);
return this.upload(Service.valueOf(serviceString), name, identifier, path, null, null, false,
title, description, tags, category);
}
@@ -737,6 +846,10 @@ public class ArbitraryResource {
public String postBase64EncodedData(@HeaderParam(Security.API_KEY_HEADER) String apiKey,
@PathParam("service") String serviceString,
@PathParam("name") String name,
@QueryParam("title") String title,
@QueryParam("description") String description,
@QueryParam("tags") List<String> tags,
@QueryParam("category") Category category,
String base64) {
Security.checkApiCallAllowed(request);
@@ -744,7 +857,8 @@ public class ArbitraryResource {
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.INVALID_CRITERIA, "Data not supplied");
}
return this.upload(Service.valueOf(serviceString), name, null, null, null, base64, false);
return this.upload(Service.valueOf(serviceString), name, null, null, null, base64, false,
title, description, tags, category);
}
@POST
@@ -775,6 +889,10 @@ public class ArbitraryResource {
@PathParam("service") String serviceString,
@PathParam("name") String name,
@PathParam("identifier") String identifier,
@QueryParam("title") String title,
@QueryParam("description") String description,
@QueryParam("tags") List<String> tags,
@QueryParam("category") Category category,
String base64) {
Security.checkApiCallAllowed(request);
@@ -782,7 +900,8 @@ public class ArbitraryResource {
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.INVALID_CRITERIA, "Data not supplied");
}
return this.upload(Service.valueOf(serviceString), name, identifier, null, null, base64, false);
return this.upload(Service.valueOf(serviceString), name, identifier, null, null, base64, false,
title, description, tags, category);
}
@@ -815,6 +934,10 @@ public class ArbitraryResource {
public String postZippedData(@HeaderParam(Security.API_KEY_HEADER) String apiKey,
@PathParam("service") String serviceString,
@PathParam("name") String name,
@QueryParam("title") String title,
@QueryParam("description") String description,
@QueryParam("tags") List<String> tags,
@QueryParam("category") Category category,
String base64Zip) {
Security.checkApiCallAllowed(request);
@@ -822,7 +945,8 @@ public class ArbitraryResource {
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.INVALID_CRITERIA, "Data not supplied");
}
return this.upload(Service.valueOf(serviceString), name, null, null, null, base64Zip, true);
return this.upload(Service.valueOf(serviceString), name, null, null, null, base64Zip, true,
title, description, tags, category);
}
@POST
@@ -853,6 +977,10 @@ public class ArbitraryResource {
@PathParam("service") String serviceString,
@PathParam("name") String name,
@PathParam("identifier") String identifier,
@QueryParam("title") String title,
@QueryParam("description") String description,
@QueryParam("tags") List<String> tags,
@QueryParam("category") Category category,
String base64Zip) {
Security.checkApiCallAllowed(request);
@@ -860,7 +988,8 @@ public class ArbitraryResource {
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.INVALID_CRITERIA, "Data not supplied");
}
return this.upload(Service.valueOf(serviceString), name, identifier, null, null, base64Zip, true);
return this.upload(Service.valueOf(serviceString), name, identifier, null, null, base64Zip, true,
title, description, tags, category);
}
@@ -896,6 +1025,10 @@ public class ArbitraryResource {
public String postString(@HeaderParam(Security.API_KEY_HEADER) String apiKey,
@PathParam("service") String serviceString,
@PathParam("name") String name,
@QueryParam("title") String title,
@QueryParam("description") String description,
@QueryParam("tags") List<String> tags,
@QueryParam("category") Category category,
String string) {
Security.checkApiCallAllowed(request);
@@ -903,7 +1036,8 @@ public class ArbitraryResource {
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.INVALID_CRITERIA, "Data string not supplied");
}
return this.upload(Service.valueOf(serviceString), name, null, null, string, null, false);
return this.upload(Service.valueOf(serviceString), name, null, null, string, null, false,
title, description, tags, category);
}
@POST
@@ -936,6 +1070,10 @@ public class ArbitraryResource {
@PathParam("service") String serviceString,
@PathParam("name") String name,
@PathParam("identifier") String identifier,
@QueryParam("title") String title,
@QueryParam("description") String description,
@QueryParam("tags") List<String> tags,
@QueryParam("category") Category category,
String string) {
Security.checkApiCallAllowed(request);
@@ -943,13 +1081,16 @@ public class ArbitraryResource {
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.INVALID_CRITERIA, "Data string not supplied");
}
return this.upload(Service.valueOf(serviceString), name, identifier, null, string, null, false);
return this.upload(Service.valueOf(serviceString), name, identifier, null, string, null, false,
title, description, tags, category);
}
// Shared methods
private String upload(Service service, String name, String identifier, String path, String string, String base64, boolean zipped) {
private String upload(Service service, String name, String identifier,
String path, String string, String base64, boolean zipped,
String title, String description, List<String> tags, Category category) {
// Fetch public key from registered name
try (final Repository repository = RepositoryManager.getRepository()) {
NameData nameData = repository.getNameRepository().fromName(name);
@@ -1013,7 +1154,8 @@ public class ArbitraryResource {
try {
ArbitraryDataTransactionBuilder transactionBuilder = new ArbitraryDataTransactionBuilder(
repository, publicKey58, Paths.get(path), name, null, service, identifier
repository, publicKey58, Paths.get(path), name, null, service, identifier,
title, description, tags, category
);
transactionBuilder.build();
@@ -1124,12 +1266,34 @@ public class ArbitraryResource {
private List<ArbitraryResourceInfo> addStatusToResources(List<ArbitraryResourceInfo> resources) {
// Determine and add the status of each resource
List<ArbitraryResourceInfo> updatedResources = new ArrayList<>();
for (ArbitraryResourceInfo resourceInfo : resources) {
try {
ArbitraryDataResource resource = new ArbitraryDataResource(resourceInfo.name, ResourceIdType.NAME,
resourceInfo.service, resourceInfo.identifier);
ArbitraryResourceStatus status = resource.getStatus(true);
if (status != null) {
resourceInfo.status = status;
}
updatedResources.add(resourceInfo);
} catch (Exception e) {
// Catch and log all exceptions, since some systems are experiencing 500 errors when including statuses
LOGGER.info("Caught exception when adding status to resource %s: %s", resourceInfo, e.toString());
}
}
return updatedResources;
}
private List<ArbitraryResourceInfo> addMetadataToResources(List<ArbitraryResourceInfo> resources) {
// Add metadata fields to each resource if they exist
List<ArbitraryResourceInfo> updatedResources = new ArrayList<>();
for (ArbitraryResourceInfo resourceInfo : resources) {
ArbitraryDataResource resource = new ArbitraryDataResource(resourceInfo.name, ResourceIdType.NAME,
resourceInfo.service, resourceInfo.identifier);
ArbitraryDataTransactionMetadata transactionMetadata = resource.getLatestTransactionMetadata();
ArbitraryResourceMetadata resourceMetadata = ArbitraryResourceMetadata.fromTransactionMetadata(transactionMetadata);
if (resourceMetadata != null) {
resourceInfo.metadata = resourceMetadata;
}
updatedResources.add(resourceInfo);
}
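
For reference, the new metadata fields are optional query parameters, so existing publish calls keep working unchanged. A minimal client-side sketch in Java 11 follows; the node URL/port, endpoint path and API-key header name are assumptions for illustration and are not taken from this diff, and 'tags' may be repeated because it binds to a List<String>.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PublishWithMetadata {
    public static void main(String[] args) throws Exception {
        String node = "http://localhost:12391";   // assumed local node API address
        String apiKey = "<local api key>";        // assumed to be required by this endpoint
        String body = Base64.getEncoder()
                .encodeToString("hello QDN".getBytes(StandardCharsets.UTF_8));

        // Optional metadata travels as query parameters; 'tags' may be repeated
        String query = "title=" + URLEncoder.encode("My site", StandardCharsets.UTF_8)
                + "&description=" + URLEncoder.encode("Short description", StandardCharsets.UTF_8)
                + "&tags=qortal&tags=demo"
                + "&category=TECHNOLOGY";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(node + "/arbitrary/WEBSITE/myname/base64?" + query)) // path assumed
                .header("X-API-KEY", apiKey)      // header name assumed (Security.API_KEY_HEADER)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}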

View File

@@ -0,0 +1,177 @@
package org.qortal.api.resource;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.media.ArraySchema;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.parameters.RequestBody;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.security.SecurityRequirement;
import io.swagger.v3.oas.annotations.tags.Tag;
import java.util.List;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import org.bitcoinj.core.Transaction;
import org.qortal.api.ApiError;
import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.crosschain.RavencoinSendRequest;
import org.qortal.crosschain.Ravencoin;
import org.qortal.crosschain.ForeignBlockchainException;
import org.qortal.crosschain.SimpleTransaction;
@Path("/crosschain/rvn")
@Tag(name = "Cross-Chain (Ravencoin)")
public class CrossChainRavencoinResource {
@Context
HttpServletRequest request;
@POST
@Path("/walletbalance")
@Operation(
summary = "Returns RVN balance for hierarchical, deterministic BIP32 wallet",
description = "Supply BIP32 'm' private/public key in base58, starting with 'xprv'/'xpub' for mainnet, 'tprv'/'tpub' for testnet",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.TEXT_PLAIN,
schema = @Schema(
type = "string",
description = "BIP32 'm' private/public key in base58",
example = "tpubD6NzVbkrYhZ4XTPc4btCZ6SMgn8CxmWkj6VBVZ1tfcJfMq4UwAjZbG8U74gGSypL9XBYk2R2BLbDBe8pcEyBKM1edsGQEPKXNbEskZozeZc"
)
)
),
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "string", description = "balance (satoshis)"))
)
}
)
@ApiErrors({ApiError.INVALID_PRIVATE_KEY, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE})
@SecurityRequirement(name = "apiKey")
public String getRavencoinWalletBalance(@HeaderParam(Security.API_KEY_HEADER) String apiKey, String key58) {
Security.checkApiCallAllowed(request);
Ravencoin ravencoin = Ravencoin.getInstance();
if (!ravencoin.isValidDeterministicKey(key58))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PRIVATE_KEY);
try {
Long balance = ravencoin.getWalletBalanceFromTransactions(key58);
if (balance == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE);
return balance.toString();
} catch (ForeignBlockchainException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE);
}
}
@POST
@Path("/wallettransactions")
@Operation(
summary = "Returns transactions for hierarchical, deterministic BIP32 wallet",
description = "Supply BIP32 'm' private/public key in base58, starting with 'xprv'/'xpub' for mainnet, 'tprv'/'tpub' for testnet",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.TEXT_PLAIN,
schema = @Schema(
type = "string",
description = "BIP32 'm' private/public key in base58",
example = "tpubD6NzVbkrYhZ4XTPc4btCZ6SMgn8CxmWkj6VBVZ1tfcJfMq4UwAjZbG8U74gGSypL9XBYk2R2BLbDBe8pcEyBKM1edsGQEPKXNbEskZozeZc"
)
)
),
responses = {
@ApiResponse(
content = @Content(array = @ArraySchema( schema = @Schema( implementation = SimpleTransaction.class ) ) )
)
}
)
@ApiErrors({ApiError.INVALID_PRIVATE_KEY, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE})
@SecurityRequirement(name = "apiKey")
public List<SimpleTransaction> getRavencoinWalletTransactions(@HeaderParam(Security.API_KEY_HEADER) String apiKey, String key58) {
Security.checkApiCallAllowed(request);
Ravencoin ravencoin = Ravencoin.getInstance();
if (!ravencoin.isValidDeterministicKey(key58))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PRIVATE_KEY);
try {
return ravencoin.getWalletTransactions(key58);
} catch (ForeignBlockchainException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE);
}
}
@POST
@Path("/send")
@Operation(
summary = "Sends RVN from hierarchical, deterministic BIP32 wallet to specific address",
description = "Currently only supports 'legacy' P2PKH Ravencoin addresses. Supply BIP32 'm' private key in base58, starting with 'xprv' for mainnet, 'tprv' for testnet",
requestBody = @RequestBody(
required = true,
content = @Content(
mediaType = MediaType.APPLICATION_JSON,
schema = @Schema(
implementation = RavencoinSendRequest.class
)
)
),
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "string", description = "transaction hash"))
)
}
)
@ApiErrors({ApiError.INVALID_PRIVATE_KEY, ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.FOREIGN_BLOCKCHAIN_BALANCE_ISSUE, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE})
@SecurityRequirement(name = "apiKey")
public String sendBitcoin(@HeaderParam(Security.API_KEY_HEADER) String apiKey, RavencoinSendRequest ravencoinSendRequest) {
Security.checkApiCallAllowed(request);
if (ravencoinSendRequest.ravencoinAmount <= 0)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
if (ravencoinSendRequest.feePerByte != null && ravencoinSendRequest.feePerByte <= 0)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
Ravencoin ravencoin = Ravencoin.getInstance();
if (!ravencoin.isValidAddress(ravencoinSendRequest.receivingAddress))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
if (!ravencoin.isValidDeterministicKey(ravencoinSendRequest.xprv58))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PRIVATE_KEY);
Transaction spendTransaction = ravencoin.buildSpend(ravencoinSendRequest.xprv58,
ravencoinSendRequest.receivingAddress,
ravencoinSendRequest.ravencoinAmount,
ravencoinSendRequest.feePerByte);
if (spendTransaction == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_BALANCE_ISSUE);
try {
ravencoin.broadcastTransaction(spendTransaction);
} catch (ForeignBlockchainException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.FOREIGN_BLOCKCHAIN_NETWORK_ISSUE);
}
return spendTransaction.getTxId().toString();
}
}
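
The /send handler above checks the amount, the optional fee-per-byte, the receiving address and the private key, in that order, before building and broadcasting the spend. A sketch of a matching request body, using the field names referenced in those checks (amounts presumably in satoshis, as with the balance endpoint; the JSON library is incidental):

import org.json.JSONObject;

public class RavencoinSendRequestExample {
    public static void main(String[] args) {
        JSONObject sendRequest = new JSONObject();
        sendRequest.put("xprv58", "<BIP32 'xprv...' private key in base58>"); // mainnet key expected
        sendRequest.put("receivingAddress", "<legacy P2PKH RVN address>");
        sendRequest.put("ravencoinAmount", 150000000L); // presumably satoshis (assumed unit)
        sendRequest.put("feePerByte", 1125L);           // optional; must be > 0 when supplied

        // POST this body as application/json to /crosschain/rvn/send (API key required)
        System.out.println(sendRequest.toString(2));
    }
}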

View File

@@ -25,6 +25,7 @@ import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.CrossChainCancelRequest;
import org.qortal.api.model.CrossChainTradeSummary;
import org.qortal.controller.tradebot.TradeBot;
import org.qortal.crosschain.SupportedBlockchain;
import org.qortal.crosschain.ACCT;
import org.qortal.crosschain.AcctMode;
@@ -120,6 +121,8 @@ public class CrossChainResource {
crossChainTrades = crossChainTrades.subList(0, upperLimit);
}
crossChainTrades.stream().forEach(CrossChainResource::decorateTradeDataWithPresence);
return crossChainTrades;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
@@ -151,7 +154,11 @@ public class CrossChainResource {
if (acct == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
return acct.populateTradeData(repository, atData);
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atData);
decorateTradeDataWithPresence(crossChainTradeData);
return crossChainTradeData;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -486,4 +493,7 @@ public class CrossChainResource {
}
}
private static void decorateTradeDataWithPresence(CrossChainTradeData crossChainTradeData) {
TradeBot.getInstance().decorateTradeDataWithPresence(crossChainTradeData);
}
}

View File

@@ -98,7 +98,15 @@ public class GroupsResource {
ref = "reverse"
) @QueryParam("reverse") Boolean reverse) {
try (final Repository repository = RepositoryManager.getRepository()) {
return repository.getGroupRepository().getAllGroups(limit, offset, reverse);
List<GroupData> allGroupData = repository.getGroupRepository().getAllGroups(limit, offset, reverse);
allGroupData.forEach(groupData -> {
try {
groupData.memberCount = repository.getGroupRepository().countGroupMembers(groupData.getGroupId());
} catch (DataException e) {
// Exclude memberCount for this group
}
});
return allGroupData;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -150,7 +158,15 @@ public class GroupsResource {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
try (final Repository repository = RepositoryManager.getRepository()) {
return repository.getGroupRepository().getGroupsWithMember(member);
List<GroupData> allGroupData = repository.getGroupRepository().getGroupsWithMember(member);
allGroupData.forEach(groupData -> {
try {
groupData.memberCount = repository.getGroupRepository().countGroupMembers(groupData.getGroupId());
} catch (DataException e) {
// Exclude memberCount for this group
}
});
return allGroupData;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -177,6 +193,7 @@ public class GroupsResource {
if (groupData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.GROUP_UNKNOWN);
groupData.memberCount = repository.getGroupRepository().countGroupMembers(groupId);
return groupData;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
@@ -922,4 +939,4 @@ public class GroupsResource {
}
}
}
}

View File

@@ -61,7 +61,7 @@ public class PeersResource {
}
)
public List<ConnectedPeer> getPeers() {
return Network.getInstance().getConnectedPeers().stream().map(ConnectedPeer::new).collect(Collectors.toList());
return Network.getInstance().getImmutableConnectedPeers().stream().map(ConnectedPeer::new).collect(Collectors.toList());
}
@GET
@@ -304,7 +304,7 @@ public class PeersResource {
PeerAddress peerAddress = PeerAddress.fromString(targetPeerAddress);
InetSocketAddress resolvedAddress = peerAddress.toSocketAddress();
List<Peer> peers = Network.getInstance().getHandshakedPeers();
List<Peer> peers = Network.getInstance().getImmutableHandshakedPeers();
Peer targetPeer = peers.stream().filter(peer -> peer.getResolvedAddress().equals(resolvedAddress)).findFirst().orElse(null);
if (targetPeer == null)
@@ -352,7 +352,7 @@ public class PeersResource {
public PeersSummary peersSummary() {
PeersSummary peersSummary = new PeersSummary();
List<Peer> connectedPeers = Network.getInstance().getConnectedPeers().stream().collect(Collectors.toList());
List<Peer> connectedPeers = Network.getInstance().getImmutableConnectedPeers().stream().collect(Collectors.toList());
for (Peer peer : connectedPeers) {
if (!peer.isOutbound()) {
peersSummary.inboundConnections++;

View File

@@ -74,7 +74,9 @@ public class RenderResource {
Method method = Method.PUT;
Compression compression = Compression.ZIP;
ArbitraryDataWriter arbitraryDataWriter = new ArbitraryDataWriter(Paths.get(directoryPath), null, Service.WEBSITE, null, method, compression);
ArbitraryDataWriter arbitraryDataWriter = new ArbitraryDataWriter(Paths.get(directoryPath),
null, Service.WEBSITE, null, method, compression,
null, null, null, null);
try {
arbitraryDataWriter.save();
} catch (IOException | DataException | InterruptedException | MissingDataException e) {
@@ -136,34 +138,38 @@ public class RenderResource {
@GET
@Path("/signature/{signature}")
@SecurityRequirement(name = "apiKey")
public HttpServletResponse getIndexBySignature(@PathParam("signature") String signature) {
public HttpServletResponse getIndexBySignature(@PathParam("signature") String signature,
@QueryParam("theme") String theme) {
Security.requirePriorAuthorization(request, signature, Service.WEBSITE, null);
return this.get(signature, ResourceIdType.SIGNATURE, null, "/", null, "/render/signature", true, true);
return this.get(signature, ResourceIdType.SIGNATURE, null, "/", null, "/render/signature", true, true, theme);
}
@GET
@Path("/signature/{signature}/{path:.*}")
@SecurityRequirement(name = "apiKey")
public HttpServletResponse getPathBySignature(@PathParam("signature") String signature, @PathParam("path") String inPath) {
public HttpServletResponse getPathBySignature(@PathParam("signature") String signature, @PathParam("path") String inPath,
@QueryParam("theme") String theme) {
Security.requirePriorAuthorization(request, signature, Service.WEBSITE, null);
return this.get(signature, ResourceIdType.SIGNATURE, null, inPath,null, "/render/signature", true, true);
return this.get(signature, ResourceIdType.SIGNATURE, null, inPath,null, "/render/signature", true, true, theme);
}
@GET
@Path("/hash/{hash}")
@SecurityRequirement(name = "apiKey")
public HttpServletResponse getIndexByHash(@PathParam("hash") String hash58, @QueryParam("secret") String secret58) {
public HttpServletResponse getIndexByHash(@PathParam("hash") String hash58, @QueryParam("secret") String secret58,
@QueryParam("theme") String theme) {
Security.requirePriorAuthorization(request, hash58, Service.WEBSITE, null);
return this.get(hash58, ResourceIdType.FILE_HASH, Service.WEBSITE, "/", secret58, "/render/hash", true, false);
return this.get(hash58, ResourceIdType.FILE_HASH, Service.WEBSITE, "/", secret58, "/render/hash", true, false, theme);
}
@GET
@Path("/hash/{hash}/{path:.*}")
@SecurityRequirement(name = "apiKey")
public HttpServletResponse getPathByHash(@PathParam("hash") String hash58, @PathParam("path") String inPath,
@QueryParam("secret") String secret58) {
@QueryParam("secret") String secret58,
@QueryParam("theme") String theme) {
Security.requirePriorAuthorization(request, hash58, Service.WEBSITE, null);
return this.get(hash58, ResourceIdType.FILE_HASH, Service.WEBSITE, inPath, secret58, "/render/hash", true, false);
return this.get(hash58, ResourceIdType.FILE_HASH, Service.WEBSITE, inPath, secret58, "/render/hash", true, false, theme);
}
@GET
@@ -171,29 +177,35 @@ public class RenderResource {
@SecurityRequirement(name = "apiKey")
public HttpServletResponse getPathByName(@PathParam("service") Service service,
@PathParam("name") String name,
@PathParam("path") String inPath) {
@PathParam("path") String inPath,
@QueryParam("theme") String theme) {
Security.requirePriorAuthorization(request, name, service, null);
String prefix = String.format("/render/%s", service);
return this.get(name, ResourceIdType.NAME, service, inPath, null, prefix, true, true);
return this.get(name, ResourceIdType.NAME, service, inPath, null, prefix, true, true, theme);
}
@GET
@Path("{service}/{name}")
@SecurityRequirement(name = "apiKey")
public HttpServletResponse getIndexByName(@PathParam("service") Service service,
@PathParam("name") String name) {
@PathParam("name") String name,
@QueryParam("theme") String theme) {
Security.requirePriorAuthorization(request, name, service, null);
String prefix = String.format("/render/%s", service);
return this.get(name, ResourceIdType.NAME, service, "/", null, prefix, true, true);
return this.get(name, ResourceIdType.NAME, service, "/", null, prefix, true, true, theme);
}
private HttpServletResponse get(String resourceId, ResourceIdType resourceIdType, Service service, String inPath,
String secret58, String prefix, boolean usePrefix, boolean async) {
String secret58, String prefix, boolean usePrefix, boolean async, String theme) {
ArbitraryDataRenderer renderer = new ArbitraryDataRenderer(resourceId, resourceIdType, service, inPath,
secret58, prefix, usePrefix, async, request, response, context);
if (theme != null) {
renderer.setTheme(theme);
}
return renderer.render();
}

View File

@@ -0,0 +1,137 @@
package org.qortal.api.websocket;
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.*;
import org.eclipse.jetty.websocket.servlet.WebSocketServletFactory;
import org.qortal.controller.Controller;
import org.qortal.controller.tradebot.TradeBot;
import org.qortal.data.network.TradePresenceData;
import org.qortal.event.Event;
import org.qortal.event.EventBus;
import org.qortal.event.Listener;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import java.io.IOException;
import java.io.StringWriter;
import java.util.*;
@WebSocket
@SuppressWarnings("serial")
public class TradePresenceWebSocket extends ApiWebSocket implements Listener {
/** Map key is public key in base58, map value is trade presence */
private static final Map<String, TradePresenceData> currentEntries = Collections.synchronizedMap(new HashMap<>());
@Override
public void configure(WebSocketServletFactory factory) {
factory.register(TradePresenceWebSocket.class);
populateCurrentInfo();
EventBus.INSTANCE.addListener(this::listen);
}
@Override
public void listen(Event event) {
// XXX - Suggest we change this to something like Synchronizer.NewChainTipEvent?
// We use NewBlockEvent as a proxy for 1-minute timer
if (!(event instanceof TradeBot.TradePresenceEvent) && !(event instanceof Controller.NewBlockEvent))
return;
removeOldEntries();
if (event instanceof Controller.NewBlockEvent)
// We only wanted a chance to cull old entries
return;
TradePresenceData tradePresence = ((TradeBot.TradePresenceEvent) event).getTradePresenceData();
boolean somethingChanged = mergePresence(tradePresence);
if (!somethingChanged)
// nothing changed
return;
List<TradePresenceData> tradePresences = Collections.singletonList(tradePresence);
// Notify sessions
for (Session session : getSessions()) {
sendTradePresences(session, tradePresences);
}
}
@OnWebSocketConnect
@Override
public void onWebSocketConnect(Session session) {
Map<String, List<String>> queryParams = session.getUpgradeRequest().getParameterMap();
List<TradePresenceData> tradePresences;
synchronized (currentEntries) {
tradePresences = List.copyOf(currentEntries.values());
}
if (!sendTradePresences(session, tradePresences)) {
session.close(4002, "websocket issue");
return;
}
super.onWebSocketConnect(session);
}
@OnWebSocketClose
@Override
public void onWebSocketClose(Session session, int statusCode, String reason) {
// clean up
super.onWebSocketClose(session, statusCode, reason);
}
@OnWebSocketError
public void onWebSocketError(Session session, Throwable throwable) {
/* ignored */
}
@OnWebSocketMessage
public void onWebSocketMessage(Session session, String message) {
/* ignored */
}
private boolean sendTradePresences(Session session, List<TradePresenceData> tradePresences) {
try {
StringWriter stringWriter = new StringWriter();
marshall(stringWriter, tradePresences);
String output = stringWriter.toString();
session.getRemote().sendStringByFuture(output);
} catch (IOException e) {
// No output this time?
return false;
}
return true;
}
private static void populateCurrentInfo() {
// We want ALL trade presences
TradeBot.getInstance().getAllTradePresences().stream()
.forEach(TradePresenceWebSocket::mergePresence);
}
/** Merge trade presence into cache of current entries, returns true if cache was updated. */
private static boolean mergePresence(TradePresenceData tradePresence) {
// Put/replace for this public key, making sure we keep the newest timestamp
String pubKey58 = Base58.encode(tradePresence.getPublicKey());
TradePresenceData newEntry = currentEntries.compute(pubKey58, (k, v) -> v == null || v.getTimestamp() < tradePresence.getTimestamp() ? tradePresence : v);
return newEntry == tradePresence;
}
private static void removeOldEntries() {
long now = NTP.getTime();
currentEntries.values().removeIf(v -> v.getTimestamp() < now);
}
}
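
mergePresence leans on Map.compute to keep whichever entry carries the newer timestamp, then detects whether the cache changed by comparing references with the returned value. The same pattern in isolation, with plain stand-in types rather than the project's classes:

import java.util.HashMap;
import java.util.Map;

public class NewestWinsMerge {
    static class Presence {
        final long timestamp;
        Presence(long timestamp) { this.timestamp = timestamp; }
    }

    static final Map<String, Presence> entries = new HashMap<>();

    /** Returns true only if the candidate replaced (or created) the cached entry. */
    static boolean merge(String key, Presence candidate) {
        Presence kept = entries.compute(key,
                (k, existing) -> existing == null || existing.timestamp < candidate.timestamp ? candidate : existing);
        return kept == candidate; // same reference => the cache was updated
    }

    public static void main(String[] args) {
        System.out.println(merge("pubKey58", new Presence(100))); // true  - first sighting
        System.out.println(merge("pubKey58", new Presence(90)));  // false - older entry is kept
        System.out.println(merge("pubKey58", new Presence(120))); // true  - newer entry replaces it
    }
}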

View File

@@ -53,7 +53,8 @@ public class ArbitraryDataFile {
private static final Logger LOGGER = LogManager.getLogger(ArbitraryDataFile.class);
public static final long MAX_FILE_SIZE = 500 * 1024 * 1024; // 500MiB
public static final int CHUNK_SIZE = 1 * 1024 * 1024; // 1MiB
protected static final int MAX_CHUNK_SIZE = 1 * 1024 * 1024; // 1MiB
public static final int CHUNK_SIZE = 512 * 1024; // 0.5MiB
public static int SHORT_DIGEST_LENGTH = 8;
protected Path filePath;
@@ -72,7 +73,6 @@ public class ArbitraryDataFile {
}
public ArbitraryDataFile(String hash58, byte[] signature) throws DataException {
this.createDataDirectory();
this.filePath = ArbitraryDataFile.getOutputFilePath(hash58, signature, false);
this.chunks = new ArrayList<>();
this.hash58 = hash58;
@@ -96,7 +96,7 @@ public class ArbitraryDataFile {
this.filePath = outputFilePath;
// Verify hash
if (!this.hash58.equals(this.digest58())) {
LOGGER.error("Hash {} does not match file digest {}", this.hash58, this.digest58());
LOGGER.error("Hash {} does not match file digest {} for signature: {}", this.hash58, this.digest58(), Base58.encode(signature));
this.delete();
throw new DataException("Data file digest validation failed");
}
@@ -110,6 +110,9 @@ public class ArbitraryDataFile {
}
public static ArbitraryDataFile fromHash(byte[] hash, byte[] signature) throws DataException {
if (hash == null) {
return null;
}
return ArbitraryDataFile.fromHash58(Base58.encode(hash), signature);
}
@@ -146,19 +149,6 @@ public class ArbitraryDataFile {
return ArbitraryDataFile.fromPath(Paths.get(file.getPath()), signature);
}
private boolean createDataDirectory() {
// Create the data directory if it doesn't exist
String dataPath = Settings.getInstance().getDataPath();
Path dataDirectory = Paths.get(dataPath);
try {
Files.createDirectories(dataDirectory);
} catch (IOException e) {
LOGGER.error("Unable to create data directory");
return false;
}
return true;
}
private Path copyToDataDirectory(Path sourcePath, byte[] signature) throws DataException {
if (this.hash58 == null || this.filePath == null) {
return null;
@@ -488,6 +478,14 @@ public class ArbitraryDataFile {
// Read the metadata
List<byte[]> chunks = metadata.getChunks();
// If the chunks array is empty, then this resource has no chunks,
// so we must return false to avoid confusing the caller.
if (chunks.isEmpty()) {
return false;
}
// Otherwise, we need to check each chunk individually
for (byte[] chunkHash : chunks) {
ArbitraryDataFileChunk chunk = ArbitraryDataFileChunk.fromHash(chunkHash, this.signature);
if (!chunk.exists()) {
@@ -786,6 +784,10 @@ public class ArbitraryDataFile {
this.loadMetadata();
}
public ArbitraryDataTransactionMetadata getMetadata() {
return this.metadata;
}
@Override
public String toString() {
return this.shortHash58();
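
Halving CHUNK_SIZE to 512 KiB (while keeping MAX_CHUNK_SIZE at 1 MiB as the validation ceiling, see the chunk validator below) roughly doubles the number of chunks per resource. A quick sketch of the arithmetic; how the splitter handles the final partial chunk is not shown in this diff:

public class ChunkArithmetic {
    static final int CHUNK_SIZE = 512 * 1024;              // 0.5 MiB, as above
    static final int MAX_CHUNK_SIZE = 1024 * 1024;         // 1 MiB validation ceiling, as above
    static final long MAX_FILE_SIZE = 500L * 1024 * 1024;  // 500 MiB, as above

    public static void main(String[] args) {
        long fileSize = 10L * 1024 * 1024;                          // example: a 10 MiB resource
        long chunkCount = (fileSize + CHUNK_SIZE - 1) / CHUNK_SIZE; // ceiling division
        System.out.println(chunkCount);                             // 20 (was 10 at the old 1 MiB size)
        System.out.println(MAX_FILE_SIZE / CHUNK_SIZE);             // up to 1000 full-size chunks per resource
        System.out.println(CHUNK_SIZE <= MAX_CHUNK_SIZE);           // chunks still pass the size check
    }
}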

View File

@@ -40,8 +40,8 @@ public class ArbitraryDataFileChunk extends ArbitraryDataFile {
try {
// Validate the file size (chunks have stricter limits)
long fileSize = Files.size(this.filePath);
if (fileSize > CHUNK_SIZE) {
LOGGER.error(String.format("DataFileChunk is too large: %d bytes (max chunk size: %d bytes)", fileSize, CHUNK_SIZE));
if (fileSize > MAX_CHUNK_SIZE) {
LOGGER.error(String.format("DataFileChunk is too large: %d bytes (max chunk size: %d bytes)", fileSize, MAX_CHUNK_SIZE));
return ValidationResult.FILE_TOO_LARGE;
}

View File

@@ -34,6 +34,7 @@ public class ArbitraryDataRenderer {
private final String resourceId;
private final ResourceIdType resourceIdType;
private final Service service;
private String theme = "light";
private String inPath;
private final String secret58;
private final String prefix;
@@ -77,7 +78,7 @@ public class ArbitraryDataRenderer {
// If async is requested, show a loading screen whilst build is in progress
if (async) {
arbitraryDataReader.loadAsynchronously(false, 10);
return this.getLoadingResponse(service, resourceId);
return this.getLoadingResponse(service, resourceId, theme);
}
// Otherwise, loop until we have data
@@ -119,7 +120,7 @@ public class ArbitraryDataRenderer {
byte[] data = Files.readAllBytes(Paths.get(filePath)); // TODO: limit file size that can be read into memory
HTMLParser htmlParser = new HTMLParser(resourceId, inPath, prefix, usePrefix, data);
htmlParser.addAdditionalHeaderTags();
response.addHeader("Content-Security-Policy", "default-src 'self' 'unsafe-inline'; media-src 'self' blob:");
response.addHeader("Content-Security-Policy", "default-src 'self' 'unsafe-inline' 'unsafe-eval'; media-src 'self' blob:");
response.setContentType(context.getMimeType(filename));
response.setContentLength(htmlParser.getData().length);
response.getOutputStream().write(htmlParser.getData());
@@ -128,7 +129,7 @@ public class ArbitraryDataRenderer {
// Regular file - can be streamed directly
File file = new File(filePath);
FileInputStream inputStream = new FileInputStream(file);
response.addHeader("Content-Security-Policy", "default-src 'self' 'unsafe-inline'; media-src 'self' blob:");
response.addHeader("Content-Security-Policy", "default-src 'self'");
response.setContentType(context.getMimeType(filename));
int bytesRead, length = 0;
byte[] buffer = new byte[10240];
@@ -171,7 +172,7 @@ public class ArbitraryDataRenderer {
return userPath;
}
private HttpServletResponse getLoadingResponse(Service service, String name) {
private HttpServletResponse getLoadingResponse(Service service, String name, String theme) {
String responseString = "";
URL url = Resources.getResource("loading/index.html");
try {
@@ -180,6 +181,7 @@ public class ArbitraryDataRenderer {
// Replace vars
responseString = responseString.replace("%%SERVICE%%", service.toString());
responseString = responseString.replace("%%NAME%%", name);
responseString = responseString.replace("%%THEME%%", theme);
} catch (IOException e) {
LOGGER.info("Unable to show loading screen: {}", e.getMessage());
@@ -210,4 +212,8 @@ public class ArbitraryDataRenderer {
return indexFiles;
}
public void setTheme(String theme) {
this.theme = theme;
}
}

View File

@@ -3,6 +3,7 @@ package org.qortal.arbitrary;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.arbitrary.ArbitraryDataFile.ResourceIdType;
import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
import org.qortal.arbitrary.misc.Service;
import org.qortal.controller.arbitrary.ArbitraryDataBuildManager;
import org.qortal.controller.arbitrary.ArbitraryDataManager;
@@ -37,6 +38,7 @@ public class ArbitraryDataResource {
private List<ArbitraryTransactionData> transactions;
private ArbitraryTransactionData latestPutTransaction;
private ArbitraryTransactionData latestTransaction;
private int layerCount;
private Integer localChunkCount = null;
private Integer totalChunkCount = null;
@@ -105,6 +107,33 @@ public class ArbitraryDataResource {
return new ArbitraryResourceStatus(Status.DOWNLOADED, this.localChunkCount, this.totalChunkCount);
}
public ArbitraryDataTransactionMetadata getLatestTransactionMetadata() {
this.fetchLatestTransaction();
if (latestTransaction != null) {
byte[] signature = latestTransaction.getSignature();
byte[] metadataHash = latestTransaction.getMetadataHash();
if (metadataHash == null) {
// This resource doesn't have metadata
return null;
}
try {
ArbitraryDataFile metadataFile = ArbitraryDataFile.fromHash(metadataHash, signature);
if (metadataFile.exists()) {
ArbitraryDataTransactionMetadata transactionMetadata = new ArbitraryDataTransactionMetadata(metadataFile.getFilePath());
transactionMetadata.read();
return transactionMetadata;
}
} catch (DataException | IOException e) {
// Do nothing
}
}
return null;
}
public boolean delete() {
try {
this.fetchTransactions();
@@ -306,6 +335,32 @@ public class ArbitraryDataResource {
this.transactions = transactionDataList;
this.layerCount = transactionDataList.size();
} catch (DataException e) {
LOGGER.info(String.format("Repository error when fetching transactions for resource %s: %s", this, e.getMessage()));
}
}
private void fetchLatestTransaction() {
if (this.latestTransaction != null) {
// Already fetched
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
// Get the most recent transaction
ArbitraryTransactionData latestTransaction = repository.getArbitraryRepository()
.getLatestTransaction(this.resourceId, this.service, null, this.identifier);
if (latestTransaction == null) {
String message = String.format("Couldn't find transaction for name %s, service %s and identifier %s",
this.resourceId, this.service, this.identifierString());
throw new DataException(message);
}
this.latestTransaction = latestTransaction;
} catch (DataException e) {
LOGGER.info(String.format("Repository error when fetching latest transaction for resource %s: %s", this, e.getMessage()));
}
}
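
For callers, the new accessor means a resource's latest metadata can be read directly, without going through the API layer. A minimal usage sketch; it assumes it runs inside a node with an initialised repository, and the name/service values are placeholders:

import org.qortal.arbitrary.ArbitraryDataFile.ResourceIdType;
import org.qortal.arbitrary.ArbitraryDataResource;
import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
import org.qortal.arbitrary.misc.Service;

public class LatestMetadataExample {
    // Must run inside a node where the repository is initialised; "myname"/WEBSITE are placeholders
    static void printLatestMetadata() {
        ArbitraryDataResource resource = new ArbitraryDataResource("myname", ResourceIdType.NAME, Service.WEBSITE, null);
        ArbitraryDataTransactionMetadata metadata = resource.getLatestTransactionMetadata();
        if (metadata == null) {
            System.out.println("No metadata (or it could not be fetched)");
            return;
        }
        System.out.println(metadata.getTitle());       // unset fields simply remain null
        System.out.println(metadata.getDescription());
        System.out.println(metadata.getTags());
        System.out.println(metadata.getCategory());
    }
}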

View File

@@ -6,8 +6,9 @@ import org.qortal.arbitrary.exception.MissingDataException;
import org.qortal.arbitrary.ArbitraryDataFile.ResourceIdType;
import org.qortal.arbitrary.ArbitraryDataDiff.*;
import org.qortal.arbitrary.metadata.ArbitraryDataMetadataPatch;
import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
import org.qortal.arbitrary.misc.Category;
import org.qortal.arbitrary.misc.Service;
import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.PaymentData;
import org.qortal.data.transaction.ArbitraryTransactionData;
@@ -27,6 +28,7 @@ import java.io.IOException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.Random;
public class ArbitraryDataTransactionBuilder {
@@ -51,13 +53,20 @@ public class ArbitraryDataTransactionBuilder {
private final String identifier;
private final Repository repository;
// Metadata
private final String title;
private final String description;
private final List<String> tags;
private final Category category;
private int chunkSize = ArbitraryDataFile.CHUNK_SIZE;
private ArbitraryTransactionData arbitraryTransactionData;
private ArbitraryDataFile arbitraryDataFile;
public ArbitraryDataTransactionBuilder(Repository repository, String publicKey58, Path path, String name,
Method method, Service service, String identifier) {
Method method, Service service, String identifier,
String title, String description, List<String> tags, Category category) {
this.repository = repository;
this.publicKey58 = publicKey58;
this.path = path;
@@ -70,6 +79,12 @@ public class ArbitraryDataTransactionBuilder {
identifier = null;
}
this.identifier = identifier;
// Metadata (optional)
this.title = ArbitraryDataTransactionMetadata.limitTitle(title);
this.description = ArbitraryDataTransactionMetadata.limitDescription(description);
this.tags = ArbitraryDataTransactionMetadata.limitTags(tags);
this.category = category;
}
public void build() throws DataException {
@@ -108,6 +123,10 @@ public class ArbitraryDataTransactionBuilder {
return Method.PUT;
}
// Get existing metadata and see if it matches the new metadata
ArbitraryDataResource resource = new ArbitraryDataResource(this.name, ResourceIdType.NAME, this.service, this.identifier);
ArbitraryDataTransactionMetadata existingMetadata = resource.getLatestTransactionMetadata();
try {
// Check layer count
int layerCount = reader.getLayerCount();
@@ -118,7 +137,23 @@ public class ArbitraryDataTransactionBuilder {
// Check size of differences between this layer and previous layer
ArbitraryDataCreatePatch patch = new ArbitraryDataCreatePatch(reader.getFilePath(), this.path, reader.getLatestSignature());
patch.create();
try {
patch.create();
}
catch (DataException | IOException e) {
// Handle matching states separately, as it's best to block transactions with duplicate states
if (e.getMessage().equals("Current state matches previous state. Nothing to do.")) {
// Only throw an exception if the metadata is also identical, as well as the data
if (this.isMetadataEqual(existingMetadata)) {
throw new DataException(e.getMessage());
}
}
LOGGER.info("Caught exception when creating patch: {}", e.getMessage());
LOGGER.info("Unable to load existing resource - using PUT to overwrite it.");
return Method.PUT;
}
long diffSize = FilesystemUtils.getDirectorySize(patch.getFinalPath());
long existingStateSize = FilesystemUtils.getDirectorySize(reader.getFilePath());
double difference = (double) diffSize / (double) existingStateSize;
@@ -155,11 +190,8 @@ public class ArbitraryDataTransactionBuilder {
// State is appropriate for a PATCH transaction
return Method.PATCH;
}
catch (IOException | DataException e) {
// Handle matching states separately, as it's best to block transactions with duplicate states
if (e.getMessage().equals("Current state matches previous state. Nothing to do.")) {
throw new DataException(e.getMessage());
}
catch (IOException e) {
// IMPORTANT: Don't catch DataException here, as they must be passed to the caller
LOGGER.info("Caught exception: {}", e.getMessage());
LOGGER.info("Unable to load existing resource - using PUT to overwrite it.");
return Method.PUT;
@@ -200,7 +232,8 @@ public class ArbitraryDataTransactionBuilder {
// FUTURE? Use zip compression for directories, or no compression for single files
// Compression compression = (path.toFile().isDirectory()) ? Compression.ZIP : Compression.NONE;
ArbitraryDataWriter arbitraryDataWriter = new ArbitraryDataWriter(path, name, service, identifier, method, compression);
ArbitraryDataWriter arbitraryDataWriter = new ArbitraryDataWriter(path, name, service, identifier, method,
compression, title, description, tags, category);
try {
arbitraryDataWriter.setChunkSize(this.chunkSize);
arbitraryDataWriter.save();
@@ -253,6 +286,22 @@ public class ArbitraryDataTransactionBuilder {
}
private boolean isMetadataEqual(ArbitraryDataTransactionMetadata existingMetadata) {
if (!Objects.equals(existingMetadata.getTitle(), this.title)) {
return false;
}
if (!Objects.equals(existingMetadata.getDescription(), this.description)) {
return false;
}
if (!Objects.equals(existingMetadata.getCategory(), this.category)) {
return false;
}
if (!Objects.equals(existingMetadata.getTags(), this.tags)) {
return false;
}
return true;
}
public void computeNonce() throws DataException {
if (this.arbitraryTransactionData == null) {
throw new DataException("Arbitrary transaction data is required to compute nonce");

View File

@@ -5,6 +5,7 @@ import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.arbitrary.exception.MissingDataException;
import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
import org.qortal.arbitrary.misc.Category;
import org.qortal.arbitrary.misc.Service;
import org.qortal.crypto.Crypto;
import org.qortal.data.transaction.ArbitraryTransactionData.*;
@@ -28,6 +29,10 @@ import java.nio.file.Paths;
import java.security.InvalidAlgorithmParameterException;
import java.security.InvalidKeyException;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Objects;
public class ArbitraryDataWriter {
@@ -40,6 +45,12 @@ public class ArbitraryDataWriter {
private final Method method;
private final Compression compression;
// Metadata
private final String title;
private final String description;
private final List<String> tags;
private final Category category;
private int chunkSize = ArbitraryDataFile.CHUNK_SIZE;
private SecretKey aesKey;
@@ -50,7 +61,8 @@ public class ArbitraryDataWriter {
private Path compressedPath;
private Path encryptedPath;
public ArbitraryDataWriter(Path filePath, String name, Service service, String identifier, Method method, Compression compression) {
public ArbitraryDataWriter(Path filePath, String name, Service service, String identifier, Method method, Compression compression,
String title, String description, List<String> tags, Category category) {
this.filePath = filePath;
this.name = name;
this.service = service;
@@ -62,6 +74,12 @@ public class ArbitraryDataWriter {
identifier = null;
}
this.identifier = identifier;
// Metadata (optional)
this.title = ArbitraryDataTransactionMetadata.limitTitle(title);
this.description = ArbitraryDataTransactionMetadata.limitDescription(description);
this.tags = ArbitraryDataTransactionMetadata.limitTags(tags);
this.category = category;
}
public void save() throws IOException, DataException, InterruptedException, MissingDataException {
@@ -258,12 +276,16 @@ public class ArbitraryDataWriter {
private void createMetadataFile() throws IOException, DataException {
// If we have at least one chunk, we need to create an index file containing their hashes
if (this.arbitraryDataFile.chunkCount() > 1) {
if (this.needsMetadataFile()) {
// Create the JSON file
Path chunkFilePath = Paths.get(this.workingPath.toString(), "metadata.json");
ArbitraryDataTransactionMetadata chunkMetadata = new ArbitraryDataTransactionMetadata(chunkFilePath);
chunkMetadata.setChunks(this.arbitraryDataFile.chunkHashList());
chunkMetadata.write();
ArbitraryDataTransactionMetadata metadata = new ArbitraryDataTransactionMetadata(chunkFilePath);
metadata.setTitle(this.title);
metadata.setDescription(this.description);
metadata.setTags(this.tags);
metadata.setCategory(this.category);
metadata.setChunks(this.arbitraryDataFile.chunkHashList());
metadata.write();
// Create an ArbitraryDataFile from the JSON file (we don't have a signature yet)
ArbitraryDataFile metadataFile = ArbitraryDataFile.fromPath(chunkFilePath, null);
@@ -308,6 +330,20 @@ public class ArbitraryDataWriter {
throw new DataException(String.format("Missing chunk %s in metadata file", Base58.encode(chunk)));
}
}
// Check that the metadata is correct
if (!Objects.equals(metadata.getTitle(), this.title)) {
throw new DataException("Metadata mismatch: title");
}
if (!Objects.equals(metadata.getDescription(), this.description)) {
throw new DataException("Metadata mismatch: description");
}
if (!Objects.equals(metadata.getTags(), this.tags)) {
throw new DataException("Metadata mismatch: tags");
}
if (!Objects.equals(metadata.getCategory(), this.category)) {
throw new DataException("Metadata mismatch: category");
}
}
}
@@ -330,6 +366,16 @@ public class ArbitraryDataWriter {
}
}
private boolean needsMetadataFile() {
if (this.arbitraryDataFile.chunkCount() > 1) {
return true;
}
if (this.title != null || this.description != null || this.tags != null || this.category != null) {
return true;
}
return false;
}
public ArbitraryDataFile getArbitraryDataFile() {
return this.arbitraryDataFile;

View File

@@ -2,17 +2,28 @@ package org.qortal.arbitrary.metadata;
import org.json.JSONArray;
import org.json.JSONObject;
import org.qortal.arbitrary.misc.Category;
import org.qortal.repository.DataException;
import org.qortal.utils.Base58;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
public class ArbitraryDataTransactionMetadata extends ArbitraryDataMetadata {
private List<byte[]> chunks;
private String title;
private String description;
private List<String> tags;
private Category category;
private static int MAX_TITLE_LENGTH = 80;
private static int MAX_DESCRIPTION_LENGTH = 500;
private static int MAX_TAG_LENGTH = 20;
private static int MAX_TAGS_COUNT = 5;
public ArbitraryDataTransactionMetadata(Path filePath) {
super(filePath);
@@ -25,10 +36,37 @@ public class ArbitraryDataTransactionMetadata extends ArbitraryDataMetadata {
throw new DataException("Transaction metadata JSON string is null");
}
JSONObject metadata = new JSONObject(this.jsonString);
if (metadata.has("title")) {
this.title = metadata.getString("title");
}
if (metadata.has("description")) {
this.description = metadata.getString("description");
}
List<String> tagsList = new ArrayList<>();
if (metadata.has("tags")) {
JSONArray tags = metadata.getJSONArray("tags");
if (tags != null) {
for (int i=0; i<tags.length(); i++) {
String tag = tags.getString(i);
if (tag != null) {
tagsList.add(tag);
}
}
}
this.tags = tagsList;
}
if (metadata.has("category")) {
this.category = Category.uncategorizedValueOf(metadata.getString("category"));
}
List<byte[]> chunksList = new ArrayList<>();
JSONObject cache = new JSONObject(this.jsonString);
if (cache.has("chunks")) {
JSONArray chunks = cache.getJSONArray("chunks");
if (metadata.has("chunks")) {
JSONArray chunks = metadata.getJSONArray("chunks");
if (chunks != null) {
for (int i=0; i<chunks.length(); i++) {
String chunk = chunks.getString(i);
@@ -45,6 +83,26 @@ public class ArbitraryDataTransactionMetadata extends ArbitraryDataMetadata {
protected void buildJson() {
JSONObject outer = new JSONObject();
if (this.title != null && !this.title.isEmpty()) {
outer.put("title", this.title);
}
if (this.description != null && !this.description.isEmpty()) {
outer.put("description", this.description);
}
JSONArray tags = new JSONArray();
if (this.tags != null) {
for (String tag : this.tags) {
tags.put(tag);
}
outer.put("tags", tags);
}
if (this.category != null) {
outer.put("category", this.category.toString());
}
JSONArray chunks = new JSONArray();
if (this.chunks != null) {
for (byte[] chunk : this.chunks) {
@@ -66,6 +124,38 @@ public class ArbitraryDataTransactionMetadata extends ArbitraryDataMetadata {
return this.chunks;
}
public void setTitle(String title) {
this.title = title;
}
public String getTitle() {
return this.title;
}
public void setDescription(String description) {
this.description = description;
}
public String getDescription() {
return this.description;
}
public void setTags(List<String> tags) {
this.tags = tags;
}
public List<String> getTags() {
return this.tags;
}
public void setCategory(Category category) {
this.category = category;
}
public Category getCategory() {
return this.category;
}
public boolean containsChunk(byte[] chunk) {
for (byte[] c : this.chunks) {
if (Arrays.equals(c, chunk)) {
@@ -75,4 +165,61 @@ public class ArbitraryDataTransactionMetadata extends ArbitraryDataMetadata {
return false;
}
// Static helper methods
public static String limitTitle(String title) {
if (title == null) {
return null;
}
if (title.isEmpty()) {
return null;
}
return title.substring(0, Math.min(title.length(), MAX_TITLE_LENGTH));
}
public static String limitDescription(String description) {
if (description == null) {
return null;
}
if (description.isEmpty()) {
return null;
}
return description.substring(0, Math.min(description.length(), MAX_DESCRIPTION_LENGTH));
}
public static List<String> limitTags(List<String> tags) {
if (tags == null) {
return null;
}
// Ensure tags list is mutable
List<String> mutableTags = new ArrayList<>(tags);
int tagCount = mutableTags.size();
if (tagCount == 0) {
return null;
}
// Remove tags over the limit
// This is cleaner than truncating, which results in malformed tags
// Also remove tags that are empty
Iterator<String> iterator = mutableTags.iterator();
while (iterator.hasNext()) {
String tag = iterator.next();
if (tag == null || tag.length() > MAX_TAG_LENGTH || tag.isEmpty()) {
iterator.remove();
}
}
// Limit the total number of tags (re-check the size, since empty or oversized tags may have been removed above)
if (mutableTags.size() > MAX_TAGS_COUNT) {
mutableTags = mutableTags.subList(0, MAX_TAGS_COUNT);
}
return mutableTags;
}
}
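
buildJson only emits the fields that are actually set, so older metadata files containing nothing but a chunks array remain valid alongside the new ones. For illustration, the resulting shape rebuilt with org.json (values invented; real files carry base58 chunk hashes and respect the limits above):

import org.json.JSONArray;
import org.json.JSONObject;

public class MetadataJsonShape {
    public static void main(String[] args) {
        JSONObject outer = new JSONObject();
        outer.put("title", "My site");                                  // at most 80 chars after limitTitle()
        outer.put("description", "Short description");                  // at most 500 chars after limitDescription()
        outer.put("tags", new JSONArray().put("qortal").put("demo"));   // at most 5 tags, each at most 20 chars
        outer.put("category", "TECHNOLOGY");                            // enum constant name, not the display name
        outer.put("chunks", new JSONArray().put("<base58 chunk hash>").put("<base58 chunk hash>"));
        System.out.println(outer.toString(2));
    }
}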

View File

@@ -0,0 +1,81 @@
package org.qortal.arbitrary.misc;
public enum Category {
ART("Art and Design"),
AUTOMOTIVE("Automotive"),
BEAUTY("Beauty"),
BOOKS("Books and Reference"),
BUSINESS("Business"),
COMMUNICATIONS("Communications"),
CRYPTOCURRENCY("Cryptocurrency and Blockchain"),
CULTURE("Culture"),
DATING("Dating"),
DESIGN("Design"),
ENTERTAINMENT("Entertainment"),
EVENTS("Events"),
FAITH("Faith and Religion"),
FASHION("Fashion"),
FINANCE("Finance"),
FOOD("Food and Drink"),
GAMING("Gaming"),
GEOGRAPHY("Geography"),
HEALTH("Health"),
HISTORY("History"),
HOME("Home"),
KNOWLEDGE("Knowledge Share"),
LANGUAGE("Language"),
LIFESTYLE("Lifestyle"),
MANUFACTURING("Manufacturing"),
MAPS("Maps and Navigation"),
MUSIC("Music"),
NEWS("News"),
OTHER("Other"),
PETS("Pets"),
PHILOSOPHY("Philosophy"),
PHOTOGRAPHY("Photography"),
POLITICS("Politics"),
PRODUCE("Products and Services"),
PRODUCTIVITY("Productivity"),
PSYCHOLOGY("Psychology"),
QORTAL("Qortal"),
SCIENCE("Science"),
SELF_CARE("Self Care"),
SELF_SUFFICIENCY("Self-Sufficiency and Homesteading"),
SHOPPING("Shopping"),
SOCIAL("Social"),
SOFTWARE("Software"),
SPIRITUALITY("Spirituality"),
SPORTS("Sports"),
STORYTELLING("Storytelling"),
TECHNOLOGY("Technology"),
TOOLS("Tools"),
TRAVEL("Travel"),
UNCATEGORIZED("Uncategorized"),
VIDEO("Video"),
WEATHER("Weather");
private final String name;
Category(String name) {
this.name = name;
}
public String getName() {
return this.name;
}
/**
* Same as valueOf() but with fallback to UNCATEGORIZED if there's no match
* @param name the category's enum constant name
* @return a Category (using UNCATEGORIZED if no match found)
*/
public static Category uncategorizedValueOf(String name) {
try {
return Category.valueOf(name);
}
catch (IllegalArgumentException e) {
return Category.UNCATEGORIZED;
}
}
}
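
Because stored metadata may name a category that a given build does not know about, reads go through uncategorizedValueOf rather than valueOf, so unknown names degrade to UNCATEGORIZED instead of throwing. For example:

import org.qortal.arbitrary.misc.Category;

public class CategoryFallback {
    public static void main(String[] args) {
        System.out.println(Category.uncategorizedValueOf("GAMING"));         // GAMING
        System.out.println(Category.uncategorizedValueOf("NOT_A_CATEGORY")); // UNCATEGORIZED (fallback, no exception)
        System.out.println(Category.GAMING.getName());                       // "Gaming" (display name)
    }
}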

View File

@@ -551,7 +551,7 @@ public class QortalATAPI extends API {
* <p>
* Otherwise, assume B is a public key.
*/
private Account getAccountFromB(MachineState state) {
/*package*/ Account getAccountFromB(MachineState state) {
byte[] bBytes = this.getB(state);
if ((bBytes[0] == Crypto.ADDRESS_VERSION || bBytes[0] == Crypto.AT_ADDRESS_VERSION)

View File

@@ -10,9 +10,11 @@ import org.ciyam.at.ExecutionException;
import org.ciyam.at.FunctionData;
import org.ciyam.at.IllegalFunctionCodeException;
import org.ciyam.at.MachineState;
import org.qortal.account.Account;
import org.qortal.crosschain.Bitcoin;
import org.qortal.crypto.Crypto;
import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.DataException;
import org.qortal.settings.Settings;
/**
@@ -160,6 +162,68 @@ public enum QortalFunctionCode {
protected void postCheckExecute(FunctionData functionData, MachineState state, short rawFunctionCode) throws ExecutionException {
convertAddressInB(Crypto.ADDRESS_VERSION, state);
}
},
/**
* Returns account level of account in B.<br>
* <tt>0x0520</tt><br>
* B should contain either Qortal address or public key,<br>
* e.g. as a result of calling function {@link org.ciyam.at.FunctionCode#PUT_ADDRESS_FROM_TX_IN_A_INTO_B}.
* <p></p>
* Returns account level, or -1 if account unknown.
* <p></p>
* @see QortalATAPI#getAccountFromB(MachineState)
*/
GET_ACCOUNT_LEVEL_FROM_ACCOUNT_IN_B(0x0520, 0, true) {
@Override
protected void postCheckExecute(FunctionData functionData, MachineState state, short rawFunctionCode) throws ExecutionException {
QortalATAPI api = (QortalATAPI) state.getAPI();
Account account = api.getAccountFromB(state);
Integer accountLevel = null;
if (account != null) {
try {
accountLevel = account.getLevel();
} catch (DataException e) {
throw new RuntimeException("AT API unable to fetch account level?", e);
}
}
functionData.returnValue = accountLevel != null
? accountLevel.longValue()
: -1;
}
},
/**
* Returns account's minted block count of account in B.<br>
* <tt>0x0521</tt><br>
* B should contain either Qortal address or public key,<br>
* e.g. as a result of calling function {@link org.ciyam.at.FunctionCode#PUT_ADDRESS_FROM_TX_IN_A_INTO_B}.
* <p></p>
* Returns account's minted block count, or -1 if account unknown.
* <p></p>
* @see QortalATAPI#getAccountFromB(MachineState)
*/
GET_BLOCKS_MINTED_FROM_ACCOUNT_IN_B(0x0521, 0, true) {
@Override
protected void postCheckExecute(FunctionData functionData, MachineState state, short rawFunctionCode) throws ExecutionException {
QortalATAPI api = (QortalATAPI) state.getAPI();
Account account = api.getAccountFromB(state);
Integer blocksMinted = null;
if (account != null) {
try {
blocksMinted = account.getBlocksMinted();
} catch (DataException e) {
throw new RuntimeException("AT API unable to fetch account's minted block count?", e);
}
}
functionData.returnValue = blocksMinted != null
? blocksMinted.longValue()
: -1;
}
};
public final short value;

View File

@@ -8,13 +8,7 @@ import java.math.BigInteger;
import java.math.RoundingMode;
import java.text.DecimalFormat;
import java.text.NumberFormat;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.*;
import java.util.stream.Collectors;
import org.apache.logging.log4j.Level;
@@ -28,7 +22,7 @@ import org.qortal.asset.Asset;
import org.qortal.at.AT;
import org.qortal.block.BlockChain.BlockTimingByHeight;
import org.qortal.block.BlockChain.AccountLevelShareBin;
import org.qortal.controller.Controller;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.AccountBalanceData;
import org.qortal.data.account.AccountData;
@@ -320,7 +314,7 @@ public class Block {
byte[] reference = parentBlockData.getSignature();
// Fetch our list of online accounts
List<OnlineAccountData> onlineAccounts = Controller.getInstance().getOnlineAccounts();
List<OnlineAccountData> onlineAccounts = OnlineAccountsManager.getInstance().getOnlineAccounts();
if (onlineAccounts.isEmpty()) {
LOGGER.error("No online accounts - not even our own?");
return null;
@@ -333,6 +327,11 @@ public class Block {
onlineAccountsTimestamp = onlineAccountData.getTimestamp();
}
// Load sorted list of reward share public keys into memory, so that the indexes can be obtained.
// This is up to 100x faster than querying each index separately. For 4150 reward share keys, it
// was taking around 5000ms to query individually, vs 50ms using this approach.
List<byte[]> allRewardSharePublicKeys = repository.getAccountRepository().getRewardSharePublicKeys();
// Map using index into sorted list of reward-shares as key
Map<Integer, OnlineAccountData> indexedOnlineAccounts = new HashMap<>();
for (OnlineAccountData onlineAccountData : onlineAccounts) {
@@ -340,7 +339,7 @@ public class Block {
if (onlineAccountData.getTimestamp() != onlineAccountsTimestamp)
continue;
Integer accountIndex = repository.getAccountRepository().getRewardShareIndex(onlineAccountData.getPublicKey());
Integer accountIndex = getRewardShareIndex(onlineAccountData.getPublicKey(), allRewardSharePublicKeys);
if (accountIndex == null)
// Online account (reward-share) with current timestamp but reward-share cancelled
continue;
@@ -988,10 +987,10 @@ public class Block {
byte[] onlineTimestampBytes = Longs.toByteArray(onlineTimestamp);
// If this block is much older than current online timestamp, then there's no point checking current online accounts
List<OnlineAccountData> currentOnlineAccounts = onlineTimestamp < NTP.getTime() - Controller.ONLINE_TIMESTAMP_MODULUS
List<OnlineAccountData> currentOnlineAccounts = onlineTimestamp < NTP.getTime() - OnlineAccountsManager.ONLINE_TIMESTAMP_MODULUS
? null
: Controller.getInstance().getOnlineAccounts();
List<OnlineAccountData> latestBlocksOnlineAccounts = Controller.getInstance().getLatestBlocksOnlineAccounts();
: OnlineAccountsManager.getInstance().getOnlineAccounts();
List<OnlineAccountData> latestBlocksOnlineAccounts = OnlineAccountsManager.getInstance().getLatestBlocksOnlineAccounts();
// Extract online accounts' timestamp signatures from block data
List<byte[]> onlineAccountsSignatures = BlockTransformer.decodeTimestampSignatures(this.blockData.getOnlineAccountsSignatures());
@@ -1369,7 +1368,7 @@ public class Block {
postBlockTidy();
// Give Controller our cached, valid online accounts data (if any) to help reduce CPU load for next block
Controller.getInstance().pushLatestBlocksOnlineAccounts(this.cachedValidOnlineAccounts);
OnlineAccountsManager.getInstance().pushLatestBlocksOnlineAccounts(this.cachedValidOnlineAccounts);
// Log some debugging info relating to the block weight calculation
this.logDebugInfo();
@@ -1588,7 +1587,7 @@ public class Block {
postBlockTidy();
// Remove any cached, valid online accounts data from Controller
Controller.getInstance().popLatestBlocksOnlineAccounts();
OnlineAccountsManager.getInstance().popLatestBlocksOnlineAccounts();
}
protected void orphanTransactionsFromBlock() throws DataException {
@@ -2029,6 +2028,26 @@ public class Block {
this.repository.getAccountRepository().tidy();
}
// Utils
/**
* Find index of rewardSharePublicKey in list of rewardSharePublicKeys
*
* @param rewardSharePublicKey - the key to query
* @param rewardSharePublicKeys - a sorted list of keys
* @return - the index of the key, or null if not found
*/
private static Integer getRewardShareIndex(byte[] rewardSharePublicKey, List<byte[]> rewardSharePublicKeys) {
int index = 0;
for (byte[] publicKey : rewardSharePublicKeys) {
if (Arrays.equals(rewardSharePublicKey, publicKey)) {
return index;
}
index++;
}
return null;
}
private void logDebugInfo() {
try {
// Avoid calculations if possible. We have to check against INFO here, since Level.isMoreSpecificThan() confusingly uses <= rather than just <
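
getRewardShareIndex replaces per-account repository queries with a linear scan over the pre-loaded key list, which is where the quoted speed-up comes from. Since the list is described as sorted, a binary search using Arrays.compareUnsigned could be a further (unmeasured) refinement; this sketch assumes the repository's sort order is unsigned lexicographic byte order, which this diff does not confirm:

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class RewardShareIndexLookup {
    // Assumes 'sortedKeys' is sorted by unsigned lexicographic byte order
    static Integer indexOf(byte[] key, List<byte[]> sortedKeys) {
        int index = Collections.binarySearch(sortedKeys, key, Arrays::compareUnsigned);
        return index >= 0 ? index : null; // null mirrors the "not found" contract above
    }

    public static void main(String[] args) {
        List<byte[]> keys = List.of(new byte[] {1, 2}, new byte[] {3, 4}, new byte[] {5, 6});
        System.out.println(indexOf(new byte[] {3, 4}, keys)); // 1
        System.out.println(indexOf(new byte[] {9, 9}, keys)); // null
    }
}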

View File

@@ -73,9 +73,13 @@ public class BlockChain {
}
// Custom transaction fees
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
private long nameRegistrationUnitFee;
private long nameRegistrationUnitFeeTimestamp;
/** Unit fees by transaction timestamp */
public static class UnitFeesByTimestamp {
public long timestamp;
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
public long fee;
}
private List<UnitFeesByTimestamp> nameRegistrationUnitFees;
/** Map of which blockchain features are enabled when (height/timestamp) */
@XmlJavaTypeAdapter(StringLongMapXmlAdapter.class)
@@ -306,16 +310,6 @@ public class BlockChain {
return this.maxBlockSize;
}
// Custom transaction fees
public long getNameRegistrationUnitFee() {
return this.nameRegistrationUnitFee;
}
public long getNameRegistrationUnitFeeTimestamp() {
// FUTURE: we could use a separate structure to indicate fee adjustments for different transaction types
return this.nameRegistrationUnitFeeTimestamp;
}
/** Returns true if approval-needing transaction types require a txGroupId other than NO_GROUP. */
public boolean getRequireGroupForApproval() {
return this.requireGroupForApproval;
@@ -430,6 +424,16 @@ public class BlockChain {
throw new IllegalStateException(String.format("No block timing info available for height %d", ourHeight));
}
public long getNameRegistrationUnitFeeAtTimestamp(long ourTimestamp) {
// Scan through for the name registration unit fee in effect at our transaction timestamp
for (int i = 0; i < nameRegistrationUnitFees.size(); ++i)
if (ourTimestamp >= nameRegistrationUnitFees.get(i).timestamp)
return nameRegistrationUnitFees.get(i).fee;
// Default to system-wide unit fee
return this.getUnitFee();
}
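Note that the scan returns the first entry whose timestamp is at or before ourTimestamp, so the lookup table is expected to be ordered newest-first; otherwise an older fee could shadow a later adjustment. A minimal illustration (hypothetical values, not taken from the shipped blockchain config):
// Hypothetical fee table, newest entry first; amounts are illustrative unscaled units
BlockChain.UnitFeesByTimestamp laterFee = new BlockChain.UnitFeesByTimestamp();
laterFee.timestamp = 1_650_000_000_000L;
laterFee.fee = 500_000_000L;
BlockChain.UnitFeesByTimestamp originalFee = new BlockChain.UnitFeesByTimestamp();
originalFee.timestamp = 0L;
originalFee.fee = 10_000_000L;
List<BlockChain.UnitFeesByTimestamp> nameRegistrationUnitFees = List.of(laterFee, originalFee);
// A REGISTER_NAME transaction timestamped at or after 1_650_000_000_000L matches laterFee;
// anything earlier falls through to originalFee (or, in the real method, to getUnitFee()).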
/** Validate blockchain config read from JSON */
private void validateConfig() {
if (this.genesisInfo == null)

View File

@@ -110,7 +110,7 @@ public class BlockMinter extends Thread {
continue;
// No online accounts? (e.g. during startup)
if (Controller.getInstance().getOnlineAccounts().isEmpty())
if (OnlineAccountsManager.getInstance().getOnlineAccounts().isEmpty())
continue;
List<MintingAccountData> mintingAccountsData = repository.getAccountRepository().getMintingAccounts();
@@ -148,7 +148,8 @@ public class BlockMinter extends Thread {
}
}
List<Peer> peers = Network.getInstance().getHandshakedPeers();
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
BlockData lastBlockData = blockRepository.getLastBlock();
// Disregard peers that have "misbehaved" recently
@@ -478,7 +479,7 @@ public class BlockMinter extends Thread {
throw new DataException("Ignoring attempt to mint testing block for non-test chain!");
// Ensure mintingAccount is 'online' so blocks can be minted
Controller.getInstance().ensureTestingAccountsOnline(mintingAndOnlineAccounts);
OnlineAccountsManager.getInstance().ensureTestingAccountsOnline(mintingAndOnlineAccounts);
PrivateKeyAccount mintingAccount = mintingAndOnlineAccounts[0];
@@ -544,7 +545,7 @@ public class BlockMinter extends Thread {
}
NumberFormat formatter = new DecimalFormat("0.###E0");
List<Peer> peers = Network.getInstance().getHandshakedPeers();
List<Peer> peers = Network.getInstance().getImmutableHandshakedPeers();
// Loop through handshaked peers and check for any new block candidates
for (Peer peer : peers) {
if (peer.getCommonBlockData() != null && peer.getCommonBlockData().getCommonBlockSummary() != null) {

View File

@@ -29,10 +29,6 @@ import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
import com.google.common.primitives.Longs;
import org.qortal.account.Account;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.account.PublicKeyAccount;
import org.qortal.api.ApiService;
import org.qortal.api.DomainMapService;
import org.qortal.api.GatewayService;
@@ -43,11 +39,8 @@ import org.qortal.controller.arbitrary.*;
import org.qortal.controller.repository.PruneManager;
import org.qortal.controller.repository.NamesDatabaseIntegrityCheck;
import org.qortal.controller.tradebot.TradeBot;
import org.qortal.data.account.MintingAccountData;
import org.qortal.data.account.RewardShareData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.data.network.PeerChainTipData;
import org.qortal.data.network.PeerData;
import org.qortal.data.transaction.ChatTransactionData;
@@ -65,7 +58,6 @@ import org.qortal.repository.hsqldb.HSQLDBRepositoryFactory;
import org.qortal.settings.Settings;
import org.qortal.transaction.Transaction;
import org.qortal.transaction.Transaction.TransactionType;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.utils.*;
public class Controller extends Thread {
@@ -88,25 +80,6 @@ public class Controller extends Thread {
private static final long NTP_PRE_SYNC_CHECK_PERIOD = 5 * 1000L; // ms
private static final long NTP_POST_SYNC_CHECK_PERIOD = 5 * 60 * 1000L; // ms
private static final long DELETE_EXPIRED_INTERVAL = 5 * 60 * 1000L; // ms
private static final int MAX_INCOMING_TRANSACTIONS = 5000;
/** Minimum time before considering an invalid unconfirmed transaction as "stale" */
public static final long INVALID_TRANSACTION_STALE_TIMEOUT = 30 * 60 * 1000L; // ms
/** Minimum frequency to re-request stale unconfirmed transactions from peers, to recheck validity */
public static final long INVALID_TRANSACTION_RECHECK_INTERVAL = 60 * 60 * 1000L; // ms
/** Minimum frequency to re-request expired unconfirmed transactions from peers, to recheck validity
* This mainly exists to stop expired transactions from bloating the list */
public static final long EXPIRED_TRANSACTION_RECHECK_INTERVAL = 10 * 60 * 1000L; // ms
// To do with online accounts list
private static final long ONLINE_ACCOUNTS_TASKS_INTERVAL = 10 * 1000L; // ms
private static final long ONLINE_ACCOUNTS_BROADCAST_INTERVAL = 1 * 60 * 1000L; // ms
public static final long ONLINE_TIMESTAMP_MODULUS = 5 * 60 * 1000L;
private static final long LAST_SEEN_EXPIRY_PERIOD = (ONLINE_TIMESTAMP_MODULUS * 2) + (1 * 60 * 1000L);
/** How many (latest) blocks' worth of online accounts we cache */
private static final int MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS = 2;
private static final long ONLINE_ACCOUNTS_V2_PEER_VERSION = 0x0300020000L;
private static volatile boolean isStopping = false;
private static BlockMinter blockMinter = null;
@@ -138,25 +111,12 @@ public class Controller extends Thread {
private long ntpCheckTimestamp = startTime; // ms
private long deleteExpiredTimestamp = startTime + DELETE_EXPIRED_INTERVAL; // ms
private long onlineAccountsTasksTimestamp = startTime + ONLINE_ACCOUNTS_TASKS_INTERVAL; // ms
/** Whether we can mint new blocks, as reported by BlockMinter. */
private volatile boolean isMintingPossible = false;
/** List of incoming transactions that are in the import queue */
private List<TransactionData> incomingTransactions = Collections.synchronizedList(new ArrayList<>());
/** List of recent invalid unconfirmed transactions */
private Map<String, Long> invalidUnconfirmedTransactions = Collections.synchronizedMap(new HashMap<>());
/** Lock for only allowing one blockchain-modifying codepath at a time. e.g. synchronization or newly minted block. */
private final ReentrantLock blockchainLock = new ReentrantLock();
/** Cache of current 'online accounts' */
List<OnlineAccountData> onlineAccounts = new ArrayList<>();
/** Cache of latest blocks' online accounts */
Deque<List<OnlineAccountData>> latestBlocksOnlineAccounts = new ArrayDeque<>(MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS);
// Stats
@XmlAccessorType(XmlAccessType.FIELD)
public static class StatsSnapshot {
@@ -209,6 +169,15 @@ public class Controller extends Thread {
}
public GetArbitraryDataFileListMessageStats getArbitraryDataFileListMessageStats = new GetArbitraryDataFileListMessageStats();
public static class GetArbitraryMetadataMessageStats {
public AtomicLong requests = new AtomicLong();
public AtomicLong unknownFiles = new AtomicLong();
public GetArbitraryMetadataMessageStats() {
}
}
public GetArbitraryMetadataMessageStats getArbitraryMetadataMessageStats = new GetArbitraryMetadataMessageStats();
public AtomicLong latestBlocksCacheRefills = new AtomicLong();
public StatsSnapshot() {
@@ -460,6 +429,12 @@ public class Controller extends Thread {
ArbitraryDataStorageManager.getInstance().start();
ArbitraryDataRenderManager.getInstance().start();
LOGGER.info("Starting online accounts manager");
OnlineAccountsManager.getInstance().start();
LOGGER.info("Starting transaction importer");
TransactionImporter.getInstance().start();
// Auto-update service?
if (Settings.getInstance().isAutoUpdateEnabled()) {
LOGGER.info("Starting auto-update");
@@ -557,11 +532,6 @@ public class Controller extends Thread {
}
}
// Process incoming transactions queue
processIncomingTransactionsQueue();
// Clean up invalid incoming transactions list
cleanupInvalidTransactionsList(now);
// Clean up arbitrary data request cache
ArbitraryDataManager.getInstance().cleanupRequestCache(now);
// Clean up arbitrary data queues and lists
@@ -630,12 +600,6 @@ public class Controller extends Thread {
deleteExpiredTimestamp = now + DELETE_EXPIRED_INTERVAL;
deleteExpiredTransactions();
}
// Perform tasks to do with managing online accounts list
if (now >= onlineAccountsTasksTimestamp) {
onlineAccountsTasksTimestamp = now + ONLINE_ACCOUNTS_TASKS_INTERVAL;
performOnlineAccountsTasks();
}
}
} catch (InterruptedException e) {
// Clear interrupted flag so we can shutdown trim threads
@@ -691,6 +655,29 @@ public class Controller extends Thread {
return lastMisbehaved != null && lastMisbehaved > NTP.getTime() - MISBEHAVIOUR_COOLOFF;
};
/** True if peer has unknown height, lower height or same height and same block signature (unless we don't have their block signature). */
public static Predicate<Peer> hasShorterBlockchain = peer -> {
BlockData highestBlockData = getInstance().getChainTip();
int ourHeight = highestBlockData.getHeight();
final PeerChainTipData peerChainTipData = peer.getChainTipData();
// Ensure we have chain tip data for this peer
if (peerChainTipData == null)
return true;
// Remove if peer is at a lower height than us
Integer peerHeight = peerChainTipData.getLastHeight();
if (peerHeight == null || peerHeight < ourHeight)
return true;
// Don't remove if peer is on a greater height chain than us, or if we don't have their block signature
if (peerHeight > ourHeight || peerChainTipData.getLastBlockSignature() == null)
return false;
// Remove if signatures match
return Arrays.equals(peerChainTipData.getLastBlockSignature(), highestBlockData.getSignature());
};
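Restated, the predicate keeps a peer only when it might have something we lack: a strictly greater height, or an equal height whose tip signature we cannot compare. A summary of the outcomes as implemented above, with the usage pattern Synchronizer applies further down in this changeset:
// hasShorterBlockchain(peer) == true means the peer is removed from the candidate list:
//   no chain-tip data                      -> true
//   height unknown or lower than ours      -> true
//   height greater than ours               -> false
//   equal height, peer signature unknown   -> false (cannot rule them out)
//   equal height, same tip signature       -> true  (nothing new to offer)
//   equal height, different tip signature  -> false
List<Peer> candidates = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
candidates.removeIf(Controller.hasShorterBlockchain);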
public static final Predicate<Peer> hasNoRecentBlock = peer -> {
final Long minLatestBlockTimestamp = getMinimumLatestBlockTimestamp();
final PeerChainTipData peerChainTipData = peer.getChainTipData();
@@ -711,7 +698,7 @@ public class Controller extends Thread {
public static final Predicate<Peer> hasInferiorChainTip = peer -> {
final PeerChainTipData peerChainTipData = peer.getChainTipData();
final List<ByteArray> inferiorChainTips = Synchronizer.getInstance().inferiorChainSignatures;
return peerChainTipData == null || peerChainTipData.getLastBlockSignature() == null || inferiorChainTips.contains(new ByteArray(peerChainTipData.getLastBlockSignature()));
return peerChainTipData == null || peerChainTipData.getLastBlockSignature() == null || inferiorChainTips.contains(ByteArray.wrap(peerChainTipData.getLastBlockSignature()));
};
public static final Predicate<Peer> hasOldVersion = peer -> {
@@ -753,7 +740,7 @@ public class Controller extends Thread {
return;
}
final int numberOfPeers = Network.getInstance().getHandshakedPeers().size();
final int numberOfPeers = Network.getInstance().getImmutableHandshakedPeers().size();
final int height = getChainHeight();
@@ -762,6 +749,10 @@ public class Controller extends Thread {
String actionText;
// Use a more tolerant latest block timestamp in the isUpToDate() calls below to reduce misleading statuses.
// Any block in the last 30 minutes is considered "up to date" for the purposes of displaying statuses.
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
synchronized (Synchronizer.getInstance().syncLock) {
if (this.isMintingPossible) {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_ENABLED");
@@ -771,10 +762,14 @@ public class Controller extends Thread {
actionText = Translator.INSTANCE.translate("SysTray", "CONNECTING");
SysTray.getInstance().setTrayIcon(3);
}
else if (!this.isUpToDate()) {
else if (!this.isUpToDate(minLatestBlockTimestamp) && Synchronizer.getInstance().isSynchronizing()) {
actionText = String.format("%s - %d%%", Translator.INSTANCE.translate("SysTray", "SYNCHRONIZING_BLOCKCHAIN"), Synchronizer.getInstance().getSyncPercent());
SysTray.getInstance().setTrayIcon(3);
}
else if (!this.isUpToDate(minLatestBlockTimestamp)) {
actionText = String.format("%s", Translator.INSTANCE.translate("SysTray", "SYNCHRONIZING_BLOCKCHAIN"));
SysTray.getInstance().setTrayIcon(3);
}
else {
actionText = Translator.INSTANCE.translate("SysTray", "MINTING_DISABLED");
SysTray.getInstance().setTrayIcon(4);
@@ -824,120 +819,6 @@ public class Controller extends Thread {
}
}
// Incoming transactions queue
private boolean incomingTransactionQueueContains(byte[] signature) {
synchronized (incomingTransactions) {
return incomingTransactions.stream().anyMatch(t -> Arrays.equals(t.getSignature(), signature));
}
}
private void removeIncomingTransaction(byte[] signature) {
incomingTransactions.removeIf(t -> Arrays.equals(t.getSignature(), signature));
}
private void processIncomingTransactionsQueue() {
if (this.incomingTransactions.size() == 0) {
// Don't bother locking if there are no new transactions to process
return;
}
if (Synchronizer.getInstance().isSyncRequested() || Synchronizer.getInstance().isSynchronizing()) {
// Prioritize syncing, and don't attempt to lock
return;
}
try {
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
if (!blockchainLock.tryLock(2, TimeUnit.SECONDS)) {
LOGGER.trace(() -> String.format("Too busy to process incoming transactions queue"));
return;
}
} catch (InterruptedException e) {
LOGGER.info("Interrupted when trying to acquire blockchain lock");
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
LOGGER.debug("Processing incoming transactions queue (size {})...", this.incomingTransactions.size());
// Take a copy of incomingTransactions so we can release the lock
List<TransactionData> incomingTransactionsCopy = new ArrayList<>(this.incomingTransactions);
// Iterate through incoming transactions list
Iterator iterator = incomingTransactionsCopy.iterator();
while (iterator.hasNext()) {
if (isStopping) {
return;
}
if (Synchronizer.getInstance().isSyncRequestPending()) {
LOGGER.debug("Breaking out of transaction processing loop with {} remaining, because a sync request is pending", incomingTransactionsCopy.size());
return;
}
TransactionData transactionData = (TransactionData) iterator.next();
Transaction transaction = Transaction.fromData(repository, transactionData);
// Check signature
if (!transaction.isSignatureValid()) {
LOGGER.trace(() -> String.format("Ignoring %s transaction %s with invalid signature", transactionData.getType().name(), Base58.encode(transactionData.getSignature())));
removeIncomingTransaction(transactionData.getSignature());
continue;
}
ValidationResult validationResult = transaction.importAsUnconfirmed();
if (validationResult == ValidationResult.TRANSACTION_ALREADY_EXISTS) {
LOGGER.trace(() -> String.format("Ignoring existing transaction %s", Base58.encode(transactionData.getSignature())));
removeIncomingTransaction(transactionData.getSignature());
continue;
}
if (validationResult == ValidationResult.NO_BLOCKCHAIN_LOCK) {
LOGGER.trace(() -> String.format("Couldn't lock blockchain to import unconfirmed transaction", Base58.encode(transactionData.getSignature())));
removeIncomingTransaction(transactionData.getSignature());
continue;
}
if (validationResult != ValidationResult.OK) {
final String signature58 = Base58.encode(transactionData.getSignature());
LOGGER.trace(() -> String.format("Ignoring invalid (%s) %s transaction %s", validationResult.name(), transactionData.getType().name(), signature58));
Long now = NTP.getTime();
if (now != null && now - transactionData.getTimestamp() > INVALID_TRANSACTION_STALE_TIMEOUT) {
Long expiryLength = INVALID_TRANSACTION_RECHECK_INTERVAL;
if (validationResult == ValidationResult.TIMESTAMP_TOO_OLD) {
// Use shorter recheck interval for expired transactions
expiryLength = EXPIRED_TRANSACTION_RECHECK_INTERVAL;
}
Long expiry = now + expiryLength;
LOGGER.debug("Adding stale invalid transaction {} to invalidUnconfirmedTransactions...", signature58);
// Invalid, unconfirmed transaction has become stale - add to invalidUnconfirmedTransactions so that we don't keep requesting it
invalidUnconfirmedTransactions.put(signature58, expiry);
}
removeIncomingTransaction(transactionData.getSignature());
continue;
}
LOGGER.debug(() -> String.format("Imported %s transaction %s", transactionData.getType().name(), Base58.encode(transactionData.getSignature())));
removeIncomingTransaction(transactionData.getSignature());
}
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while processing incoming transactions", e));
} finally {
LOGGER.debug("Finished processing incoming transactions queue");
blockchainLock.unlock();
}
}
private void cleanupInvalidTransactionsList(Long now) {
if (now == null) {
return;
}
// Periodically remove invalid unconfirmed transactions from the list, so that they can be fetched again
invalidUnconfirmedTransactions.entrySet().removeIf(entry -> entry.getValue() == null || entry.getValue() < now);
}
// Shutdown
@@ -966,6 +847,12 @@ public class Controller extends Thread {
ArbitraryDataStorageManager.getInstance().shutdown();
ArbitraryDataRenderManager.getInstance().shutdown();
LOGGER.info("Shutting down online accounts manager");
OnlineAccountsManager.getInstance().shutdown();
LOGGER.info("Shutting down transaction importer");
TransactionImporter.getInstance().shutdown();
if (blockMinter != null) {
LOGGER.info("Shutting down block minter");
blockMinter.shutdown();
@@ -1248,10 +1135,6 @@ public class Controller extends Thread {
onNetworkGetBlockMessage(peer, message);
break;
case TRANSACTION:
onNetworkTransactionMessage(peer, message);
break;
case GET_BLOCK_SUMMARIES:
onNetworkGetBlockSummariesMessage(peer, message);
break;
@@ -1265,31 +1148,35 @@ public class Controller extends Thread {
break;
case GET_TRANSACTION:
onNetworkGetTransactionMessage(peer, message);
TransactionImporter.getInstance().onNetworkGetTransactionMessage(peer, message);
break;
case TRANSACTION:
TransactionImporter.getInstance().onNetworkTransactionMessage(peer, message);
break;
case GET_UNCONFIRMED_TRANSACTIONS:
onNetworkGetUnconfirmedTransactionsMessage(peer, message);
TransactionImporter.getInstance().onNetworkGetUnconfirmedTransactionsMessage(peer, message);
break;
case TRANSACTION_SIGNATURES:
onNetworkTransactionSignaturesMessage(peer, message);
TransactionImporter.getInstance().onNetworkTransactionSignaturesMessage(peer, message);
break;
case GET_ONLINE_ACCOUNTS:
onNetworkGetOnlineAccountsMessage(peer, message);
OnlineAccountsManager.getInstance().onNetworkGetOnlineAccountsMessage(peer, message);
break;
case ONLINE_ACCOUNTS:
onNetworkOnlineAccountsMessage(peer, message);
OnlineAccountsManager.getInstance().onNetworkOnlineAccountsMessage(peer, message);
break;
case GET_ONLINE_ACCOUNTS_V2:
onNetworkGetOnlineAccountsV2Message(peer, message);
OnlineAccountsManager.getInstance().onNetworkGetOnlineAccountsV2Message(peer, message);
break;
case ONLINE_ACCOUNTS_V2:
onNetworkOnlineAccountsV2Message(peer, message);
OnlineAccountsManager.getInstance().onNetworkOnlineAccountsV2Message(peer, message);
break;
case GET_ARBITRARY_DATA:
@@ -1309,7 +1196,23 @@ public class Controller extends Thread {
break;
case ARBITRARY_SIGNATURES:
ArbitraryDataManager.getInstance().onNetworkArbitrarySignaturesMessage(peer, message);
// Not currently supported
break;
case GET_ARBITRARY_METADATA:
ArbitraryMetadataManager.getInstance().onNetworkGetArbitraryMetadataMessage(peer, message);
break;
case ARBITRARY_METADATA:
ArbitraryMetadataManager.getInstance().onNetworkArbitraryMetadataMessage(peer, message);
break;
case GET_TRADE_PRESENCES:
TradeBot.getInstance().onGetTradePresencesMessage(peer, message);
break;
case TRADE_PRESENCES:
TradeBot.getInstance().onTradePresencesMessage(peer, message);
break;
default:
@@ -1323,7 +1226,7 @@ public class Controller extends Thread {
byte[] signature = getBlockMessage.getSignature();
this.stats.getBlockMessageStats.requests.incrementAndGet();
ByteArray signatureAsByteArray = new ByteArray(signature);
ByteArray signatureAsByteArray = ByteArray.wrap(signature);
CachedBlockMessage cachedBlockMessage = this.blockMessageCache.get(signatureAsByteArray);
int blockCacheSize = Settings.getInstance().getBlockCacheSize();
@@ -1403,23 +1306,13 @@ public class Controller extends Thread {
if (getChainHeight() - blockData.getHeight() <= blockCacheSize) {
this.stats.getBlockMessageStats.cacheFills.incrementAndGet();
this.blockMessageCache.put(new ByteArray(blockData.getSignature()), blockMessage);
this.blockMessageCache.put(ByteArray.wrap(blockData.getSignature()), blockMessage);
}
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while send block %s to peer %s", Base58.encode(signature), peer), e);
}
}
private void onNetworkTransactionMessage(Peer peer, Message message) {
TransactionMessage transactionMessage = (TransactionMessage) message;
TransactionData transactionData = transactionMessage.getTransactionData();
if (this.incomingTransactions.size() < MAX_INCOMING_TRANSACTIONS) {
if (!this.incomingTransactions.contains(transactionData)) {
this.incomingTransactions.add(transactionData);
}
}
}
private void onNetworkGetBlockSummariesMessage(Peer peer, Message message) {
GetBlockSummariesMessage getBlockSummariesMessage = (GetBlockSummariesMessage) message;
final byte[] parentSignature = getBlockSummariesMessage.getParentSignature();
@@ -1571,449 +1464,17 @@ public class Controller extends Thread {
Synchronizer.getInstance().requestSync();
}
private void onNetworkGetTransactionMessage(Peer peer, Message message) {
GetTransactionMessage getTransactionMessage = (GetTransactionMessage) message;
byte[] signature = getTransactionMessage.getSignature();
try (final Repository repository = RepositoryManager.getRepository()) {
TransactionData transactionData = repository.getTransactionRepository().fromSignature(signature);
if (transactionData == null) {
LOGGER.debug(() -> String.format("Ignoring GET_TRANSACTION request from peer %s for unknown transaction %s", peer, Base58.encode(signature)));
// Send no response at all???
return;
}
Message transactionMessage = new TransactionMessage(transactionData);
transactionMessage.setId(message.getId());
if (!peer.sendMessage(transactionMessage))
peer.disconnect("failed to send transaction");
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while send transaction %s to peer %s", Base58.encode(signature), peer), e);
}
}
private void onNetworkGetUnconfirmedTransactionsMessage(Peer peer, Message message) {
try (final Repository repository = RepositoryManager.getRepository()) {
List<byte[]> signatures = Collections.emptyList();
// If we're NOT up-to-date then don't send out unconfirmed transactions
// as it's possible they are already included in a later block that we don't have.
if (isUpToDate())
signatures = repository.getTransactionRepository().getUnconfirmedTransactionSignatures();
Message transactionSignaturesMessage = new TransactionSignaturesMessage(signatures);
if (!peer.sendMessage(transactionSignaturesMessage))
peer.disconnect("failed to send unconfirmed transaction signatures");
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while sending unconfirmed transaction signatures to peer %s", peer), e);
}
}
private void onNetworkTransactionSignaturesMessage(Peer peer, Message message) {
TransactionSignaturesMessage transactionSignaturesMessage = (TransactionSignaturesMessage) message;
List<byte[]> signatures = transactionSignaturesMessage.getSignatures();
try (final Repository repository = RepositoryManager.getRepository()) {
for (byte[] signature : signatures) {
String signature58 = Base58.encode(signature);
if (invalidUnconfirmedTransactions.containsKey(signature58)) {
// Previously invalid transaction - don't keep requesting it
// It will be periodically removed from invalidUnconfirmedTransactions to allow for rechecks
continue;
}
// Ignore if this transaction is in the queue
if (incomingTransactionQueueContains(signature)) {
LOGGER.trace(() -> String.format("Ignoring existing queued transaction %s from peer %s", Base58.encode(signature), peer));
continue;
}
// Do we have it already? (Before requesting transaction data itself)
if (repository.getTransactionRepository().exists(signature)) {
LOGGER.trace(() -> String.format("Ignoring existing transaction %s from peer %s", Base58.encode(signature), peer));
continue;
}
// Check isInterrupted() here and exit fast
if (Thread.currentThread().isInterrupted())
return;
// Fetch actual transaction data from peer
Message getTransactionMessage = new GetTransactionMessage(signature);
if (!peer.sendMessage(getTransactionMessage)) {
peer.disconnect("failed to request transaction");
return;
}
}
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while processing unconfirmed transactions from peer %s", peer), e);
}
}
private void onNetworkGetOnlineAccountsMessage(Peer peer, Message message) {
GetOnlineAccountsMessage getOnlineAccountsMessage = (GetOnlineAccountsMessage) message;
List<OnlineAccountData> excludeAccounts = getOnlineAccountsMessage.getOnlineAccounts();
// Send online accounts info, excluding entries with matching timestamp & public key from excludeAccounts
List<OnlineAccountData> accountsToSend;
synchronized (this.onlineAccounts) {
accountsToSend = new ArrayList<>(this.onlineAccounts);
}
Iterator<OnlineAccountData> iterator = accountsToSend.iterator();
SEND_ITERATOR:
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
for (int i = 0; i < excludeAccounts.size(); ++i) {
OnlineAccountData excludeAccountData = excludeAccounts.get(i);
if (onlineAccountData.getTimestamp() == excludeAccountData.getTimestamp() && Arrays.equals(onlineAccountData.getPublicKey(), excludeAccountData.getPublicKey())) {
iterator.remove();
continue SEND_ITERATOR;
}
}
}
Message onlineAccountsMessage = new OnlineAccountsMessage(accountsToSend);
peer.sendMessage(onlineAccountsMessage);
LOGGER.trace(() -> String.format("Sent %d of our %d online accounts to %s", accountsToSend.size(), this.onlineAccounts.size(), peer));
}
private void onNetworkOnlineAccountsMessage(Peer peer, Message message) {
OnlineAccountsMessage onlineAccountsMessage = (OnlineAccountsMessage) message;
List<OnlineAccountData> peersOnlineAccounts = onlineAccountsMessage.getOnlineAccounts();
LOGGER.trace(() -> String.format("Received %d online accounts from %s", peersOnlineAccounts.size(), peer));
try (final Repository repository = RepositoryManager.getRepository()) {
for (OnlineAccountData onlineAccountData : peersOnlineAccounts)
this.verifyAndAddAccount(repository, onlineAccountData);
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while verifying online accounts from peer %s", peer), e);
}
}
private void onNetworkGetOnlineAccountsV2Message(Peer peer, Message message) {
GetOnlineAccountsV2Message getOnlineAccountsMessage = (GetOnlineAccountsV2Message) message;
List<OnlineAccountData> excludeAccounts = getOnlineAccountsMessage.getOnlineAccounts();
// Send online accounts info, excluding entries with matching timestamp & public key from excludeAccounts
List<OnlineAccountData> accountsToSend;
synchronized (this.onlineAccounts) {
accountsToSend = new ArrayList<>(this.onlineAccounts);
}
Iterator<OnlineAccountData> iterator = accountsToSend.iterator();
SEND_ITERATOR:
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
for (int i = 0; i < excludeAccounts.size(); ++i) {
OnlineAccountData excludeAccountData = excludeAccounts.get(i);
if (onlineAccountData.getTimestamp() == excludeAccountData.getTimestamp() && Arrays.equals(onlineAccountData.getPublicKey(), excludeAccountData.getPublicKey())) {
iterator.remove();
continue SEND_ITERATOR;
}
}
}
Message onlineAccountsMessage = new OnlineAccountsV2Message(accountsToSend);
peer.sendMessage(onlineAccountsMessage);
LOGGER.trace(() -> String.format("Sent %d of our %d online accounts to %s", accountsToSend.size(), this.onlineAccounts.size(), peer));
}
private void onNetworkOnlineAccountsV2Message(Peer peer, Message message) {
OnlineAccountsV2Message onlineAccountsMessage = (OnlineAccountsV2Message) message;
List<OnlineAccountData> peersOnlineAccounts = onlineAccountsMessage.getOnlineAccounts();
LOGGER.trace(() -> String.format("Received %d online accounts from %s", peersOnlineAccounts.size(), peer));
try (final Repository repository = RepositoryManager.getRepository()) {
for (OnlineAccountData onlineAccountData : peersOnlineAccounts)
this.verifyAndAddAccount(repository, onlineAccountData);
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while verifying online accounts from peer %s", peer), e);
}
}
// Utilities
private void verifyAndAddAccount(Repository repository, OnlineAccountData onlineAccountData) throws DataException {
final Long now = NTP.getTime();
if (now == null)
return;
PublicKeyAccount otherAccount = new PublicKeyAccount(repository, onlineAccountData.getPublicKey());
// Check timestamp is 'recent' here
if (Math.abs(onlineAccountData.getTimestamp() - now) > ONLINE_TIMESTAMP_MODULUS * 2) {
LOGGER.trace(() -> String.format("Rejecting online account %s with out of range timestamp %d", otherAccount.getAddress(), onlineAccountData.getTimestamp()));
return;
}
// Verify
byte[] data = Longs.toByteArray(onlineAccountData.getTimestamp());
if (!otherAccount.verify(onlineAccountData.getSignature(), data)) {
LOGGER.trace(() -> String.format("Rejecting invalid online account %s", otherAccount.getAddress()));
return;
}
// Qortal: check online account is actually reward-share
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(onlineAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't even exist - probably not a good sign
LOGGER.trace(() -> String.format("Rejecting unknown online reward-share public key %s", Base58.encode(onlineAccountData.getPublicKey())));
return;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
LOGGER.trace(() -> String.format("Rejecting online reward-share with non-minting account %s", mintingAccount.getAddress()));
return;
}
synchronized (this.onlineAccounts) {
OnlineAccountData existingAccountData = this.onlineAccounts.stream().filter(account -> Arrays.equals(account.getPublicKey(), onlineAccountData.getPublicKey())).findFirst().orElse(null);
if (existingAccountData != null) {
if (existingAccountData.getTimestamp() < onlineAccountData.getTimestamp()) {
this.onlineAccounts.remove(existingAccountData);
LOGGER.trace(() -> String.format("Updated online account %s with timestamp %d (was %d)", otherAccount.getAddress(), onlineAccountData.getTimestamp(), existingAccountData.getTimestamp()));
} else {
LOGGER.trace(() -> String.format("Not updating existing online account %s", otherAccount.getAddress()));
return;
}
} else {
LOGGER.trace(() -> String.format("Added online account %s with timestamp %d", otherAccount.getAddress(), onlineAccountData.getTimestamp()));
}
this.onlineAccounts.add(onlineAccountData);
}
}
public void ensureTestingAccountsOnline(PrivateKeyAccount... onlineAccounts) {
if (!BlockChain.getInstance().isTestChain()) {
LOGGER.warn("Ignoring attempt to ensure test account is online for non-test chain!");
return;
}
final Long now = NTP.getTime();
if (now == null)
return;
final long onlineAccountsTimestamp = Controller.toOnlineAccountTimestamp(now);
byte[] timestampBytes = Longs.toByteArray(onlineAccountsTimestamp);
synchronized (this.onlineAccounts) {
this.onlineAccounts.clear();
for (PrivateKeyAccount onlineAccount : onlineAccounts) {
// Check mintingAccount is actually reward-share?
byte[] signature = onlineAccount.sign(timestampBytes);
byte[] publicKey = onlineAccount.getPublicKey();
OnlineAccountData ourOnlineAccountData = new OnlineAccountData(onlineAccountsTimestamp, signature, publicKey);
this.onlineAccounts.add(ourOnlineAccountData);
}
}
}
private void performOnlineAccountsTasks() {
final Long now = NTP.getTime();
if (now == null)
return;
// Expire old entries
final long cutoffThreshold = now - LAST_SEEN_EXPIRY_PERIOD;
synchronized (this.onlineAccounts) {
Iterator<OnlineAccountData> iterator = this.onlineAccounts.iterator();
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
if (onlineAccountData.getTimestamp() < cutoffThreshold) {
iterator.remove();
LOGGER.trace(() -> {
PublicKeyAccount otherAccount = new PublicKeyAccount(null, onlineAccountData.getPublicKey());
return String.format("Removed expired online account %s with timestamp %d", otherAccount.getAddress(), onlineAccountData.getTimestamp());
});
}
}
}
// Request data from other peers?
if ((this.onlineAccountsTasksTimestamp % ONLINE_ACCOUNTS_BROADCAST_INTERVAL) < ONLINE_ACCOUNTS_TASKS_INTERVAL) {
List<OnlineAccountData> safeOnlineAccounts;
synchronized (this.onlineAccounts) {
safeOnlineAccounts = new ArrayList<>(this.onlineAccounts);
}
Message messageV1 = new GetOnlineAccountsMessage(safeOnlineAccounts);
Message messageV2 = new GetOnlineAccountsV2Message(safeOnlineAccounts);
Network.getInstance().broadcast(peer ->
peer.getPeersVersion() >= ONLINE_ACCOUNTS_V2_PEER_VERSION ? messageV2 : messageV1
);
}
// Refresh our online accounts signatures?
sendOurOnlineAccountsInfo();
}
private void sendOurOnlineAccountsInfo() {
final Long now = NTP.getTime();
if (now != null) {
List<MintingAccountData> mintingAccounts;
try (final Repository repository = RepositoryManager.getRepository()) {
mintingAccounts = repository.getAccountRepository().getMintingAccounts();
// We have no accounts, but don't reset timestamp
if (mintingAccounts.isEmpty())
return;
// Only reward-share accounts allowed
Iterator<MintingAccountData> iterator = mintingAccounts.iterator();
int i = 0;
while (iterator.hasNext()) {
MintingAccountData mintingAccountData = iterator.next();
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(mintingAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't even exist - probably not a good sign
iterator.remove();
continue;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
iterator.remove();
continue;
}
if (++i > 2) {
iterator.remove();
continue;
}
}
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue trying to fetch minting accounts: %s", e.getMessage()));
return;
}
// 'current' timestamp
final long onlineAccountsTimestamp = Controller.toOnlineAccountTimestamp(now);
boolean hasInfoChanged = false;
byte[] timestampBytes = Longs.toByteArray(onlineAccountsTimestamp);
List<OnlineAccountData> ourOnlineAccounts = new ArrayList<>();
MINTING_ACCOUNTS:
for (MintingAccountData mintingAccountData : mintingAccounts) {
PrivateKeyAccount mintingAccount = new PrivateKeyAccount(null, mintingAccountData.getPrivateKey());
byte[] signature = mintingAccount.sign(timestampBytes);
byte[] publicKey = mintingAccount.getPublicKey();
// Our account is online
OnlineAccountData ourOnlineAccountData = new OnlineAccountData(onlineAccountsTimestamp, signature, publicKey);
synchronized (this.onlineAccounts) {
Iterator<OnlineAccountData> iterator = this.onlineAccounts.iterator();
while (iterator.hasNext()) {
OnlineAccountData existingOnlineAccountData = iterator.next();
if (Arrays.equals(existingOnlineAccountData.getPublicKey(), ourOnlineAccountData.getPublicKey())) {
// If our online account is already present, with same timestamp, then move on to next mintingAccount
if (existingOnlineAccountData.getTimestamp() == onlineAccountsTimestamp)
continue MINTING_ACCOUNTS;
// If our online account is already present, but with older timestamp, then remove it
iterator.remove();
break;
}
}
this.onlineAccounts.add(ourOnlineAccountData);
}
LOGGER.trace(() -> String.format("Added our online account %s with timestamp %d", mintingAccount.getAddress(), onlineAccountsTimestamp));
ourOnlineAccounts.add(ourOnlineAccountData);
hasInfoChanged = true;
}
if (!hasInfoChanged)
return;
Message messageV1 = new OnlineAccountsMessage(ourOnlineAccounts);
Message messageV2 = new OnlineAccountsV2Message(ourOnlineAccounts);
Network.getInstance().broadcast(peer ->
peer.getPeersVersion() >= ONLINE_ACCOUNTS_V2_PEER_VERSION ? messageV2 : messageV1
);
LOGGER.trace(() -> String.format("Broadcasted %d online account%s with timestamp %d", ourOnlineAccounts.size(), (ourOnlineAccounts.size() != 1 ? "s" : ""), onlineAccountsTimestamp));
}
}
public static long toOnlineAccountTimestamp(long timestamp) {
return (timestamp / ONLINE_TIMESTAMP_MODULUS) * ONLINE_TIMESTAMP_MODULUS;
}
/** Returns list of online accounts with timestamp recent enough to be considered currently online. */
public List<OnlineAccountData> getOnlineAccounts() {
final long onlineTimestamp = Controller.toOnlineAccountTimestamp(NTP.getTime());
synchronized (this.onlineAccounts) {
return this.onlineAccounts.stream().filter(account -> account.getTimestamp() == onlineTimestamp).collect(Collectors.toList());
}
}
/** Returns cached, unmodifiable list of latest block's online accounts. */
public List<OnlineAccountData> getLatestBlocksOnlineAccounts() {
synchronized (this.latestBlocksOnlineAccounts) {
return this.latestBlocksOnlineAccounts.peekFirst();
}
}
/** Caches list of latest block's online accounts. Typically called by Block.process() */
public void pushLatestBlocksOnlineAccounts(List<OnlineAccountData> latestBlocksOnlineAccounts) {
synchronized (this.latestBlocksOnlineAccounts) {
if (this.latestBlocksOnlineAccounts.size() == MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS)
this.latestBlocksOnlineAccounts.pollLast();
this.latestBlocksOnlineAccounts.addFirst(latestBlocksOnlineAccounts == null
? Collections.emptyList()
: Collections.unmodifiableList(latestBlocksOnlineAccounts));
}
}
/** Reverts list of latest block's online accounts. Typically called by Block.orphan() */
public void popLatestBlocksOnlineAccounts() {
synchronized (this.latestBlocksOnlineAccounts) {
this.latestBlocksOnlineAccounts.pollFirst();
}
}
/** Returns a list of peers that are not misbehaving, and have a recent block. */
public List<Peer> getRecentBehavingPeers() {
final Long minLatestBlockTimestamp = getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null)
return null;
List<Peer> peers = Network.getInstance().getHandshakedPeers();
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
// Filter out unsuitable peers
Iterator<Peer> iterator = peers.iterator();
@@ -2062,7 +1523,8 @@ public class Controller extends Thread {
if (latestBlockData == null || latestBlockData.getTimestamp() < minLatestBlockTimestamp)
return false;
List<Peer> peers = Network.getInstance().getHandshakedPeers();
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
if (peers == null)
return false;
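The repeated "Needs a mutable copy of the unmodifiableList" comments mark the practical consequence of switching to getImmutableHandshakedPeers(): the returned snapshot cannot be filtered in place. A short sketch of the distinction (method names as in this diff; the exception is the standard java.util behaviour for unmodifiable lists):
// Read-only use: iterate the immutable snapshot directly
for (Peer peer : Network.getInstance().getImmutableHandshakedPeers()) {
    // ... inspect peer ...
}
// Filtering use: copy first, then call removeIf on the copy we own
List<Peer> mutablePeers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
mutablePeers.removeIf(Controller.hasMisbehaved);
// Calling removeIf directly on the immutable snapshot would throw UnsupportedOperationException.
The same copy-then-filter pattern recurs in BlockMinter above and in Synchronizer below.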

View File

@@ -0,0 +1,524 @@
package org.qortal.controller;
import com.google.common.primitives.Longs;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.account.Account;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.account.PublicKeyAccount;
import org.qortal.block.BlockChain;
import org.qortal.data.account.MintingAccountData;
import org.qortal.data.account.RewardShareData;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.network.message.*;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import java.util.*;
import java.util.stream.Collectors;
public class OnlineAccountsManager extends Thread {
private class OurOnlineAccountsThread extends Thread {
public void run() {
try {
while (!isStopping) {
Thread.sleep(10000L);
// Refresh our online accounts signatures?
sendOurOnlineAccountsInfo();
}
} catch (InterruptedException e) {
// Fall through to exit thread
}
}
}
private static final Logger LOGGER = LogManager.getLogger(OnlineAccountsManager.class);
private static OnlineAccountsManager instance;
private volatile boolean isStopping = false;
// To do with online accounts list
private static final long ONLINE_ACCOUNTS_TASKS_INTERVAL = 10 * 1000L; // ms
private static final long ONLINE_ACCOUNTS_BROADCAST_INTERVAL = 1 * 60 * 1000L; // ms
public static final long ONLINE_TIMESTAMP_MODULUS = 5 * 60 * 1000L;
private static final long LAST_SEEN_EXPIRY_PERIOD = (ONLINE_TIMESTAMP_MODULUS * 2) + (1 * 60 * 1000L);
/** How many (latest) blocks' worth of online accounts we cache */
private static final int MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS = 2;
private static final long ONLINE_ACCOUNTS_V2_PEER_VERSION = 0x0300020000L;
private long onlineAccountsTasksTimestamp = Controller.startTime + ONLINE_ACCOUNTS_TASKS_INTERVAL; // ms
private final List<OnlineAccountData> onlineAccountsImportQueue = Collections.synchronizedList(new ArrayList<>());
/** Cache of current 'online accounts' */
List<OnlineAccountData> onlineAccounts = new ArrayList<>();
/** Cache of latest blocks' online accounts */
Deque<List<OnlineAccountData>> latestBlocksOnlineAccounts = new ArrayDeque<>(MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS);
public OnlineAccountsManager() {
}
public static synchronized OnlineAccountsManager getInstance() {
if (instance == null) {
instance = new OnlineAccountsManager();
}
return instance;
}
public void run() {
// Start separate thread to prepare our online accounts
// This could be converted to a thread pool later if more concurrency is needed
OurOnlineAccountsThread ourOnlineAccountsThread = new OurOnlineAccountsThread();
ourOnlineAccountsThread.start();
try {
while (!Controller.isStopping()) {
Thread.sleep(100L);
final Long now = NTP.getTime();
if (now == null) {
continue;
}
// Perform tasks to do with managing online accounts list
if (now >= onlineAccountsTasksTimestamp) {
onlineAccountsTasksTimestamp = now + ONLINE_ACCOUNTS_TASKS_INTERVAL;
performOnlineAccountsTasks();
}
// Process queued online account verifications
this.processOnlineAccountsImportQueue();
}
} catch (InterruptedException e) {
// Fall through to exit thread
}
ourOnlineAccountsThread.interrupt();
}
public void shutdown() {
isStopping = true;
this.interrupt();
}
// Online accounts import queue
private void processOnlineAccountsImportQueue() {
if (this.onlineAccountsImportQueue.isEmpty()) {
// Nothing to do
return;
}
LOGGER.debug("Processing online accounts import queue (size: {})", this.onlineAccountsImportQueue.size());
try (final Repository repository = RepositoryManager.getRepository()) {
List<OnlineAccountData> onlineAccountDataCopy = new ArrayList<>(this.onlineAccountsImportQueue);
for (OnlineAccountData onlineAccountData : onlineAccountDataCopy) {
if (isStopping) {
return;
}
this.verifyAndAddAccount(repository, onlineAccountData);
// Remove from queue
onlineAccountsImportQueue.remove(onlineAccountData);
}
LOGGER.debug("Finished processing online accounts import queue");
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while verifying online accounts"), e);
}
}
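The import queue decouples network threads from signature verification: message handlers only enqueue (see onNetworkOnlineAccountsV2Message further down), while this manager's own run() loop drains and verifies. A condensed view of the flow, using only members visible in this new file:
// Network thread (onNetworkOnlineAccountsV2Message): de-duplicate, then enqueue
if (!onlineAccounts.contains(onlineAccountData) && !onlineAccountsImportQueue.contains(onlineAccountData))
    onlineAccountsImportQueue.add(onlineAccountData);
// OnlineAccountsManager thread (run() loop, roughly every 100 ms):
this.processOnlineAccountsImportQueue();   // verifyAndAddAccount() per queued entry, then remove from queue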
// Utilities
private void verifyAndAddAccount(Repository repository, OnlineAccountData onlineAccountData) throws DataException {
final Long now = NTP.getTime();
if (now == null)
return;
PublicKeyAccount otherAccount = new PublicKeyAccount(repository, onlineAccountData.getPublicKey());
// Check timestamp is 'recent' here
if (Math.abs(onlineAccountData.getTimestamp() - now) > ONLINE_TIMESTAMP_MODULUS * 2) {
LOGGER.trace(() -> String.format("Rejecting online account %s with out of range timestamp %d", otherAccount.getAddress(), onlineAccountData.getTimestamp()));
return;
}
// Verify
byte[] data = Longs.toByteArray(onlineAccountData.getTimestamp());
if (!otherAccount.verify(onlineAccountData.getSignature(), data)) {
LOGGER.trace(() -> String.format("Rejecting invalid online account %s", otherAccount.getAddress()));
return;
}
// Qortal: check online account is actually reward-share
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(onlineAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't even exist - probably not a good sign
LOGGER.trace(() -> String.format("Rejecting unknown online reward-share public key %s", Base58.encode(onlineAccountData.getPublicKey())));
return;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
LOGGER.trace(() -> String.format("Rejecting online reward-share with non-minting account %s", mintingAccount.getAddress()));
return;
}
synchronized (this.onlineAccounts) {
OnlineAccountData existingAccountData = this.onlineAccounts.stream().filter(account -> Arrays.equals(account.getPublicKey(), onlineAccountData.getPublicKey())).findFirst().orElse(null);
if (existingAccountData != null) {
if (existingAccountData.getTimestamp() < onlineAccountData.getTimestamp()) {
this.onlineAccounts.remove(existingAccountData);
LOGGER.trace(() -> String.format("Updated online account %s with timestamp %d (was %d)", otherAccount.getAddress(), onlineAccountData.getTimestamp(), existingAccountData.getTimestamp()));
} else {
LOGGER.trace(() -> String.format("Not updating existing online account %s", otherAccount.getAddress()));
return;
}
} else {
LOGGER.trace(() -> String.format("Added online account %s with timestamp %d", otherAccount.getAddress(), onlineAccountData.getTimestamp()));
}
this.onlineAccounts.add(onlineAccountData);
}
}
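An online-account entry is simply a signature over the 8-byte encoding of the rounded timestamp, verified against the reward-share public key. A minimal round-trip sketch using only the calls visible above (rewardShareAccount is a hypothetical PrivateKeyAccount for an existing reward-share):
long onlineAccountsTimestamp = OnlineAccountsManager.toOnlineAccountTimestamp(NTP.getTime());
byte[] timestampBytes = Longs.toByteArray(onlineAccountsTimestamp);
byte[] signature = rewardShareAccount.sign(timestampBytes);
OnlineAccountData entry = new OnlineAccountData(onlineAccountsTimestamp, signature, rewardShareAccount.getPublicKey());
// verifyAndAddAccount() recomputes timestampBytes and checks:
boolean valid = new PublicKeyAccount(null, entry.getPublicKey()).verify(entry.getSignature(), timestampBytes);
// valid == true, provided the reward-share exists and its minting account can still mint.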
public void ensureTestingAccountsOnline(PrivateKeyAccount... onlineAccounts) {
if (!BlockChain.getInstance().isTestChain()) {
LOGGER.warn("Ignoring attempt to ensure test account is online for non-test chain!");
return;
}
final Long now = NTP.getTime();
if (now == null)
return;
final long onlineAccountsTimestamp = toOnlineAccountTimestamp(now);
byte[] timestampBytes = Longs.toByteArray(onlineAccountsTimestamp);
synchronized (this.onlineAccounts) {
this.onlineAccounts.clear();
for (PrivateKeyAccount onlineAccount : onlineAccounts) {
// Check mintingAccount is actually reward-share?
byte[] signature = onlineAccount.sign(timestampBytes);
byte[] publicKey = onlineAccount.getPublicKey();
OnlineAccountData ourOnlineAccountData = new OnlineAccountData(onlineAccountsTimestamp, signature, publicKey);
this.onlineAccounts.add(ourOnlineAccountData);
}
}
}
private void performOnlineAccountsTasks() {
final Long now = NTP.getTime();
if (now == null)
return;
// Expire old entries
final long cutoffThreshold = now - LAST_SEEN_EXPIRY_PERIOD;
synchronized (this.onlineAccounts) {
Iterator<OnlineAccountData> iterator = this.onlineAccounts.iterator();
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
if (onlineAccountData.getTimestamp() < cutoffThreshold) {
iterator.remove();
LOGGER.trace(() -> {
PublicKeyAccount otherAccount = new PublicKeyAccount(null, onlineAccountData.getPublicKey());
return String.format("Removed expired online account %s with timestamp %d", otherAccount.getAddress(), onlineAccountData.getTimestamp());
});
}
}
}
// Request data from other peers?
if ((this.onlineAccountsTasksTimestamp % ONLINE_ACCOUNTS_BROADCAST_INTERVAL) < ONLINE_ACCOUNTS_TASKS_INTERVAL) {
List<OnlineAccountData> safeOnlineAccounts;
synchronized (this.onlineAccounts) {
safeOnlineAccounts = new ArrayList<>(this.onlineAccounts);
}
Message messageV1 = new GetOnlineAccountsMessage(safeOnlineAccounts);
Message messageV2 = new GetOnlineAccountsV2Message(safeOnlineAccounts);
Network.getInstance().broadcast(peer ->
peer.getPeersVersion() >= ONLINE_ACCOUNTS_V2_PEER_VERSION ? messageV2 : messageV1
);
}
}
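The broadcast chooses the V2 message format for peers at or above ONLINE_ACCOUNTS_V2_PEER_VERSION = 0x0300020000L. Assuming the usual packing of major/minor/patch into 16-bit fields of the version long (not shown in this diff), that constant corresponds to peer version 3.2.0; a hypothetical decode:
long v = 0x0300020000L;                   // ONLINE_ACCOUNTS_V2_PEER_VERSION
int major = (int) (v >>> 32);             // 3
int minor = (int) ((v >>> 16) & 0xFFFF);  // 2
int patch = (int) (v & 0xFFFF);           // 0
// => peers reporting >= 3.2.0 receive GetOnlineAccountsV2Message / OnlineAccountsV2Message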
private void sendOurOnlineAccountsInfo() {
final Long now = NTP.getTime();
if (now == null) {
return;
}
List<MintingAccountData> mintingAccounts;
try (final Repository repository = RepositoryManager.getRepository()) {
mintingAccounts = repository.getAccountRepository().getMintingAccounts();
// We have no accounts, but don't reset timestamp
if (mintingAccounts.isEmpty())
return;
// Only reward-share accounts allowed
Iterator<MintingAccountData> iterator = mintingAccounts.iterator();
int i = 0;
while (iterator.hasNext()) {
MintingAccountData mintingAccountData = iterator.next();
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(mintingAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't even exist - probably not a good sign
iterator.remove();
continue;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
iterator.remove();
continue;
}
if (++i > 2) {
iterator.remove();
continue;
}
}
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue trying to fetch minting accounts: %s", e.getMessage()));
return;
}
// 'current' timestamp
final long onlineAccountsTimestamp = toOnlineAccountTimestamp(now);
boolean hasInfoChanged = false;
byte[] timestampBytes = Longs.toByteArray(onlineAccountsTimestamp);
List<OnlineAccountData> ourOnlineAccounts = new ArrayList<>();
MINTING_ACCOUNTS:
for (MintingAccountData mintingAccountData : mintingAccounts) {
PrivateKeyAccount mintingAccount = new PrivateKeyAccount(null, mintingAccountData.getPrivateKey());
byte[] signature = mintingAccount.sign(timestampBytes);
byte[] publicKey = mintingAccount.getPublicKey();
// Our account is online
OnlineAccountData ourOnlineAccountData = new OnlineAccountData(onlineAccountsTimestamp, signature, publicKey);
synchronized (this.onlineAccounts) {
Iterator<OnlineAccountData> iterator = this.onlineAccounts.iterator();
while (iterator.hasNext()) {
OnlineAccountData existingOnlineAccountData = iterator.next();
if (Arrays.equals(existingOnlineAccountData.getPublicKey(), ourOnlineAccountData.getPublicKey())) {
// If our online account is already present, with same timestamp, then move on to next mintingAccount
if (existingOnlineAccountData.getTimestamp() == onlineAccountsTimestamp)
continue MINTING_ACCOUNTS;
// If our online account is already present, but with older timestamp, then remove it
iterator.remove();
break;
}
}
this.onlineAccounts.add(ourOnlineAccountData);
}
LOGGER.trace(() -> String.format("Added our online account %s with timestamp %d", mintingAccount.getAddress(), onlineAccountsTimestamp));
ourOnlineAccounts.add(ourOnlineAccountData);
hasInfoChanged = true;
}
if (!hasInfoChanged)
return;
Message messageV1 = new OnlineAccountsMessage(ourOnlineAccounts);
Message messageV2 = new OnlineAccountsV2Message(ourOnlineAccounts);
Network.getInstance().broadcast(peer ->
peer.getPeersVersion() >= ONLINE_ACCOUNTS_V2_PEER_VERSION ? messageV2 : messageV1
);
LOGGER.trace(() -> String.format("Broadcasted %d online account%s with timestamp %d", ourOnlineAccounts.size(), (ourOnlineAccounts.size() != 1 ? "s" : ""), onlineAccountsTimestamp));
}
public static long toOnlineAccountTimestamp(long timestamp) {
return (timestamp / ONLINE_TIMESTAMP_MODULUS) * ONLINE_TIMESTAMP_MODULUS;
}
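Worked example: with ONLINE_TIMESTAMP_MODULUS = 5 * 60 * 1000 ms, timestamps are floored to the nearest 5-minute boundary, so all nodes agree on the same "current" online-accounts timestamp within each window:
long rounded = OnlineAccountsManager.toOnlineAccountTimestamp(1_650_000_123_456L);
// rounded == 1_650_000_000_000L  (the 123,456 ms into the window is discarded)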
/** Returns list of online accounts with timestamp recent enough to be considered currently online. */
public List<OnlineAccountData> getOnlineAccounts() {
final long onlineTimestamp = toOnlineAccountTimestamp(NTP.getTime());
synchronized (this.onlineAccounts) {
return this.onlineAccounts.stream().filter(account -> account.getTimestamp() == onlineTimestamp).collect(Collectors.toList());
}
}
/** Returns cached, unmodifiable list of latest block's online accounts. */
public List<OnlineAccountData> getLatestBlocksOnlineAccounts() {
synchronized (this.latestBlocksOnlineAccounts) {
return this.latestBlocksOnlineAccounts.peekFirst();
}
}
/** Caches list of latest block's online accounts. Typically called by Block.process() */
public void pushLatestBlocksOnlineAccounts(List<OnlineAccountData> latestBlocksOnlineAccounts) {
synchronized (this.latestBlocksOnlineAccounts) {
if (this.latestBlocksOnlineAccounts.size() == MAX_BLOCKS_CACHED_ONLINE_ACCOUNTS)
this.latestBlocksOnlineAccounts.pollLast();
this.latestBlocksOnlineAccounts.addFirst(latestBlocksOnlineAccounts == null
? Collections.emptyList()
: Collections.unmodifiableList(latestBlocksOnlineAccounts));
}
}
/** Reverts list of latest block's online accounts. Typically called by Block.orphan() */
public void popLatestBlocksOnlineAccounts() {
synchronized (this.latestBlocksOnlineAccounts) {
this.latestBlocksOnlineAccounts.pollFirst();
}
}
// Network handlers
public void onNetworkGetOnlineAccountsMessage(Peer peer, Message message) {
GetOnlineAccountsMessage getOnlineAccountsMessage = (GetOnlineAccountsMessage) message;
List<OnlineAccountData> excludeAccounts = getOnlineAccountsMessage.getOnlineAccounts();
// Send online accounts info, excluding entries with matching timestamp & public key from excludeAccounts
List<OnlineAccountData> accountsToSend;
synchronized (this.onlineAccounts) {
accountsToSend = new ArrayList<>(this.onlineAccounts);
}
Iterator<OnlineAccountData> iterator = accountsToSend.iterator();
SEND_ITERATOR:
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
for (int i = 0; i < excludeAccounts.size(); ++i) {
OnlineAccountData excludeAccountData = excludeAccounts.get(i);
if (onlineAccountData.getTimestamp() == excludeAccountData.getTimestamp() && Arrays.equals(onlineAccountData.getPublicKey(), excludeAccountData.getPublicKey())) {
iterator.remove();
continue SEND_ITERATOR;
}
}
}
Message onlineAccountsMessage = new OnlineAccountsMessage(accountsToSend);
peer.sendMessage(onlineAccountsMessage);
LOGGER.trace(() -> String.format("Sent %d of our %d online accounts to %s", accountsToSend.size(), this.onlineAccounts.size(), peer));
}
public void onNetworkOnlineAccountsMessage(Peer peer, Message message) {
OnlineAccountsMessage onlineAccountsMessage = (OnlineAccountsMessage) message;
List<OnlineAccountData> peersOnlineAccounts = onlineAccountsMessage.getOnlineAccounts();
LOGGER.trace(() -> String.format("Received %d online accounts from %s", peersOnlineAccounts.size(), peer));
try (final Repository repository = RepositoryManager.getRepository()) {
for (OnlineAccountData onlineAccountData : peersOnlineAccounts)
this.verifyAndAddAccount(repository, onlineAccountData);
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while verifying online accounts from peer %s", peer), e);
}
}
public void onNetworkGetOnlineAccountsV2Message(Peer peer, Message message) {
GetOnlineAccountsV2Message getOnlineAccountsMessage = (GetOnlineAccountsV2Message) message;
List<OnlineAccountData> excludeAccounts = getOnlineAccountsMessage.getOnlineAccounts();
// Send online accounts info, excluding entries with matching timestamp & public key from excludeAccounts
List<OnlineAccountData> accountsToSend;
synchronized (this.onlineAccounts) {
accountsToSend = new ArrayList<>(this.onlineAccounts);
}
Iterator<OnlineAccountData> iterator = accountsToSend.iterator();
SEND_ITERATOR:
while (iterator.hasNext()) {
OnlineAccountData onlineAccountData = iterator.next();
for (int i = 0; i < excludeAccounts.size(); ++i) {
OnlineAccountData excludeAccountData = excludeAccounts.get(i);
if (onlineAccountData.getTimestamp() == excludeAccountData.getTimestamp() && Arrays.equals(onlineAccountData.getPublicKey(), excludeAccountData.getPublicKey())) {
iterator.remove();
continue SEND_ITERATOR;
}
}
}
Message onlineAccountsMessage = new OnlineAccountsV2Message(accountsToSend);
peer.sendMessage(onlineAccountsMessage);
LOGGER.trace(() -> String.format("Sent %d of our %d online accounts to %s", accountsToSend.size(), this.onlineAccounts.size(), peer));
}
public void onNetworkOnlineAccountsV2Message(Peer peer, Message message) {
OnlineAccountsV2Message onlineAccountsMessage = (OnlineAccountsV2Message) message;
List<OnlineAccountData> peersOnlineAccounts = onlineAccountsMessage.getOnlineAccounts();
LOGGER.debug(String.format("Received %d online accounts from %s", peersOnlineAccounts.size(), peer));
int importCount = 0;
// Add any online accounts to the queue that aren't already present
for (OnlineAccountData onlineAccountData : peersOnlineAccounts) {
// Do we already know about this online account data?
if (onlineAccounts.contains(onlineAccountData)) {
continue;
}
// Is it already in the import queue?
if (onlineAccountsImportQueue.contains(onlineAccountData)) {
continue;
}
onlineAccountsImportQueue.add(onlineAccountData);
importCount++;
}
LOGGER.debug(String.format("Added %d online accounts to queue", importCount));
}
}
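The two GET_ONLINE_ACCOUNTS handlers above drop any locally held entry whose timestamp and public key both match an entry in the requester's exclude list, so only accounts the peer does not already know about are sent back. A minimal sketch of that exclusion test, assuming only the OnlineAccountData accessors already used in the handlers (this snippet is illustrative and not part of the diff):
// Same exclusion semantics as the labelled loop above, expressed with removeIf.
// Assumes accountsToSend is a mutable copy and excludeAccounts is the peer-supplied list.
accountsToSend.removeIf(ours -> excludeAccounts.stream()
        .anyMatch(theirs -> theirs.getTimestamp() == ours.getTimestamp()
                && Arrays.equals(theirs.getPublicKey(), ours.getPublicKey())));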

View File

@@ -95,7 +95,7 @@ public class Synchronizer extends Thread {
private static Synchronizer instance;
public enum SynchronizationResult {
OK, NOTHING_TO_DO, GENESIS_ONLY, NO_COMMON_BLOCK, TOO_DIVERGENT, NO_REPLY, INFERIOR_CHAIN, INVALID_DATA, NO_BLOCKCHAIN_LOCK, REPOSITORY_ISSUE, SHUTTING_DOWN;
OK, NOTHING_TO_DO, GENESIS_ONLY, NO_COMMON_BLOCK, TOO_DIVERGENT, NO_REPLY, INFERIOR_CHAIN, INVALID_DATA, NO_BLOCKCHAIN_LOCK, REPOSITORY_ISSUE, SHUTTING_DOWN, CHAIN_TIP_TOO_OLD;
}
public static class NewChainTipEvent implements Event {
@@ -173,6 +173,12 @@ public class Synchronizer extends Thread {
public Integer getSyncPercent() {
synchronized (this.syncLock) {
// Report as 100% synced if the latest block is within the last 30 mins
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
if (Controller.getInstance().isUpToDate(minLatestBlockTimestamp)) {
return 100;
}
return this.isSynchronizing ? this.syncPercent : null;
}
}
@@ -195,7 +201,8 @@ public class Synchronizer extends Thread {
if (this.isSynchronizing)
return true;
List<Peer> peers = Network.getInstance().getHandshakedPeers();
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
// Disregard peers that have "misbehaved" recently
peers.removeIf(Controller.hasMisbehaved);
@@ -211,7 +218,8 @@ public class Synchronizer extends Thread {
checkRecoveryModeForPeers(peers);
if (recoveryMode) {
peers = Network.getInstance().getHandshakedPeers();
// Needs a mutable copy of the unmodifiableList
peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
peers.removeIf(Controller.hasOnlyGenesisBlock);
peers.removeIf(Controller.hasMisbehaved);
peers.removeIf(Controller.hasOldVersion);
@@ -227,6 +235,9 @@ public class Synchronizer extends Thread {
// Disregard peers that are on the same block as last sync attempt and we didn't like their chain
peers.removeIf(Controller.hasInferiorChainTip);
// Remove peers with unknown height, lower height or same height and same block signature (unless we don't have their block signature)
peers.removeIf(Controller.hasShorterBlockchain);
final int peersBeforeComparison = peers.size();
// Request recent block summaries from the remaining peers, and locate our common block with each
@@ -238,6 +249,12 @@ public class Synchronizer extends Thread {
// We may have added more inferior chain tips when comparing peers, so remove any peers that are currently on those chains
peers.removeIf(Controller.hasInferiorChainTip);
// Remove any peers that are no longer on a recent block since the last check
// Except for times when we're in recovery mode, in which case we need to keep them
if (!recoveryMode) {
peers.removeIf(Controller.hasNoRecentBlock);
}
final int peersRemoved = peersBeforeComparison - peers.size();
if (peersRemoved > 0 && peers.size() > 0)
LOGGER.debug(String.format("Ignoring %d peers on inferior chains. Peers remaining: %d", peersRemoved, peers.size()));
@@ -300,7 +317,7 @@ public class Synchronizer extends Thread {
case INFERIOR_CHAIN: {
// Update our list of inferior chain tips
ByteArray inferiorChainSignature = new ByteArray(peer.getChainTipData().getLastBlockSignature());
ByteArray inferiorChainSignature = ByteArray.wrap(peer.getChainTipData().getLastBlockSignature());
if (!inferiorChainSignatures.contains(inferiorChainSignature))
inferiorChainSignatures.add(inferiorChainSignature);
@@ -316,6 +333,7 @@ public class Synchronizer extends Thread {
case NO_REPLY:
case NO_BLOCKCHAIN_LOCK:
case REPOSITORY_ISSUE:
case CHAIN_TIP_TOO_OLD:
// These are minor failure results so fine to try again
LOGGER.debug(() -> String.format("Failed to synchronize with peer %s (%s)", peer, syncResult.name()));
break;
@@ -328,7 +346,7 @@ public class Synchronizer extends Thread {
// fall-through...
case NOTHING_TO_DO: {
// Update our list of inferior chain tips
ByteArray inferiorChainSignature = new ByteArray(peer.getChainTipData().getLastBlockSignature());
ByteArray inferiorChainSignature = ByteArray.wrap(peer.getChainTipData().getLastBlockSignature());
if (!inferiorChainSignatures.contains(inferiorChainSignature))
inferiorChainSignatures.add(inferiorChainSignature);
@@ -370,7 +388,7 @@ public class Synchronizer extends Thread {
}
private boolean checkRecoveryModeForPeers(List<Peer> qualifiedPeers) {
List<Peer> handshakedPeers = Network.getInstance().getHandshakedPeers();
List<Peer> handshakedPeers = Network.getInstance().getImmutableHandshakedPeers();
if (handshakedPeers.size() > 0) {
// There is at least one handshaked peer
@@ -404,7 +422,7 @@ public class Synchronizer extends Thread {
public void addInferiorChainSignature(byte[] inferiorSignature) {
// Update our list of inferior chain tips
ByteArray inferiorChainSignature = new ByteArray(inferiorSignature);
ByteArray inferiorChainSignature = ByteArray.wrap(inferiorSignature);
if (!inferiorChainSignatures.contains(inferiorChainSignature))
inferiorChainSignatures.add(inferiorChainSignature);
}
@@ -555,7 +573,7 @@ public class Synchronizer extends Thread {
// If our latest block is very old, it's best that we don't try and determine the best peers to sync to.
// This is because it can involve very large chain comparisons, which is too intensive.
// In reality, most forking problems occur near the chain tips, so we will reserve this functionality for those situations.
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null)
return peers;
@@ -711,6 +729,7 @@ public class Synchronizer extends Thread {
LOGGER.debug(String.format("Listing peers with common block %.8s...", Base58.encode(commonBlockSummary.getSignature())));
for (Peer peer : peersSharingCommonBlock) {
final int peerHeight = peer.getChainTipData().getLastHeight();
final Long peerLastBlockTimestamp = peer.getChainTipData().getLastBlockTimestamp();
final int peerAdditionalBlocksAfterCommonBlock = peerHeight - commonBlockSummary.getHeight();
final CommonBlockData peerCommonBlockData = peer.getCommonBlockData();
@@ -721,6 +740,14 @@ public class Synchronizer extends Thread {
continue;
}
// If peer is out of date (since our last check), we should exclude it from this round
minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (peerLastBlockTimestamp == null || peerLastBlockTimestamp < minLatestBlockTimestamp) {
LOGGER.debug(String.format("Peer %s is out of date - removing it from this round", peer));
peers.remove(peer);
continue;
}
final List<BlockSummaryData> peerBlockSummariesAfterCommonBlock = peerCommonBlockData.getBlockSummariesAfterCommonBlock();
populateBlockSummariesMinterLevels(repository, peerBlockSummariesAfterCommonBlock);
@@ -1283,6 +1310,16 @@ public class Synchronizer extends Thread {
return SynchronizationResult.INVALID_DATA;
}
// Final check to make sure the peer isn't out of date (except for when we're in recovery mode)
if (!recoveryMode && peer.getChainTipData() != null) {
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
final Long peerLastBlockTimestamp = peer.getChainTipData().getLastBlockTimestamp();
if (peerLastBlockTimestamp == null || peerLastBlockTimestamp < minLatestBlockTimestamp) {
LOGGER.info(String.format("Peer %s is out of date, so abandoning sync attempt", peer));
return SynchronizationResult.CHAIN_TIP_TOO_OLD;
}
}
byte[] nextPeerSignature = peerBlockSignatures.get(0);
int nextHeight = height + 1;
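Both new checks above compare a peer's reported chain-tip timestamp against Controller.getMinimumLatestBlockTimestamp() and treat a null or older timestamp as "out of date", either removing the peer from the round or abandoning the sync with CHAIN_TIP_TOO_OLD. A sketch of that test as a standalone helper, using the accessor names shown in the diff; the helper itself is an assumption, not code in the repository:
// Returns true when the peer's reported chain tip is missing or older than the
// minimum acceptable latest-block timestamp, mirroring the checks in the diff.
private static boolean isPeerChainTipTooOld(Peer peer) {
    Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
    if (minLatestBlockTimestamp == null || peer.getChainTipData() == null)
        return false; // not enough information to exclude the peer
    Long peerLastBlockTimestamp = peer.getChainTipData().getLastBlockTimestamp();
    return peerLastBlockTimestamp == null || peerLastBlockTimestamp < minLatestBlockTimestamp;
}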

View File

@@ -0,0 +1,354 @@
package org.qortal.controller;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.data.transaction.TransactionData;
import org.qortal.network.Peer;
import org.qortal.network.message.GetTransactionMessage;
import org.qortal.network.message.Message;
import org.qortal.network.message.TransactionMessage;
import org.qortal.network.message.TransactionSignaturesMessage;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.transaction.Transaction;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import java.util.*;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
public class TransactionImporter extends Thread {
private static final Logger LOGGER = LogManager.getLogger(TransactionImporter.class);
private static TransactionImporter instance;
private volatile boolean isStopping = false;
private static final int MAX_INCOMING_TRANSACTIONS = 5000;
/** Minimum time before considering an invalid unconfirmed transaction as "stale" */
public static final long INVALID_TRANSACTION_STALE_TIMEOUT = 30 * 60 * 1000L; // ms
/** Minimum frequency to re-request stale unconfirmed transactions from peers, to recheck validity */
public static final long INVALID_TRANSACTION_RECHECK_INTERVAL = 60 * 60 * 1000L; // ms
/** Minimum frequency to re-request expired unconfirmed transactions from peers, to recheck validity
* This mainly exists to stop expired transactions from bloating the list */
public static final long EXPIRED_TRANSACTION_RECHECK_INTERVAL = 10 * 60 * 1000L; // ms
/** Map of incoming transactions that are in the import queue. Key is transaction data, value is whether signature has been validated. */
private final Map<TransactionData, Boolean> incomingTransactions = Collections.synchronizedMap(new HashMap<>());
/** Map of recent invalid unconfirmed transactions. Key is base58 transaction signature, value is do-not-request expiry timestamp. */
private final Map<String, Long> invalidUnconfirmedTransactions = Collections.synchronizedMap(new HashMap<>());
public static synchronized TransactionImporter getInstance() {
if (instance == null) {
instance = new TransactionImporter();
}
return instance;
}
@Override
public void run() {
try {
while (!Controller.isStopping()) {
Thread.sleep(1000L);
// Process incoming transactions queue
processIncomingTransactionsQueue();
// Clean up invalid incoming transactions list
cleanupInvalidTransactionsList(NTP.getTime());
}
} catch (InterruptedException e) {
// Fall through to exit thread
}
}
public void shutdown() {
isStopping = true;
this.interrupt();
}
// Incoming transactions queue
private boolean incomingTransactionQueueContains(byte[] signature) {
synchronized (incomingTransactions) {
return incomingTransactions.keySet().stream().anyMatch(t -> Arrays.equals(t.getSignature(), signature));
}
}
private void removeIncomingTransaction(byte[] signature) {
incomingTransactions.keySet().removeIf(t -> Arrays.equals(t.getSignature(), signature));
}
private void processIncomingTransactionsQueue() {
if (this.incomingTransactions.isEmpty()) {
// Nothing to do?
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
// Take a snapshot of incomingTransactions, so we don't need to lock it while processing
Map<TransactionData, Boolean> incomingTransactionsCopy = Map.copyOf(this.incomingTransactions);
int unvalidatedCount = Collections.frequency(incomingTransactionsCopy.values(), Boolean.FALSE);
int validatedCount = 0;
if (unvalidatedCount > 0) {
LOGGER.debug("Validating signatures in incoming transactions queue (size {})...", unvalidatedCount);
}
List<Transaction> sigValidTransactions = new ArrayList<>();
// Signature validation round - does not require blockchain lock
for (Map.Entry<TransactionData, Boolean> transactionEntry : incomingTransactionsCopy.entrySet()) {
// Quick exit?
if (isStopping) {
return;
}
TransactionData transactionData = transactionEntry.getKey();
Transaction transaction = Transaction.fromData(repository, transactionData);
// Only validate signature if we haven't already done so
Boolean isSigValid = transactionEntry.getValue();
if (!Boolean.TRUE.equals(isSigValid)) {
if (!transaction.isSignatureValid()) {
String signature58 = Base58.encode(transactionData.getSignature());
LOGGER.trace("Ignoring {} transaction {} with invalid signature", transactionData.getType().name(), signature58);
removeIncomingTransaction(transactionData.getSignature());
// Also add to invalidUnconfirmedTransactions map
Long now = NTP.getTime();
if (now != null) {
Long expiry = now + INVALID_TRANSACTION_RECHECK_INTERVAL;
LOGGER.trace("Adding stale invalid transaction {} to invalidUnconfirmedTransactions...", signature58);
// Add to invalidUnconfirmedTransactions so that we don't keep requesting it
invalidUnconfirmedTransactions.put(signature58, expiry);
}
continue;
}
else {
// Count the number that were validated in this round, for logging purposes
validatedCount++;
}
// Mark signature as valid if transaction still exists in import queue
incomingTransactions.computeIfPresent(transactionData, (k, v) -> Boolean.TRUE);
} else {
LOGGER.trace(() -> String.format("Transaction %s known to have valid signature", Base58.encode(transactionData.getSignature())));
}
// Signature valid - add to shortlist
sigValidTransactions.add(transaction);
}
if (unvalidatedCount > 0) {
LOGGER.debug("Finished validating signatures in incoming transactions queue (valid this round: {}, total pending import: {})...", validatedCount, sigValidTransactions.size());
}
if (sigValidTransactions.isEmpty()) {
// Don't bother locking if there are no new transactions to process
return;
}
if (Synchronizer.getInstance().isSyncRequested() || Synchronizer.getInstance().isSynchronizing()) {
// Prioritize syncing, and don't attempt to lock
// Signature validity is retained in the incomingTransactions map, to avoid the above work being wasted
return;
}
try {
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
if (!blockchainLock.tryLock(2, TimeUnit.SECONDS)) {
// Signature validity is retained in the incomingTransactions map, to avoid the above work being wasted
LOGGER.debug("Too busy to process incoming transactions queue");
return;
}
} catch (InterruptedException e) {
LOGGER.debug("Interrupted when trying to acquire blockchain lock");
return;
}
LOGGER.debug("Processing incoming transactions queue (size {})...", sigValidTransactions.size());
// Import transactions with valid signatures
try {
for (int i = 0; i < sigValidTransactions.size(); ++i) {
if (isStopping) {
return;
}
if (Synchronizer.getInstance().isSyncRequestPending()) {
LOGGER.debug("Breaking out of transaction processing with {} remaining, because a sync request is pending", sigValidTransactions.size() - i);
return;
}
Transaction transaction = sigValidTransactions.get(i);
TransactionData transactionData = transaction.getTransactionData();
Transaction.ValidationResult validationResult = transaction.importAsUnconfirmed();
switch (validationResult) {
case TRANSACTION_ALREADY_EXISTS: {
LOGGER.trace(() -> String.format("Ignoring existing transaction %s", Base58.encode(transactionData.getSignature())));
break;
}
case NO_BLOCKCHAIN_LOCK: {
// Is this even possible considering we acquired blockchain lock above?
LOGGER.trace(() -> String.format("Couldn't lock blockchain to import unconfirmed transaction %s", Base58.encode(transactionData.getSignature())));
break;
}
case OK: {
LOGGER.debug(() -> String.format("Imported %s transaction %s", transactionData.getType().name(), Base58.encode(transactionData.getSignature())));
break;
}
// All other invalid cases:
default: {
final String signature58 = Base58.encode(transactionData.getSignature());
LOGGER.trace(() -> String.format("Ignoring invalid (%s) %s transaction %s", validationResult.name(), transactionData.getType().name(), signature58));
Long now = NTP.getTime();
if (now != null && now - transactionData.getTimestamp() > INVALID_TRANSACTION_STALE_TIMEOUT) {
Long expiryLength = INVALID_TRANSACTION_RECHECK_INTERVAL;
if (validationResult == Transaction.ValidationResult.TIMESTAMP_TOO_OLD) {
// Use shorter recheck interval for expired transactions
expiryLength = EXPIRED_TRANSACTION_RECHECK_INTERVAL;
}
Long expiry = now + expiryLength;
LOGGER.trace("Adding stale invalid transaction {} to invalidUnconfirmedTransactions...", signature58);
// Invalid, unconfirmed transaction has become stale - add to invalidUnconfirmedTransactions so that we don't keep requesting it
invalidUnconfirmedTransactions.put(signature58, expiry);
}
}
}
// Transaction has been processed, even if only to reject it
removeIncomingTransaction(transactionData.getSignature());
}
} finally {
LOGGER.debug("Finished processing incoming transactions queue");
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
blockchainLock.unlock();
}
} catch (DataException e) {
LOGGER.error("Repository issue while processing incoming transactions", e);
}
}
private void cleanupInvalidTransactionsList(Long now) {
if (now == null) {
return;
}
// Periodically remove invalid unconfirmed transactions from the list, so that they can be fetched again
invalidUnconfirmedTransactions.entrySet().removeIf(entry -> entry.getValue() == null || entry.getValue() < now);
}
// Network handlers
public void onNetworkTransactionMessage(Peer peer, Message message) {
TransactionMessage transactionMessage = (TransactionMessage) message;
TransactionData transactionData = transactionMessage.getTransactionData();
if (this.incomingTransactions.size() < MAX_INCOMING_TRANSACTIONS) {
synchronized (this.incomingTransactions) {
if (!incomingTransactionQueueContains(transactionData.getSignature())) {
this.incomingTransactions.put(transactionData, Boolean.FALSE);
}
}
}
}
public void onNetworkGetTransactionMessage(Peer peer, Message message) {
GetTransactionMessage getTransactionMessage = (GetTransactionMessage) message;
byte[] signature = getTransactionMessage.getSignature();
try (final Repository repository = RepositoryManager.getRepository()) {
TransactionData transactionData = repository.getTransactionRepository().fromSignature(signature);
if (transactionData == null) {
LOGGER.debug(() -> String.format("Ignoring GET_TRANSACTION request from peer %s for unknown transaction %s", peer, Base58.encode(signature)));
// Send no response at all???
return;
}
Message transactionMessage = new TransactionMessage(transactionData);
transactionMessage.setId(message.getId());
if (!peer.sendMessage(transactionMessage))
peer.disconnect("failed to send transaction");
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while send transaction %s to peer %s", Base58.encode(signature), peer), e);
}
}
public void onNetworkGetUnconfirmedTransactionsMessage(Peer peer, Message message) {
try (final Repository repository = RepositoryManager.getRepository()) {
List<byte[]> signatures = Collections.emptyList();
// If we're NOT up-to-date then don't send out unconfirmed transactions
// as it's possible they are already included in a later block that we don't have.
if (Controller.getInstance().isUpToDate())
signatures = repository.getTransactionRepository().getUnconfirmedTransactionSignatures();
Message transactionSignaturesMessage = new TransactionSignaturesMessage(signatures);
if (!peer.sendMessage(transactionSignaturesMessage))
peer.disconnect("failed to send unconfirmed transaction signatures");
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while sending unconfirmed transaction signatures to peer %s", peer), e);
}
}
public void onNetworkTransactionSignaturesMessage(Peer peer, Message message) {
TransactionSignaturesMessage transactionSignaturesMessage = (TransactionSignaturesMessage) message;
List<byte[]> signatures = transactionSignaturesMessage.getSignatures();
try (final Repository repository = RepositoryManager.getRepository()) {
for (byte[] signature : signatures) {
String signature58 = Base58.encode(signature);
if (invalidUnconfirmedTransactions.containsKey(signature58)) {
// Previously invalid transaction - don't keep requesting it
// It will be periodically removed from invalidUnconfirmedTransactions to allow for rechecks
continue;
}
// Ignore if this transaction is in the queue
if (incomingTransactionQueueContains(signature)) {
LOGGER.trace(() -> String.format("Ignoring existing queued transaction %s from peer %s", Base58.encode(signature), peer));
continue;
}
// Do we have it already? (Before requesting transaction data itself)
if (repository.getTransactionRepository().exists(signature)) {
LOGGER.trace(() -> String.format("Ignoring existing transaction %s from peer %s", Base58.encode(signature), peer));
continue;
}
// Check isInterrupted() here and exit fast
if (Thread.currentThread().isInterrupted())
return;
// Fetch actual transaction data from peer
Message getTransactionMessage = new GetTransactionMessage(signature);
if (!peer.sendMessage(getTransactionMessage)) {
peer.disconnect("failed to request transaction");
return;
}
}
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while processing unconfirmed transactions from peer %s", peer), e);
}
}
}
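The invalidUnconfirmedTransactions map above is a simple do-not-request cache: a base58 signature maps to an expiry timestamp, and cleanupInvalidTransactionsList() drops expired entries so the transaction can be fetched and revalidated later. A self-contained sketch of that pattern with hypothetical names, assuming only java.util; it is not part of the diff:
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical, minimal version of the do-not-request cache used by TransactionImporter.
class DoNotRequestCache {
    private final Map<String, Long> expiryBySignature58 = Collections.synchronizedMap(new HashMap<>());

    void markInvalid(String signature58, long now, long recheckInterval) {
        // Remember the signature until now + recheckInterval
        expiryBySignature58.put(signature58, now + recheckInterval);
    }

    boolean shouldSkip(String signature58) {
        return expiryBySignature58.containsKey(signature58);
    }

    void cleanup(long now) {
        // Expired entries are removed so the transaction can be re-requested and rechecked
        expiryBySignature58.entrySet().removeIf(e -> e.getValue() == null || e.getValue() < now);
    }
}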

View File

@@ -180,9 +180,6 @@ public class ArbitraryDataCleanupManager extends Thread {
arbitraryTransactionData.getName(), Base58.encode(signature)));
ArbitraryTransactionUtils.deleteCompleteFileAndChunks(arbitraryTransactionData);
// We should also remove peers for this transaction from the lookup table to save space
this.removePeersHostingTransactionData(repository, arbitraryTransactionData);
continue;
}
@@ -437,16 +434,6 @@ public class ArbitraryDataCleanupManager extends Thread {
return false;
}
private void removePeersHostingTransactionData(Repository repository, ArbitraryTransactionData transactionData) {
byte[] signature = transactionData.getSignature();
try {
repository.getArbitraryRepository().deleteArbitraryPeersWithSignature(signature);
repository.saveChanges();
} catch (DataException e) {
LOGGER.debug("Unable to delete peers from lookup table for signature: {}", Base58.encode(signature));
}
}
private void cleanupTempDirectory(String folder, long now, long minAge) {
String baseDir = Settings.getInstance().getTempDataPath();
Path tempDir = Paths.get(baseDir, folder);

View File

@@ -5,6 +5,8 @@ import org.apache.logging.log4j.Logger;
import org.qortal.arbitrary.ArbitraryDataFile;
import org.qortal.arbitrary.ArbitraryDataFileChunk;
import org.qortal.controller.Controller;
import org.qortal.data.arbitrary.ArbitraryDirectConnectionInfo;
import org.qortal.data.arbitrary.ArbitraryFileListResponseInfo;
import org.qortal.data.arbitrary.ArbitraryRelayInfo;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
@@ -23,6 +25,8 @@ import org.qortal.utils.Triple;
import java.util.*;
import static org.qortal.controller.arbitrary.ArbitraryDataFileManager.MAX_FILE_HASH_RESPONSES;
public class ArbitraryDataFileListManager {
private static final Logger LOGGER = LogManager.getLogger(ArbitraryDataFileListManager.class);
@@ -59,9 +63,9 @@ public class ArbitraryDataFileListManager {
/** Maximum time (in milliseconds) that a file list relay request is able to exist on the network */
private static long RELAY_REQUEST_MAX_DURATION = 5000L;
public static long RELAY_REQUEST_MAX_DURATION = 5000L;
/** Maximum number of hops that a file list relay request is allowed to make */
private static int RELAY_REQUEST_MAX_HOPS = 4;
public static int RELAY_REQUEST_MAX_HOPS = 4;
private ArbitraryDataFileListManager() {
@@ -264,7 +268,7 @@ public class ArbitraryDataFileListManager {
}
this.addToSignatureRequests(signature58, true, false);
List<Peer> handshakedPeers = Network.getInstance().getHandshakedPeers();
List<Peer> handshakedPeers = Network.getInstance().getImmutableHandshakedPeers();
List<byte[]> missingHashes = null;
// Find hashes that we are missing
@@ -279,8 +283,11 @@ public class ArbitraryDataFileListManager {
LOGGER.debug(String.format("Sending data file list request for signature %s with %d hashes to %d peers...", signature58, hashCount, handshakedPeers.size()));
// FUTURE: send our address as requestingPeer once enough peers have switched to the new protocol
String requestingPeer = null; // Network.getInstance().getOurExternalIpAddressAndPort();
// Build request
Message getArbitraryDataFileListMessage = new GetArbitraryDataFileListMessage(signature, missingHashes, now, 0);
Message getArbitraryDataFileListMessage = new GetArbitraryDataFileListMessage(signature, missingHashes, now, 0, requestingPeer);
// Save our request into requests map
Triple<String, Peer, Long> requestEntry = new Triple<>(signature58, null, NTP.getTime());
@@ -338,7 +345,7 @@ public class ArbitraryDataFileListManager {
// This could be optimized in the future
long timestamp = now - 60000L;
List<byte[]> hashes = null;
Message getArbitraryDataFileListMessage = new GetArbitraryDataFileListMessage(signature, hashes, timestamp, 0);
Message getArbitraryDataFileListMessage = new GetArbitraryDataFileListMessage(signature, hashes, timestamp, 0, null);
// Save our request into requests map
Triple<String, Peer, Long> requestEntry = new Triple<>(signature58, null, NTP.getTime());
@@ -431,7 +438,6 @@ public class ArbitraryDataFileListManager {
}
ArbitraryTransactionData arbitraryTransactionData = null;
ArbitraryDataFileManager arbitraryDataFileManager = ArbitraryDataFileManager.getInstance();
// Check transaction exists and hashes are correct
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -458,16 +464,28 @@ public class ArbitraryDataFileListManager {
// }
if (!isRelayRequest || !Settings.getInstance().isRelayModeEnabled()) {
// Keep track of the hashes this peer reports to have access to
Long now = NTP.getTime();
for (byte[] hash : hashes) {
String hash58 = Base58.encode(hash);
String sig58 = Base58.encode(signature);
ArbitraryDataFileManager.getInstance().arbitraryDataFileHashResponses.put(hash58, new Triple<>(peer, sig58, now));
if (ArbitraryDataFileManager.getInstance().arbitraryDataFileHashResponses.size() < MAX_FILE_HASH_RESPONSES) {
// Keep track of the hashes this peer reports to have access to
for (byte[] hash : hashes) {
String hash58 = Base58.encode(hash);
// Treat null request hops as 100, so that they are able to be sorted (and put to the end of the list)
int requestHops = arbitraryDataFileListMessage.getRequestHops() != null ? arbitraryDataFileListMessage.getRequestHops() : 100;
ArbitraryFileListResponseInfo responseInfo = new ArbitraryFileListResponseInfo(hash58, signature58,
peer, now, arbitraryDataFileListMessage.getRequestTime(), requestHops);
ArbitraryDataFileManager.getInstance().arbitraryDataFileHashResponses.add(responseInfo);
}
}
// Go and fetch the actual data, since this isn't a relay request
arbitraryDataFileManager.fetchArbitraryDataFiles(repository, peer, signature, arbitraryTransactionData, hashes);
// Keep track of the source peer, for direct connections
if (arbitraryDataFileListMessage.getPeerAddress() != null) {
ArbitraryDataFileManager.getInstance().addDirectConnectionInfoIfUnique(
new ArbitraryDirectConnectionInfo(signature, arbitraryDataFileListMessage.getPeerAddress(), hashes, now));
}
}
} catch (DataException e) {
@@ -523,21 +541,30 @@ public class ArbitraryDataFileListManager {
GetArbitraryDataFileListMessage getArbitraryDataFileListMessage = (GetArbitraryDataFileListMessage) message;
byte[] signature = getArbitraryDataFileListMessage.getSignature();
String signature58 = Base58.encode(signature);
List<byte[]> requestedHashes = getArbitraryDataFileListMessage.getHashes();
Long now = NTP.getTime();
Triple<String, Peer, Long> newEntry = new Triple<>(signature58, peer, now);
// If we've seen this request recently, then ignore
if (arbitraryDataFileListRequests.putIfAbsent(message.getId(), newEntry) != null) {
LOGGER.debug("Ignoring hash list request from peer {} for signature {}", peer, signature58);
LOGGER.trace("Ignoring hash list request from peer {} for signature {}", peer, signature58);
return;
}
LOGGER.debug("Received hash list request from peer {} for signature {}", peer, signature58);
List<byte[]> requestedHashes = getArbitraryDataFileListMessage.getHashes();
int hashCount = requestedHashes != null ? requestedHashes.size() : 0;
String requestingPeer = getArbitraryDataFileListMessage.getRequestingPeer();
if (requestingPeer != null) {
LOGGER.debug("Received hash list request with {} hashes from peer {} (requesting peer {}) for signature {}", hashCount, peer, requestingPeer, signature58);
}
else {
LOGGER.debug("Received hash list request with {} hashes from peer {} for signature {}", hashCount, peer, signature58);
}
List<byte[]> hashes = new ArrayList<>();
ArbitraryTransactionData transactionData = null;
boolean allChunksExist = false;
boolean hasMetadata = false;
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -562,6 +589,7 @@ public class ArbitraryDataFileListManager {
// Add the metadata file
if (arbitraryDataFile.getMetadataHash() != null) {
requestedHashes.add(arbitraryDataFile.getMetadataHash());
hasMetadata = true;
}
// Add the chunk hashes
@@ -594,6 +622,12 @@ public class ArbitraryDataFileListManager {
LOGGER.error(String.format("Repository issue while fetching arbitrary file list for peer %s", peer), e);
}
// If the only file we have is the metadata then we shouldn't respond. Most nodes will already have that,
// or can use the separate metadata protocol to fetch it. This should greatly reduce network spam.
if (hasMetadata && hashes.size() == 1) {
hashes.clear();
}
// We should only respond if we have at least one hash
if (hashes.size() > 0) {
@@ -604,7 +638,7 @@ public class ArbitraryDataFileListManager {
arbitraryDataFileListRequests.put(message.getId(), newEntry);
}
String ourAddress = Network.getInstance().getOurExternalIpAddress();
String ourAddress = Network.getInstance().getOurExternalIpAddressAndPort();
ArbitraryDataFileListMessage arbitraryDataFileListMessage = new ArbitraryDataFileListMessage(signature,
hashes, NTP.getTime(), 0, ourAddress, true);
arbitraryDataFileListMessage.setId(message.getId());
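The hasMetadata flag added above exists so that a node holding nothing but the metadata file stays quiet: the hash list is cleared when metadata is the only entry, and a reply is sent only when at least one hash remains. A short sketch of that decision with hypothetical names, assuming java.util.List; it is not part of the diff:
// Hypothetical helper mirroring the response decision in the handler above.
static boolean shouldSendFileList(List<byte[]> hashes, boolean hasMetadata) {
    if (hasMetadata && hashes.size() == 1)
        return false; // metadata alone isn't worth announcing; peers can fetch it separately
    return !hashes.isEmpty();
}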

View File

@@ -4,6 +4,8 @@ import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.arbitrary.ArbitraryDataFile;
import org.qortal.controller.Controller;
import org.qortal.data.arbitrary.ArbitraryDirectConnectionInfo;
import org.qortal.data.arbitrary.ArbitraryFileListResponseInfo;
import org.qortal.data.arbitrary.ArbitraryRelayInfo;
import org.qortal.data.network.ArbitraryPeerData;
import org.qortal.data.network.PeerData;
@@ -18,7 +20,6 @@ import org.qortal.settings.Settings;
import org.qortal.utils.ArbitraryTransactionUtils;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import org.qortal.utils.Triple;
import java.security.SecureRandom;
import java.util.*;
@@ -45,11 +46,17 @@ public class ArbitraryDataFileManager extends Thread {
public List<ArbitraryRelayInfo> arbitraryRelayMap = Collections.synchronizedList(new ArrayList<>());
/**
* Map to keep track of any arbitrary data file hash responses
* Key: string - the hash encoded in base58
* Value: Triple<respondingPeer, signature58, timeResponded>
* List to keep track of any arbitrary data file hash responses
*/
public Map<String, Triple<Peer, String, Long>> arbitraryDataFileHashResponses = Collections.synchronizedMap(new HashMap<>());
public final List<ArbitraryFileListResponseInfo> arbitraryDataFileHashResponses = Collections.synchronizedList(new ArrayList<>());
/**
* List to keep track of peers potentially available for direct connections, based on recent requests
*/
private List<ArbitraryDirectConnectionInfo> directConnectionInfo = Collections.synchronizedList(new ArrayList<>());
public static int MAX_FILE_HASH_RESPONSES = 1000;
private ArbitraryDataFileManager() {
@@ -98,7 +105,10 @@ public class ArbitraryDataFileManager extends Thread {
final long relayMinimumTimestamp = now - ArbitraryDataManager.getInstance().ARBITRARY_RELAY_TIMEOUT;
arbitraryRelayMap.removeIf(entry -> entry == null || entry.getTimestamp() == null || entry.getTimestamp() < relayMinimumTimestamp);
arbitraryDataFileHashResponses.entrySet().removeIf(entry -> entry.getValue().getC() == null || entry.getValue().getC() < relayMinimumTimestamp);
arbitraryDataFileHashResponses.removeIf(entry -> entry.getTimestamp() < relayMinimumTimestamp);
final long directConnectionInfoMinimumTimestamp = now - ArbitraryDataManager.getInstance().ARBITRARY_DIRECT_CONNECTION_INFO_TIMEOUT;
directConnectionInfo.removeIf(entry -> entry.getTimestamp() < directConnectionInfoMinimumTimestamp);
}
@@ -130,7 +140,7 @@ public class ArbitraryDataFileManager extends Thread {
Long startTime = NTP.getTime();
ArbitraryDataFileMessage receivedArbitraryDataFileMessage = fetchArbitraryDataFile(peer, null, signature, hash, null);
Long endTime = NTP.getTime();
if (receivedArbitraryDataFileMessage != null) {
if (receivedArbitraryDataFileMessage != null && receivedArbitraryDataFileMessage.getArbitraryDataFile() != null) {
LOGGER.debug("Received data file {} from peer {}. Time taken: {} ms", receivedArbitraryDataFileMessage.getArbitraryDataFile().getHash58(), peer, (endTime-startTime));
receivedAtLeastOneFile = true;
@@ -158,16 +168,6 @@ public class ArbitraryDataFileManager extends Thread {
}
if (receivedAtLeastOneFile) {
// Update our lookup table to indicate that this peer holds data for this signature
String peerAddress = peer.getPeerData().getAddress().toString();
ArbitraryPeerData arbitraryPeerData = new ArbitraryPeerData(signature, peer);
repository.discardChanges();
if (arbitraryPeerData.isPeerAddressValid()) {
LOGGER.debug("Adding arbitrary peer: {} for signature {}", peerAddress, Base58.encode(signature));
repository.getArbitraryRepository().save(arbitraryPeerData);
repository.saveChanges();
}
// Invalidate the hosted transactions cache as we are now hosting something new
ArbitraryDataStorageManager.getInstance().invalidateHostedTransactionsCache();
@@ -177,16 +177,7 @@ public class ArbitraryDataFileManager extends Thread {
// We have all the chunks for this transaction, so we should invalidate the transaction's name's
// data cache so that it is rebuilt the next time we serve it
ArbitraryDataManager.getInstance().invalidateCache(arbitraryTransactionData);
// We may also need to broadcast to the network that we are now hosting files for this transaction,
// but only if these files are in accordance with our storage policy
if (ArbitraryDataStorageManager.getInstance().canStoreData(arbitraryTransactionData)) {
// Use a null peer address to indicate our own
Message newArbitrarySignatureMessage = new ArbitrarySignaturesMessage(null, 0, Arrays.asList(signature));
Network.getInstance().broadcast(broadcastPeer -> newArbitrarySignatureMessage);
}
}
}
return receivedAtLeastOneFile;
@@ -296,89 +287,135 @@ public class ArbitraryDataFileManager extends Thread {
// Fetch data directly from peers
private List<ArbitraryDirectConnectionInfo> getDirectConnectionInfoForSignature(byte[] signature) {
synchronized (directConnectionInfo) {
return directConnectionInfo.stream().filter(i -> Arrays.equals(i.getSignature(), signature)).collect(Collectors.toList());
}
}
/**
* Add an ArbitraryDirectConnectionInfo item, but only if one with this peer-signature combination
* doesn't already exist.
* @param connectionInfo - the direct connection info to add
*/
public void addDirectConnectionInfoIfUnique(ArbitraryDirectConnectionInfo connectionInfo) {
boolean peerAlreadyExists;
synchronized (directConnectionInfo) {
peerAlreadyExists = directConnectionInfo.stream()
.anyMatch(i -> Arrays.equals(i.getSignature(), connectionInfo.getSignature())
&& Objects.equals(i.getPeerAddress(), connectionInfo.getPeerAddress()));
}
if (!peerAlreadyExists) {
directConnectionInfo.add(connectionInfo);
}
}
private void removeDirectConnectionInfo(ArbitraryDirectConnectionInfo connectionInfo) {
this.directConnectionInfo.remove(connectionInfo);
}
public boolean fetchDataFilesFromPeersForSignature(byte[] signature) {
String signature58 = Base58.encode(signature);
ArbitraryDataFileListManager.getInstance().addToSignatureRequests(signature58, false, true);
// Firstly fetch peers that claim to be hosting files for this signature
try (final Repository repository = RepositoryManager.getRepository()) {
boolean success = false;
List<ArbitraryPeerData> peers = repository.getArbitraryRepository().getArbitraryPeerDataForSignature(signature);
if (peers == null || peers.isEmpty()) {
LOGGER.debug("No peers found for signature {}", signature58);
return false;
}
LOGGER.debug("Attempting a direct peer connection for signature {}...", signature58);
// Peers found, so pick a random one and request data from it
int index = new SecureRandom().nextInt(peers.size());
ArbitraryPeerData arbitraryPeerData = peers.get(index);
String peerAddressString = arbitraryPeerData.getPeerAddress();
boolean success = Network.getInstance().requestDataFromPeer(peerAddressString, signature);
// Parse the peer address to find the host and port
String host = null;
int port = -1;
String[] parts = peerAddressString.split(":");
if (parts.length > 1) {
host = parts[0];
port = Integer.parseInt(parts[1]);
}
// If unsuccessful, and using a non-standard port, try a second connection with the default listen port,
// since almost all nodes use that. This is a workaround to account for any ephemeral ports that may
// have made it into the dataset.
if (!success) {
if (host != null && port > 0) {
int defaultPort = Settings.getInstance().getDefaultListenPort();
if (port != defaultPort) {
String newPeerAddressString = String.format("%s:%d", host, defaultPort);
success = Network.getInstance().requestDataFromPeer(newPeerAddressString, signature);
}
try {
while (!success) {
if (isStopping) {
return false;
}
}
Thread.sleep(500L);
// If _still_ unsuccessful, try matching the peer's IP address with some known peers, and then connect
// to each of those in turn until one succeeds.
if (!success) {
if (host != null) {
final String finalHost = host;
List<PeerData> knownPeers = Network.getInstance().getAllKnownPeers().stream()
.filter(knownPeerData -> knownPeerData.getAddress().getHost().equals(finalHost))
.collect(Collectors.toList());
// Loop through each match and attempt a connection
for (PeerData matchingPeer : knownPeers) {
String matchingPeerAddress = matchingPeer.getAddress().toString();
success = Network.getInstance().requestDataFromPeer(matchingPeerAddress, signature);
if (success) {
// Successfully connected, so stop making connections
break;
// Firstly fetch peers that claim to be hosting files for this signature
List<ArbitraryDirectConnectionInfo> connectionInfoList = getDirectConnectionInfoForSignature(signature);
if (connectionInfoList == null || connectionInfoList.isEmpty()) {
LOGGER.debug("No remaining direct connection peers found for signature {}", signature58);
return false;
}
LOGGER.debug("Attempting a direct peer connection for signature {}...", signature58);
// Peers found, so pick one with the highest number of chunks
Comparator<ArbitraryDirectConnectionInfo> highestChunkCountFirstComparator =
Comparator.comparingInt(ArbitraryDirectConnectionInfo::getHashCount).reversed();
ArbitraryDirectConnectionInfo directConnectionInfo = connectionInfoList.stream()
.sorted(highestChunkCountFirstComparator).findFirst().orElse(null);
if (directConnectionInfo == null) {
return false;
}
// Remove from the list so that a different peer is tried next time
removeDirectConnectionInfo(directConnectionInfo);
String peerAddressString = directConnectionInfo.getPeerAddress();
// Parse the peer address to find the host and port
String host = null;
int port = -1;
String[] parts = peerAddressString.split(":");
if (parts.length > 1) {
host = parts[0];
port = Integer.parseInt(parts[1]);
} else {
// Assume no port included
host = peerAddressString;
// Use default listen port
port = Settings.getInstance().getDefaultListenPort();
}
String peerAddressStringWithPort = String.format("%s:%d", host, port);
success = Network.getInstance().requestDataFromPeer(peerAddressStringWithPort, signature);
int defaultPort = Settings.getInstance().getDefaultListenPort();
// If unsuccessful, and using a non-standard port, try a second connection with the default listen port,
// since almost all nodes use that. This is a workaround to account for any ephemeral ports that may
// have made it into the dataset.
if (!success) {
if (host != null && port > 0) {
if (port != defaultPort) {
String newPeerAddressString = String.format("%s:%d", host, defaultPort);
success = Network.getInstance().requestDataFromPeer(newPeerAddressString, signature);
}
}
}
}
// Keep track of the success or failure
arbitraryPeerData.markAsAttempted();
if (success) {
arbitraryPeerData.markAsRetrieved();
arbitraryPeerData.incrementSuccesses();
}
else {
arbitraryPeerData.incrementFailures();
}
repository.discardChanges();
repository.getArbitraryRepository().save(arbitraryPeerData);
repository.saveChanges();
// If _still_ unsuccessful, try matching the peer's IP address with some known peers, and then connect
// to each of those in turn until one succeeds.
if (!success) {
if (host != null) {
final String finalHost = host;
List<PeerData> knownPeers = Network.getInstance().getAllKnownPeers().stream()
.filter(knownPeerData -> knownPeerData.getAddress().getHost().equals(finalHost))
.collect(Collectors.toList());
// Loop through each match and attempt a connection
for (PeerData matchingPeer : knownPeers) {
String matchingPeerAddress = matchingPeer.getAddress().toString();
int matchingPeerPort = matchingPeer.getAddress().getPort();
// Make sure that it's not a port we've already tried
if (matchingPeerPort != port && matchingPeerPort != defaultPort) {
success = Network.getInstance().requestDataFromPeer(matchingPeerAddress, signature);
if (success) {
// Successfully connected, so stop making connections
break;
}
}
}
}
}
return success;
if (success) {
// We were able to connect with a peer, so track the request
ArbitraryDataFileListManager.getInstance().addToSignatureRequests(signature58, false, true);
}
} catch (DataException e) {
LOGGER.debug("Unable to fetch peer list from repository");
}
} catch (InterruptedException e) {
// Do nothing
}
return false;
return success;
}
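fetchDataFilesFromPeersForSignature() above normalises the recorded peer address before connecting: a missing port falls back to the default listen port, and a later retry swaps in the default port when a non-standard one fails. A sketch of the normalisation step, with the default port passed in as a parameter rather than read from Settings; the helper name is an assumption and the snippet is not part of the diff:
// Hypothetical helper equivalent to the host/port handling above.
static String normalisePeerAddress(String peerAddress, int defaultListenPort) {
    String[] parts = peerAddress.split(":");
    String host = parts[0];
    // Assume no port was included when the split yields a single element
    int port = parts.length > 1 ? Integer.parseInt(parts[1]) : defaultListenPort;
    return String.format("%s:%d", host, port);
}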

View File

@@ -3,6 +3,7 @@ package org.qortal.controller.arbitrary;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.arbitrary.ArbitraryFileListResponseInfo;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.network.Peer;
import org.qortal.repository.DataException;
@@ -11,11 +12,9 @@ import org.qortal.repository.RepositoryManager;
import org.qortal.utils.ArbitraryTransactionUtils;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import org.qortal.utils.Triple;
import java.util.Arrays;
import java.util.Iterator;
import java.util.Map;
import java.util.*;
import java.util.stream.Collectors;
public class ArbitraryDataFileRequestThread implements Runnable {
@@ -51,45 +50,47 @@ public class ArbitraryDataFileRequestThread implements Runnable {
boolean shouldProcess = false;
synchronized (arbitraryDataFileManager.arbitraryDataFileHashResponses) {
Iterator iterator = arbitraryDataFileManager.arbitraryDataFileHashResponses.entrySet().iterator();
while (iterator.hasNext()) {
if (Controller.isStopping()) {
return;
}
if (!arbitraryDataFileManager.arbitraryDataFileHashResponses.isEmpty()) {
Map.Entry entry = (Map.Entry) iterator.next();
if (entry == null || entry.getKey() == null || entry.getValue() == null) {
// Sort by lowest number of node hops first
Comparator<ArbitraryFileListResponseInfo> lowestHopsFirstComparator =
Comparator.comparingInt(ArbitraryFileListResponseInfo::getRequestHops);
arbitraryDataFileManager.arbitraryDataFileHashResponses.sort(lowestHopsFirstComparator);
Iterator iterator = arbitraryDataFileManager.arbitraryDataFileHashResponses.iterator();
while (iterator.hasNext()) {
if (Controller.isStopping()) {
return;
}
ArbitraryFileListResponseInfo responseInfo = (ArbitraryFileListResponseInfo) iterator.next();
if (responseInfo == null) {
iterator.remove();
continue;
}
hash58 = responseInfo.getHash58();
peer = responseInfo.getPeer();
signature58 = responseInfo.getSignature58();
Long timestamp = responseInfo.getTimestamp();
if (now - timestamp >= ArbitraryDataManager.ARBITRARY_RELAY_TIMEOUT || signature58 == null || peer == null) {
// Ignore - to be deleted
iterator.remove();
continue;
}
// Skip if already requesting, but don't remove, as we might want to retry later
if (arbitraryDataFileManager.arbitraryDataFileRequests.containsKey(hash58)) {
// Already requesting - leave this attempt for later
continue;
}
// We want to process this file
shouldProcess = true;
iterator.remove();
continue;
break;
}
hash58 = (String) entry.getKey();
Triple<Peer, String, Long> value = (Triple<Peer, String, Long>) entry.getValue();
if (value == null) {
iterator.remove();
continue;
}
peer = value.getA();
signature58 = value.getB();
Long timestamp = value.getC();
if (now - timestamp >= ArbitraryDataManager.ARBITRARY_RELAY_TIMEOUT || signature58 == null || peer == null) {
// Ignore - to be deleted
iterator.remove();
continue;
}
// Skip if already requesting, but don't remove, as we might want to retry later
if (arbitraryDataFileManager.arbitraryDataFileRequests.containsKey(hash58)) {
// Already requesting - leave this attempt for later
continue;
}
// We want to process this file
shouldProcess = true;
iterator.remove();
break;
}
}
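The reworked loop above prefers responses that travelled the fewest relay hops: the list is sorted ascending by getRequestHops() before iteration, stale or already-requested entries are skipped, and the first usable entry is processed. A sketch of that selection over a plain list, assuming java.util.Comparator and only the accessor names shown in the diff; this is illustrative, not part of the change:
// Pick the closest usable response: lowest hop count first, skipping stale entries.
// Hypothetical standalone version of the selection performed above.
ArbitraryFileListResponseInfo pickResponse(List<ArbitraryFileListResponseInfo> responses, long now, long relayTimeout) {
    responses.sort(Comparator.comparingInt(ArbitraryFileListResponseInfo::getRequestHops));
    for (ArbitraryFileListResponseInfo info : responses) {
        if (info == null || info.getTimestamp() == null || now - info.getTimestamp() >= relayTimeout)
            continue; // stale or unusable
        return info;
    }
    return null;
}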

View File

@@ -1,23 +1,24 @@
package org.qortal.controller.arbitrary;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.*;
import java.util.stream.Collectors;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.api.resource.TransactionsResource.ConfirmationStatus;
import org.qortal.arbitrary.ArbitraryDataFile;
import org.qortal.arbitrary.ArbitraryDataResource;
import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
import org.qortal.arbitrary.misc.Service;
import org.qortal.controller.Controller;
import org.qortal.data.network.ArbitraryPeerData;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.list.ResourceListManager;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.network.message.*;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
@@ -43,9 +44,18 @@ public class ArbitraryDataManager extends Thread {
/** Maximum time to hold information about an in-progress relay */
public static final long ARBITRARY_RELAY_TIMEOUT = 60 * 1000L; // ms
/** Maximum time to hold direct peer connection information */
public static final long ARBITRARY_DIRECT_CONNECTION_INFO_TIMEOUT = 2 * 60 * 1000L; // ms
/** Maximum number of hops that an arbitrary signatures request is allowed to make */
private static int ARBITRARY_SIGNATURES_REQUEST_MAX_HOPS = 3;
private long lastMetadataFetchTime = 0L;
private static long METADATA_FETCH_INTERVAL = 5 * 60 * 1000L;
private long lastDataFetchTime = 0L;
private static long DATA_FETCH_INTERVAL = 1 * 60 * 1000L;
private static ArbitraryDataManager instance;
private final Object peerDataLock = new Object();
@@ -79,6 +89,9 @@ public class ArbitraryDataManager extends Thread {
public void run() {
Thread.currentThread().setName("Arbitrary Data Manager");
// Create data directory in case it doesn't exist yet
this.createDataDirectory();
try {
// Wait for node to finish starting up and making connections
Thread.sleep(2 * 60 * 1000L);
@@ -92,7 +105,13 @@ public class ArbitraryDataManager extends Thread {
continue;
}
List<Peer> peers = Network.getInstance().getHandshakedPeers();
Long now = NTP.getTime();
if (now == null) {
continue;
}
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
// Disregard peers that have "misbehaved" recently
peers.removeIf(Controller.hasMisbehaved);
@@ -102,6 +121,21 @@ public class ArbitraryDataManager extends Thread {
continue;
}
// Fetch metadata
if (NTP.getTime() - lastMetadataFetchTime >= METADATA_FETCH_INTERVAL) {
this.fetchAllMetadata();
lastMetadataFetchTime = NTP.getTime();
}
// Check if we need to fetch any data
if (NTP.getTime() - lastDataFetchTime < DATA_FETCH_INTERVAL) {
// Nothing to do yet
continue;
}
// In case the data directory has been deleted...
this.createDataDirectory();
// Fetch data according to storage policy
switch (Settings.getInstance().getStoragePolicy()) {
case FOLLOWED:
@@ -111,6 +145,7 @@ public class ArbitraryDataManager extends Thread {
case ALL:
this.processAll();
break;
case NONE:
case VIEWED:
@@ -119,6 +154,8 @@ public class ArbitraryDataManager extends Thread {
Thread.sleep(60000);
break;
}
lastDataFetchTime = NTP.getTime();
}
} catch (InterruptedException e) {
// Fall-through to exit thread...
@@ -130,7 +167,7 @@ public class ArbitraryDataManager extends Thread {
this.interrupt();
}
private void processNames() {
private void processNames() throws InterruptedException {
// Fetch latest list of followed names
List<String> followedNames = ResourceListManager.getInstance().getStringsInList("followedNames");
if (followedNames == null || followedNames.isEmpty()) {
@@ -143,11 +180,11 @@ public class ArbitraryDataManager extends Thread {
}
}
private void processAll() {
private void processAll() throws InterruptedException {
this.fetchAndProcessTransactions(null);
}
private void fetchAndProcessTransactions(String name) {
private void fetchAndProcessTransactions(String name) throws InterruptedException {
ArbitraryDataStorageManager storageManager = ArbitraryDataStorageManager.getInstance();
// Paginate queries when fetching arbitrary transactions
@@ -155,6 +192,7 @@ public class ArbitraryDataManager extends Thread {
int offset = 0;
while (!isStopping) {
Thread.sleep(1000L);
// Any arbitrary transactions we want to fetch data for?
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -169,6 +207,7 @@ public class ArbitraryDataManager extends Thread {
// Loop through signatures and remove ones we don't need to process
Iterator iterator = signatures.iterator();
while (iterator.hasNext()) {
Thread.sleep(25L); // Reduce CPU usage
byte[] signature = (byte[]) iterator.next();
ArbitraryTransaction arbitraryTransaction = fetchTransaction(repository, signature);
@@ -225,6 +264,85 @@ public class ArbitraryDataManager extends Thread {
}
}
private void fetchAllMetadata() throws InterruptedException {
ArbitraryDataStorageManager storageManager = ArbitraryDataStorageManager.getInstance();
// Paginate queries when fetching arbitrary transactions
final int limit = 100;
int offset = 0;
while (!isStopping) {
Thread.sleep(1000L);
// Any arbitrary transactions we want to fetch data for?
try (final Repository repository = RepositoryManager.getRepository()) {
List<byte[]> signatures = repository.getTransactionRepository().getSignaturesMatchingCriteria(null, null, null, ARBITRARY_TX_TYPE, null, null, null, ConfirmationStatus.BOTH, limit, offset, true);
// LOGGER.trace("Found {} arbitrary transactions at offset: {}, limit: {}", signatures.size(), offset, limit);
if (signatures == null || signatures.isEmpty()) {
offset = 0;
break;
}
offset += limit;
// Loop through signatures and remove ones we don't need to process
Iterator iterator = signatures.iterator();
while (iterator.hasNext()) {
Thread.sleep(25L); // Reduce CPU usage
byte[] signature = (byte[]) iterator.next();
ArbitraryTransaction arbitraryTransaction = fetchTransaction(repository, signature);
if (arbitraryTransaction == null) {
// Best not to process this one
iterator.remove();
continue;
}
ArbitraryTransactionData arbitraryTransactionData = (ArbitraryTransactionData) arbitraryTransaction.getTransactionData();
// Skip transactions that are blocked
if (storageManager.isBlocked(arbitraryTransactionData)) {
iterator.remove();
continue;
}
// Remove transactions that we already have local data for
if (hasLocalMetadata(arbitraryTransaction)) {
iterator.remove();
continue;
}
}
if (signatures.isEmpty()) {
continue;
}
// Pick one at random
final int index = new Random().nextInt(signatures.size());
byte[] signature = signatures.get(index);
if (signature == null) {
continue;
}
// Check to see if we have had a more recent PUT
ArbitraryTransactionData arbitraryTransactionData = ArbitraryTransactionUtils.fetchTransactionData(repository, signature);
boolean hasMoreRecentPutTransaction = ArbitraryTransactionUtils.hasMoreRecentPutTransaction(repository, arbitraryTransactionData);
if (hasMoreRecentPutTransaction) {
// There is a more recent PUT transaction than the one we are currently processing.
// When a PUT is issued, it replaces any layers that would have been there before.
// Therefore any data relating to this older transaction is no longer needed and we
// shouldn't fetch it from the network.
continue;
}
// Ask our connected peers if they have metadata for this signature
fetchMetadata(arbitraryTransactionData);
} catch (DataException e) {
LOGGER.error("Repository issue when fetching arbitrary transaction data", e);
}
}
}
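// Illustrative aside, not part of the diff: fetchAllMetadata() above is driven from run()
// by an interval gate, so metadata is requested at most once per METADATA_FETCH_INTERVAL.
// A sketch of that gate as a hypothetical helper, using the field names shown in the diff:
private void maybeFetchAllMetadata() throws InterruptedException {
    Long now = NTP.getTime();
    if (now == null)
        return;
    if (now - lastMetadataFetchTime >= METADATA_FETCH_INTERVAL) {
        this.fetchAllMetadata();
        lastMetadataFetchTime = now;
    }
}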
private ArbitraryTransaction fetchTransaction(final Repository repository, byte[] signature) {
try {
TransactionData transactionData = repository.getTransactionRepository().fromSignature(signature);
@@ -244,16 +362,48 @@ public class ArbitraryDataManager extends Thread {
} catch (DataException e) {
LOGGER.error("Repository issue when checking arbitrary transaction's data is local", e);
return true;
return true; // Assume true for now, to avoid network spam on error
}
}
private boolean hasLocalMetadata(ArbitraryTransaction arbitraryTransaction) {
try {
ArbitraryTransactionData arbitraryTransactionData = (ArbitraryTransactionData) arbitraryTransaction.getTransactionData();
byte[] signature = arbitraryTransactionData.getSignature();
byte[] metadataHash = arbitraryTransactionData.getMetadataHash();
if (metadataHash == null) {
// This transaction doesn't have metadata associated with it, so return true to indicate that we have everything
return true;
}
ArbitraryDataFile metadataFile = ArbitraryDataFile.fromHash(metadataHash, signature);
return metadataFile.exists();
} catch (DataException e) {
LOGGER.error("Repository issue when checking arbitrary transaction's metadata is local", e);
return true; // Assume true for now, to avoid network spam on error
}
}
// Entrypoint to request new data from peers
public boolean fetchData(ArbitraryTransactionData arbitraryTransactionData) {
return ArbitraryDataFileListManager.getInstance().fetchArbitraryDataFileList(arbitraryTransactionData);
}
// Entrypoint to request new metadata from peers
public ArbitraryDataTransactionMetadata fetchMetadata(ArbitraryTransactionData arbitraryTransactionData) {
ArbitraryDataResource resource = new ArbitraryDataResource(
arbitraryTransactionData.getName(),
ArbitraryDataFile.ResourceIdType.NAME,
arbitraryTransactionData.getService(),
arbitraryTransactionData.getIdentifier()
);
return ArbitraryMetadataManager.getInstance().fetchMetadata(resource, true);
}
// Useful methods used by other parts of the app
@@ -278,6 +428,9 @@ public class ArbitraryDataManager extends Thread {
// Cleanup file request caches
ArbitraryDataFileManager.getInstance().cleanupRequestCache(now);
// Clean up metadata request caches
ArbitraryMetadataManager.getInstance().cleanupRequestCache(now);
}
public boolean isResourceCached(ArbitraryDataResource resource) {
@@ -365,95 +518,19 @@ public class ArbitraryDataManager extends Thread {
}
}
// Broadcast list of hosted signatures
public void broadcastHostedSignatureList() {
try (final Repository repository = RepositoryManager.getRepository()) {
List<ArbitraryTransactionData> hostedTransactions = ArbitraryDataStorageManager.getInstance().listAllHostedTransactions(repository, null, null);
List<byte[]> hostedSignatures = hostedTransactions.stream().map(ArbitraryTransactionData::getSignature).collect(Collectors.toList());
if (!hostedSignatures.isEmpty()) {
// Broadcast the list, using null to represent our peer address
LOGGER.info("Broadcasting list of hosted signatures...");
Message arbitrarySignatureMessage = new ArbitrarySignaturesMessage(null, 0, hostedSignatures);
Network.getInstance().broadcast(broadcastPeer -> arbitrarySignatureMessage);
}
} catch (DataException e) {
LOGGER.error("Repository issue when fetching arbitrary transaction data for broadcast", e);
private boolean createDataDirectory() {
// Create the data directory if it doesn't exist
String dataPath = Settings.getInstance().getDataPath();
Path dataDirectory = Paths.get(dataPath);
try {
Files.createDirectories(dataDirectory);
} catch (IOException e) {
LOGGER.error("Unable to create data directory");
return false;
}
return true;
}
// Handle incoming arbitrary signatures messages
public void onNetworkArbitrarySignaturesMessage(Peer peer, Message message) {
// Don't process if QDN is disabled
if (!Settings.getInstance().isQdnEnabled()) {
return;
}
LOGGER.debug("Received arbitrary signature list from peer {}", peer);
ArbitrarySignaturesMessage arbitrarySignaturesMessage = (ArbitrarySignaturesMessage) message;
List<byte[]> signatures = arbitrarySignaturesMessage.getSignatures();
String peerAddress = peer.getPeerData().getAddress().toString();
if (arbitrarySignaturesMessage.getPeerAddress() != null && !arbitrarySignaturesMessage.getPeerAddress().isEmpty()) {
// This message is about a different peer than the one that sent it
peerAddress = arbitrarySignaturesMessage.getPeerAddress();
}
boolean containsNewEntry = false;
// Synchronize peer data lookups to make this process thread safe. Otherwise we could broadcast
// the same data multiple times, due to more than one thread processing the same message from different peers
synchronized (this.peerDataLock) {
try (final Repository repository = RepositoryManager.getRepository()) {
for (byte[] signature : signatures) {
// Check if a record already exists for this hash/host combination
// The port is not checked here - only the host/ip - in order to avoid duplicates
// from filling up the db due to dynamic/ephemeral ports
ArbitraryPeerData existingEntry = repository.getArbitraryRepository()
.getArbitraryPeerDataForSignatureAndHost(signature, peer.getPeerData().getAddress().getHost());
if (existingEntry == null) {
// We haven't got a record of this mapping yet, so add it
ArbitraryPeerData arbitraryPeerData = new ArbitraryPeerData(signature, peerAddress);
repository.discardChanges();
if (arbitraryPeerData.isPeerAddressValid()) {
LOGGER.debug("Adding arbitrary peer: {} for signature {}", peerAddress, Base58.encode(signature));
repository.getArbitraryRepository().save(arbitraryPeerData);
repository.saveChanges();
// Remember that this data is new, so that it can be rebroadcast later
containsNewEntry = true;
}
}
}
// If at least one signature in this batch was new to us, we should rebroadcast the message to the
// network in case some peers haven't received it yet
if (containsNewEntry) {
int requestHops = arbitrarySignaturesMessage.getRequestHops();
arbitrarySignaturesMessage.setRequestHops(++requestHops);
if (requestHops < ARBITRARY_SIGNATURES_REQUEST_MAX_HOPS) {
LOGGER.debug("Rebroadcasting arbitrary signature list for peer {}. requestHops: {}", peerAddress, requestHops);
Network.getInstance().broadcast(broadcastPeer -> broadcastPeer == peer ? null : arbitrarySignaturesMessage);
}
} else {
// Don't rebroadcast as otherwise we could get into a loop
}
// If anything needed saving, it would already have called saveChanges() above
repository.discardChanges();
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while processing arbitrary transaction signature list from peer %s", peer), e);
}
}
}
public int getPowDifficulty() {
return this.powDifficulty;
}

View File

@@ -47,6 +47,9 @@ public class ArbitraryDataStorageManager extends Thread {
private List<ArbitraryTransactionData> hostedTransactions;
private String searchQuery;
private List<ArbitraryTransactionData> searchResultsTransactions;
private static final long DIRECTORY_SIZE_CHECK_INTERVAL = 10 * 60 * 1000L; // 10 minutes
/** Treat storage as full at 90% usage, to reduce risk of going over the limit.
@@ -226,6 +229,16 @@ public class ArbitraryDataStorageManager extends Thread {
}
}
/**
* Check if data relating to a transaction is blocked by this node.
*
* @param arbitraryTransactionData - the transaction
* @return boolean - whether the resource is blocked or not
*/
public boolean isBlocked(ArbitraryTransactionData arbitraryTransactionData) {
return isNameBlocked(arbitraryTransactionData.getName());
}
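// Illustrative sketch (hypothetical caller): consult the blocklist before doing any network work, e.g.:
//
//   if (ArbitraryDataStorageManager.getInstance().isBlocked(arbitraryTransactionData)) {
//       return; // Skip blocked resources entirely
//   }
//
// isBlocked() simply delegates to isNameBlocked() using the transaction's registered name.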
private boolean isDataTypeAllowed(ArbitraryTransactionData arbitraryTransactionData) {
byte[] secret = arbitraryTransactionData.getSecret();
boolean hasSecret = (secret != null && secret.length == 32);
@@ -258,14 +271,8 @@ public class ArbitraryDataStorageManager extends Thread {
}
// Hosted data
public List<ArbitraryTransactionData> listAllHostedTransactions(Repository repository, Integer limit, Integer offset) {
// Load from cache if we can, to avoid disk reads
if (this.hostedTransactions != null) {
return ArbitraryTransactionUtils.limitOffsetTransactions(this.hostedTransactions, limit, offset);
}
public List<ArbitraryTransactionData> loadAllHostedTransactions(Repository repository) {
List<ArbitraryTransactionData> arbitraryTransactionDataList = new ArrayList<>();
// Find all hosted paths
@@ -286,7 +293,21 @@ public class ArbitraryDataStorageManager extends Thread {
if (transactionData == null || transactionData.getType() != Transaction.TransactionType.ARBITRARY) {
continue;
}
arbitraryTransactionDataList.add((ArbitraryTransactionData) transactionData);
ArbitraryTransactionData arbitraryTransactionData = (ArbitraryTransactionData) transactionData;
// Make sure to exclude metadata-only resources
if (arbitraryTransactionData.getMetadataHash() != null) {
if (contents.length == 1) {
String metadataHash58 = Base58.encode(arbitraryTransactionData.getMetadataHash());
if (Objects.equals(metadataHash58, contents[0])) {
// We only have the metadata file for this resource, not the actual data, so exclude it
continue;
}
}
}
// Found some data matching a transaction, so add it to the list
arbitraryTransactionDataList.add(arbitraryTransactionData);
} catch (DataException e) {
continue;
@@ -296,10 +317,69 @@ public class ArbitraryDataStorageManager extends Thread {
// Sort by newest first
arbitraryTransactionDataList.sort(Comparator.comparingLong(ArbitraryTransactionData::getTimestamp).reversed());
// Update cache
this.hostedTransactions = arbitraryTransactionDataList;
return arbitraryTransactionDataList;
}
// Hosted data
return ArbitraryTransactionUtils.limitOffsetTransactions(arbitraryTransactionDataList, limit, offset);
public List<ArbitraryTransactionData> listAllHostedTransactions(Repository repository, Integer limit, Integer offset) {
// Load from cache if we can, to avoid disk reads
if (this.hostedTransactions != null) {
return ArbitraryTransactionUtils.limitOffsetTransactions(this.hostedTransactions, limit, offset);
}
this.hostedTransactions = this.loadAllHostedTransactions(repository);
return ArbitraryTransactionUtils.limitOffsetTransactions(this.hostedTransactions, limit, offset);
}
/**
 * searchHostedTransactions
 * Runs a query against hosted data names and identifiers and returns any matches.
 * @param repository open repository to read transaction data from
 * @param query case-insensitive text to match against hosted names and identifiers
 * @param limit maximum number of results to return, or null for no limit
 * @param offset number of results to skip, or null
 * @return list of matching hosted ARBITRARY transactions, sorted newest first
 */
public List<ArbitraryTransactionData> searchHostedTransactions(Repository repository, String query, Integer limit, Integer offset) {
// Load from results cache if we can (results that exist for the same query), to avoid disk reads
if (this.searchResultsTransactions != null && this.searchQuery.equals(query.toLowerCase())) {
return ArbitraryTransactionUtils.limitOffsetTransactions(this.searchResultsTransactions, limit, offset);
}
// Using cache if we can, to avoid disk reads
if (this.hostedTransactions == null) {
this.hostedTransactions = this.loadAllHostedTransactions(repository);
}
this.searchQuery = query.toLowerCase(); // Set the searchQuery so that it can be checked on the next call
List<ArbitraryTransactionData> searchResultsList = new ArrayList<>();
// Loop through cached hostedTransactions
for (ArbitraryTransactionData atd : this.hostedTransactions) {
try {
if (atd.getName() != null && atd.getName().toLowerCase().contains(this.searchQuery)) {
searchResultsList.add(atd);
}
else if (atd.getIdentifier() != null && atd.getIdentifier().toLowerCase().contains(this.searchQuery)) {
searchResultsList.add(atd);
}
} catch (Exception e) {
continue;
}
}
// Sort by newest first
searchResultsList.sort(Comparator.comparingLong(ArbitraryTransactionData::getTimestamp).reversed());
// Update cache
this.searchResultsTransactions = searchResultsList;
return ArbitraryTransactionUtils.limitOffsetTransactions(this.searchResultsTransactions, limit, offset);
}
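// Illustrative sketch, assuming the caller already holds an open Repository:
//
//   List<ArbitraryTransactionData> matches = ArbitraryDataStorageManager.getInstance()
//           .searchHostedTransactions(repository, "MyWebsite", 20, 0);
//
// The query is matched case-insensitively against both name and identifier, results are sorted
// newest first, and a repeated call with the same query is served from the results cache.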
/**

View File

@@ -0,0 +1,452 @@
package org.qortal.controller.arbitrary;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.arbitrary.ArbitraryDataFile;
import org.qortal.arbitrary.ArbitraryDataResource;
import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
import org.qortal.controller.Controller;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.network.message.*;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import org.qortal.utils.Triple;
import java.io.IOException;
import java.util.*;
import static org.qortal.controller.arbitrary.ArbitraryDataFileListManager.RELAY_REQUEST_MAX_DURATION;
import static org.qortal.controller.arbitrary.ArbitraryDataFileListManager.RELAY_REQUEST_MAX_HOPS;
public class ArbitraryMetadataManager {
private static final Logger LOGGER = LogManager.getLogger(ArbitraryMetadataManager.class);
private static ArbitraryMetadataManager instance;
/**
* Map of recent incoming requests for ARBITRARY transaction metadata.
* <p>
* Key is original request's message ID<br>
* Value is Triple&lt;transaction signature in base58, first requesting peer, first request's timestamp&gt;
* <p>
* If peer is null then either:<br>
* <ul>
* <li>we are the original requesting peer</li>
* <li>we have already sent data payload to original requesting peer.</li>
* </ul>
* If signature is null then we have already received the metadata and either:<br>
* <ul>
* <li>we are the original requesting peer and have processed it</li>
* <li>we have forwarded the metadata</li>
* </ul>
*/
public Map<Integer, Triple<String, Peer, Long>> arbitraryMetadataRequests = Collections.synchronizedMap(new HashMap<>());
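// Illustrative lifecycle of one entry (sketch): when we broadcast our own request we store
// Triple<signature58, null, timestamp>; when we relay a request on behalf of another peer we store
// Triple<signature58, requestingPeer, timestamp>; once the metadata has been received or forwarded,
// the entry is overwritten with Triple<null, null, timestamp> so that cleanupRequestCache() can
// expire it purely by timestamp.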
/**
* Map to keep track of in progress arbitrary metadata requests
* Key: string - the signature encoded in base58
* Value: Triple<networkBroadcastCount, directPeerRequestCount, lastAttemptTimestamp>
*/
private Map<String, Triple<Integer, Integer, Long>> arbitraryMetadataSignatureRequests = Collections.synchronizedMap(new HashMap<>());
private ArbitraryMetadataManager() {
}
public static ArbitraryMetadataManager getInstance() {
if (instance == null)
instance = new ArbitraryMetadataManager();
return instance;
}
public void cleanupRequestCache(Long now) {
if (now == null) {
return;
}
final long requestMinimumTimestamp = now - ArbitraryDataManager.ARBITRARY_REQUEST_TIMEOUT;
arbitraryMetadataRequests.entrySet().removeIf(entry -> entry.getValue().getC() == null || entry.getValue().getC() < requestMinimumTimestamp);
}
public ArbitraryDataTransactionMetadata fetchMetadata(ArbitraryDataResource arbitraryDataResource, boolean useRateLimiter) {
try (final Repository repository = RepositoryManager.getRepository()) {
// Find latest transaction
ArbitraryTransactionData latestTransaction = repository.getArbitraryRepository()
.getLatestTransaction(arbitraryDataResource.getResourceId(), arbitraryDataResource.getService(),
null, arbitraryDataResource.getIdentifier());
if (latestTransaction != null) {
byte[] signature = latestTransaction.getSignature();
byte[] metadataHash = latestTransaction.getMetadataHash();
if (metadataHash == null) {
// This resource doesn't have metadata
throw new IllegalArgumentException("This resource doesn't have metadata");
}
ArbitraryDataFile metadataFile = ArbitraryDataFile.fromHash(metadataHash, signature);
if (!metadataFile.exists()) {
// Request from network
this.fetchArbitraryMetadata(latestTransaction, useRateLimiter);
}
// Now check again as it may have been downloaded above
if (metadataFile.exists()) {
// Use local copy
ArbitraryDataTransactionMetadata transactionMetadata = new ArbitraryDataTransactionMetadata(metadataFile.getFilePath());
transactionMetadata.read();
return transactionMetadata;
}
}
} catch (DataException | IOException e) {
LOGGER.error("Repository issue when fetching arbitrary transaction metadata", e);
}
return null;
}
// Request metadata from network
public byte[] fetchArbitraryMetadata(ArbitraryTransactionData arbitraryTransactionData, boolean useRateLimiter) {
byte[] metadataHash = arbitraryTransactionData.getMetadataHash();
if (metadataHash == null) {
return null;
}
byte[] signature = arbitraryTransactionData.getSignature();
String signature58 = Base58.encode(signature);
// Require an NTP sync
Long now = NTP.getTime();
if (now == null) {
return null;
}
// If we've already tried too many times in a short space of time, make sure to give up
if (useRateLimiter && !this.shouldMakeMetadataRequestForSignature(signature58)) {
LOGGER.trace("Skipping metadata request for signature {} due to rate limit", signature58);
return null;
}
this.addToSignatureRequests(signature58, true, false);
List<Peer> handshakedPeers = Network.getInstance().getImmutableHandshakedPeers();
LOGGER.debug(String.format("Sending metadata request for signature %s to %d peers...", signature58, handshakedPeers.size()));
// Build request
Message getArbitraryMetadataMessage = new GetArbitraryMetadataMessage(signature, now, 0);
// Save our request into requests map
Triple<String, Peer, Long> requestEntry = new Triple<>(signature58, null, NTP.getTime());
// Assign random ID to this message
int id;
do {
id = new Random().nextInt(Integer.MAX_VALUE - 1) + 1;
// Put queue into map (keyed by message ID) so we can poll for a response
// If putIfAbsent() doesn't return null, then this ID is already taken
} while (arbitraryMetadataRequests.putIfAbsent(id, requestEntry) != null);
getArbitraryMetadataMessage.setId(id);
// Broadcast request
Network.getInstance().broadcast(peer -> getArbitraryMetadataMessage);
// Poll to see if data has arrived
final long singleWait = 100;
long totalWait = 0;
while (totalWait < ArbitraryDataManager.ARBITRARY_REQUEST_TIMEOUT) {
try {
Thread.sleep(singleWait);
} catch (InterruptedException e) {
break;
}
requestEntry = arbitraryMetadataRequests.get(id);
if (requestEntry == null)
return null;
if (requestEntry.getA() == null)
break;
totalWait += singleWait;
}
try {
ArbitraryDataFile metadataFile = ArbitraryDataFile.fromHash(metadataHash, signature);
if (metadataFile.exists()) {
return metadataFile.getBytes();
}
} catch (DataException e) {
// Do nothing
}
return null;
}
// Track metadata lookups by signature
private boolean shouldMakeMetadataRequestForSignature(String signature58) {
Triple<Integer, Integer, Long> request = arbitraryMetadataSignatureRequests.get(signature58);
if (request == null) {
// Not attempted yet
return true;
}
// Extract the components
Integer networkBroadcastCount = request.getA();
// Integer directPeerRequestCount = request.getB();
Long lastAttemptTimestamp = request.getC();
if (lastAttemptTimestamp == null) {
// Not attempted yet
return true;
}
long timeSinceLastAttempt = NTP.getTime() - lastAttemptTimestamp;
// Allow a second attempt after 60 seconds
if (timeSinceLastAttempt > 60 * 1000L) {
// We haven't tried for at least 60 seconds
if (networkBroadcastCount < 2) {
// We've made less than 2 total attempts
return true;
}
}
// Then allow another attempt after 60 minutes
if (timeSinceLastAttempt > 60 * 60 * 1000L) {
// We haven't tried for at least 60 minutes
if (networkBroadcastCount < 3) {
// We've made less than 3 total attempts
return true;
}
}
return false;
}
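// Worked example (sketch): with lastAttemptTimestamp = T, another broadcast is permitted once more
// than 60 seconds have passed since T while networkBroadcastCount is still below 2, or once more
// than 60 minutes have passed while networkBroadcastCount is still below 3; otherwise the signature
// stays rate-limited until removeFromSignatureRequests() is called for it.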
public boolean isSignatureRateLimited(byte[] signature) {
String signature58 = Base58.encode(signature);
return !this.shouldMakeMetadataRequestForSignature(signature58);
}
public long lastRequestForSignature(byte[] signature) {
String signature58 = Base58.encode(signature);
Triple<Integer, Integer, Long> request = arbitraryMetadataSignatureRequests.get(signature58);
if (request == null) {
// Not attempted yet
return 0;
}
// Extract the components
Long lastAttemptTimestamp = request.getC();
if (lastAttemptTimestamp != null) {
return lastAttemptTimestamp;
}
return 0;
}
public void addToSignatureRequests(String signature58, boolean incrementNetworkRequests, boolean incrementPeerRequests) {
Triple<Integer, Integer, Long> request = arbitraryMetadataSignatureRequests.get(signature58);
Long now = NTP.getTime();
if (request == null) {
// No entry yet
Triple<Integer, Integer, Long> newRequest = new Triple<>(0, 0, now);
arbitraryMetadataSignatureRequests.put(signature58, newRequest);
}
else {
// There is an existing entry
if (incrementNetworkRequests) {
request.setA(request.getA() + 1);
}
if (incrementPeerRequests) {
request.setB(request.getB() + 1);
}
request.setC(now);
arbitraryMetadataSignatureRequests.put(signature58, request);
}
}
public void removeFromSignatureRequests(String signature58) {
arbitraryMetadataSignatureRequests.remove(signature58);
}
// Network handlers
public void onNetworkArbitraryMetadataMessage(Peer peer, Message message) {
// Don't process if QDN is disabled
if (!Settings.getInstance().isQdnEnabled()) {
return;
}
ArbitraryMetadataMessage arbitraryMetadataMessage = (ArbitraryMetadataMessage) message;
LOGGER.debug("Received metadata from peer {}", peer);
// Do we have a pending request for this data?
Triple<String, Peer, Long> request = arbitraryMetadataRequests.get(message.getId());
if (request == null || request.getA() == null) {
return;
}
boolean isRelayRequest = (request.getB() != null);
// Does this message's signature match what we're expecting?
byte[] signature = arbitraryMetadataMessage.getSignature();
String signature58 = Base58.encode(signature);
if (!request.getA().equals(signature58)) {
return;
}
// Update requests map to reflect that we've received all chunks
Triple<String, Peer, Long> newEntry = new Triple<>(null, null, request.getC());
arbitraryMetadataRequests.put(message.getId(), newEntry);
ArbitraryTransactionData arbitraryTransactionData = null;
// Forwarding
if (isRelayRequest && Settings.getInstance().isRelayModeEnabled()) {
// Get transaction info
try (final Repository repository = RepositoryManager.getRepository()) {
TransactionData transactionData = repository.getTransactionRepository().fromSignature(signature);
if (!(transactionData instanceof ArbitraryTransactionData))
return;
arbitraryTransactionData = (ArbitraryTransactionData) transactionData;
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while finding arbitrary transaction metadata for peer %s", peer), e);
}
// Check if the name is blocked
boolean isBlocked = (arbitraryTransactionData == null || ArbitraryDataStorageManager.getInstance().isNameBlocked(arbitraryTransactionData.getName()));
if (!isBlocked) {
Peer requestingPeer = request.getB();
if (requestingPeer != null) {
// Forward to requesting peer
LOGGER.debug("Forwarding metadata to requesting peer: {}", requestingPeer);
if (!requestingPeer.sendMessage(arbitraryMetadataMessage)) {
requestingPeer.disconnect("failed to forward arbitrary metadata");
}
}
}
}
}
public void onNetworkGetArbitraryMetadataMessage(Peer peer, Message message) {
// Don't respond if QDN is disabled
if (!Settings.getInstance().isQdnEnabled()) {
return;
}
Controller.getInstance().stats.getArbitraryMetadataMessageStats.requests.incrementAndGet();
GetArbitraryMetadataMessage getArbitraryMetadataMessage = (GetArbitraryMetadataMessage) message;
byte[] signature = getArbitraryMetadataMessage.getSignature();
String signature58 = Base58.encode(signature);
Long now = NTP.getTime();
Triple<String, Peer, Long> newEntry = new Triple<>(signature58, peer, now);
// If we've seen this request recently, then ignore
if (arbitraryMetadataRequests.putIfAbsent(message.getId(), newEntry) != null) {
LOGGER.debug("Ignoring metadata request from peer {} for signature {}", peer, signature58);
return;
}
LOGGER.debug("Received metadata request from peer {} for signature {}", peer, signature58);
ArbitraryTransactionData transactionData = null;
ArbitraryDataFile metadataFile = null;
try (final Repository repository = RepositoryManager.getRepository()) {
// Firstly we need to lookup this file on chain to get its metadata hash
transactionData = (ArbitraryTransactionData)repository.getTransactionRepository().fromSignature(signature);
if (transactionData instanceof ArbitraryTransactionData) {
// Check if we're even allowed to serve metadata for this transaction
if (ArbitraryDataStorageManager.getInstance().canStoreData(transactionData)) {
byte[] metadataHash = transactionData.getMetadataHash();
if (metadataHash != null) {
// Load metadata file
metadataFile = ArbitraryDataFile.fromHash(metadataHash, signature);
}
}
}
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while fetching arbitrary metadata for peer %s", peer), e);
}
// We should only respond if we have the metadata file
if (metadataFile != null && metadataFile.exists()) {
// We have the metadata file, so update requests map to reflect that we've sent it
newEntry = new Triple<>(null, null, now);
arbitraryMetadataRequests.put(message.getId(), newEntry);
ArbitraryMetadataMessage arbitraryMetadataMessage = new ArbitraryMetadataMessage(signature, metadataFile);
arbitraryMetadataMessage.setId(message.getId());
if (!peer.sendMessage(arbitraryMetadataMessage)) {
LOGGER.debug("Couldn't send metadata");
peer.disconnect("failed to send metadata");
return;
}
LOGGER.debug("Sent metadata");
// Nothing left to do, so return to prevent any unnecessary forwarding from occurring
LOGGER.debug("No need for any forwarding because metadata request is fully served");
return;
}
// We may need to forward this request on
boolean isBlocked = (transactionData == null || ArbitraryDataStorageManager.getInstance().isNameBlocked(transactionData.getName()));
if (Settings.getInstance().isRelayModeEnabled() && !isBlocked) {
// In relay mode - so ask our other peers if they have it
long requestTime = getArbitraryMetadataMessage.getRequestTime();
int requestHops = getArbitraryMetadataMessage.getRequestHops();
getArbitraryMetadataMessage.setRequestHops(++requestHops);
long totalRequestTime = now - requestTime;
if (totalRequestTime < RELAY_REQUEST_MAX_DURATION) {
// Relay request hasn't timed out yet, so can potentially be rebroadcast
if (requestHops < RELAY_REQUEST_MAX_HOPS) {
// Relay request hasn't reached the maximum number of hops yet, so can be rebroadcast
LOGGER.debug("Rebroadcasting metadata request from peer {} for signature {} to our other peers... totalRequestTime: {}, requestHops: {}", peer, Base58.encode(signature), totalRequestTime, requestHops);
Network.getInstance().broadcast(
broadcastPeer -> broadcastPeer == peer ||
Objects.equals(broadcastPeer.getPeerData().getAddress().getHost(), peer.getPeerData().getAddress().getHost())
? null : getArbitraryMetadataMessage);
}
else {
// This relay request has reached the maximum number of allowed hops
}
}
else {
// This relay request has timed out
}
}
}
}

View File

@@ -29,6 +29,15 @@ public class NamesDatabaseIntegrityCheck {
private List<TransactionData> nameTransactions = new ArrayList<>();
public int rebuildName(String name, Repository repository) {
return this.rebuildName(name, repository, null);
}
public int rebuildName(String name, Repository repository, List<String> referenceNames) {
// "referenceNames" tracks the linked names that have already been rebuilt, to prevent circular dependencies
if (referenceNames == null) {
referenceNames = new ArrayList<>();
}
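// Example of the circular case this guards against (sketch): name "A" is renamed to "B" and later
// renamed back to "A". Rebuilding "A" triggers a rebuild of "B", which would trigger "A" again;
// tracking already-visited names in referenceNames stops that recursion.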
int modificationCount = 0;
try {
List<TransactionData> transactions = this.fetchAllTransactionsInvolvingName(name, repository);
@@ -56,7 +65,14 @@ public class NamesDatabaseIntegrityCheck {
if (Objects.equals(updateNameTransactionData.getNewName(), name) &&
!Objects.equals(updateNameTransactionData.getName(), updateNameTransactionData.getNewName())) {
// This renames an existing name, so we need to process that instead
this.rebuildName(updateNameTransactionData.getName(), repository);
if (!referenceNames.contains(name)) {
referenceNames.add(name);
this.rebuildName(updateNameTransactionData.getName(), repository, referenceNames);
}
else {
// We've already processed this name so there's nothing more to do
}
}
else {
Name nameObj = new Name(repository, name);
@@ -193,7 +209,12 @@ public class NamesDatabaseIntegrityCheck {
newName = registeredName;
}
NameData newNameData = repository.getNameRepository().fromName(newName);
if (!Objects.equals(creator.getAddress(), newNameData.getOwner())) {
if (newNameData == null) {
LOGGER.info("Error: registered name {} has no new name data. This is likely due to account {} " +
"being renamed another time, which is a scenario that is not yet checked automatically.",
updateNameTransactionData.getNewName(), creator.getAddress());
}
else if (!Objects.equals(creator.getAddress(), newNameData.getOwner())) {
LOGGER.info("Error: registered name {} is owned by {}, but it should be {}",
updateNameTransactionData.getNewName(), newNameData.getOwner(), creator.getAddress());
integrityCheckFailed = true;
@@ -313,6 +334,10 @@ public class NamesDatabaseIntegrityCheck {
transactions.add(transactionData);
}
}
// Sort by lowest timestamp first
transactions.sort(Comparator.comparingLong(TransactionData::getTimestamp));
return transactions;
}

View File

@@ -0,0 +1,885 @@
package org.qortal.controller.tradebot;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.bitcoinj.core.*;
import org.bitcoinj.script.Script.ScriptType;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.account.PublicKeyAccount;
import org.qortal.api.model.crosschain.TradeBotCreateRequest;
import org.qortal.asset.Asset;
import org.qortal.crosschain.*;
import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATData;
import org.qortal.data.crosschain.CrossChainTradeData;
import org.qortal.data.crosschain.TradeBotData;
import org.qortal.data.transaction.BaseTransactionData;
import org.qortal.data.transaction.DeployAtTransactionData;
import org.qortal.data.transaction.MessageTransactionData;
import org.qortal.group.Group;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.transaction.DeployAtTransaction;
import org.qortal.transaction.MessageTransaction;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.transform.TransformationException;
import org.qortal.transform.transaction.DeployAtTransactionTransformer;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import static java.util.Arrays.stream;
import static java.util.stream.Collectors.toMap;
/**
* Performing cross-chain trading steps on behalf of user.
* <p>
* We deal with three different independent state-spaces here:
* <ul>
* <li>Qortal blockchain</li>
* <li>Foreign blockchain</li>
* <li>Trade-bot entries</li>
* </ul>
*/
public class RavencoinACCTv3TradeBot implements AcctTradeBot {
private static final Logger LOGGER = LogManager.getLogger(RavencoinACCTv3TradeBot.class);
public enum State implements TradeBot.StateNameAndValueSupplier {
BOB_WAITING_FOR_AT_CONFIRM(10, false, false),
BOB_WAITING_FOR_MESSAGE(15, true, true),
BOB_WAITING_FOR_AT_REDEEM(25, true, true),
BOB_DONE(30, false, false),
BOB_REFUNDED(35, false, false),
ALICE_WAITING_FOR_AT_LOCK(85, true, true),
ALICE_DONE(95, false, false),
ALICE_REFUNDING_A(105, true, true),
ALICE_REFUNDED(110, false, false);
private static final Map<Integer, State> map = stream(State.values()).collect(toMap(state -> state.value, state -> state));
public final int value;
public final boolean requiresAtData;
public final boolean requiresTradeData;
State(int value, boolean requiresAtData, boolean requiresTradeData) {
this.value = value;
this.requiresAtData = requiresAtData;
this.requiresTradeData = requiresTradeData;
}
public static State valueOf(int value) {
return map.get(value);
}
@Override
public String getState() {
return this.name();
}
@Override
public int getStateValue() {
return this.value;
}
}
/** Maximum time Bob waits for his AT creation transaction to be confirmed into a block. (milliseconds) */
private static final long MAX_AT_CONFIRMATION_PERIOD = 24 * 60 * 60 * 1000L; // ms
private static RavencoinACCTv3TradeBot instance;
private final List<String> endStates = Arrays.asList(State.BOB_DONE, State.BOB_REFUNDED, State.ALICE_DONE, State.ALICE_REFUNDING_A, State.ALICE_REFUNDED).stream()
.map(State::name)
.collect(Collectors.toUnmodifiableList());
private RavencoinACCTv3TradeBot() {
}
public static synchronized RavencoinACCTv3TradeBot getInstance() {
if (instance == null)
instance = new RavencoinACCTv3TradeBot();
return instance;
}
@Override
public List<String> getEndStates() {
return this.endStates;
}
/**
* Creates a new trade-bot entry from the "Bob" viewpoint, i.e. OFFERing QORT in exchange for RVN.
* <p>
* Generates:
* <ul>
* <li>new 'trade' private key</li>
* </ul>
* Derives:
* <ul>
* <li>'native' (as in Qortal) public key, public key hash, address (starting with Q)</li>
* <li>'foreign' (as in Ravencoin) public key, public key hash</li>
* </ul>
* A Qortal AT is then constructed including the following as constants in the 'data segment':
* <ul>
* <li>'native'/Qortal 'trade' address - used as a MESSAGE contact</li>
* <li>'foreign'/Ravencoin public key hash - used by Alice's P2SH scripts to allow redeem</li>
* <li>QORT amount on offer by Bob</li>
* <li>RVN amount expected in return by Bob (from Alice)</li>
* <li>trading timeout, in case things go wrong and everyone needs to refund</li>
* </ul>
* Returns a DEPLOY_AT transaction that needs to be signed and broadcast to the Qortal network.
* <p>
* Trade-bot will wait for Bob's AT to be deployed before taking next step.
* <p>
* @param repository
* @param tradeBotCreateRequest
* @return raw, unsigned DEPLOY_AT transaction
* @throws DataException
*/
public byte[] createTrade(Repository repository, TradeBotCreateRequest tradeBotCreateRequest) throws DataException {
byte[] tradePrivateKey = TradeBot.generateTradePrivateKey();
byte[] tradeNativePublicKey = TradeBot.deriveTradeNativePublicKey(tradePrivateKey);
byte[] tradeNativePublicKeyHash = Crypto.hash160(tradeNativePublicKey);
String tradeNativeAddress = Crypto.toAddress(tradeNativePublicKey);
byte[] tradeForeignPublicKey = TradeBot.deriveTradeForeignPublicKey(tradePrivateKey);
byte[] tradeForeignPublicKeyHash = Crypto.hash160(tradeForeignPublicKey);
// Convert Ravencoin receiving address into public key hash (we only support P2PKH at this time)
Address ravencoinReceivingAddress;
try {
ravencoinReceivingAddress = Address.fromString(Ravencoin.getInstance().getNetworkParameters(), tradeBotCreateRequest.receivingAddress);
} catch (AddressFormatException e) {
throw new DataException("Unsupported Ravencoin receiving address: " + tradeBotCreateRequest.receivingAddress);
}
if (ravencoinReceivingAddress.getOutputScriptType() != ScriptType.P2PKH)
throw new DataException("Unsupported Ravencoin receiving address: " + tradeBotCreateRequest.receivingAddress);
byte[] ravencoinReceivingAccountInfo = ravencoinReceivingAddress.getHash();
PublicKeyAccount creator = new PublicKeyAccount(repository, tradeBotCreateRequest.creatorPublicKey);
// Deploy AT
long timestamp = NTP.getTime();
byte[] reference = creator.getLastReference();
long fee = 0L;
byte[] signature = null;
BaseTransactionData baseTransactionData = new BaseTransactionData(timestamp, Group.NO_GROUP, reference, creator.getPublicKey(), fee, signature);
String name = "QORT/RVN ACCT";
String description = "QORT/RVN cross-chain trade";
String aTType = "ACCT";
String tags = "ACCT QORT RVN";
byte[] creationBytes = RavencoinACCTv3.buildQortalAT(tradeNativeAddress, tradeForeignPublicKeyHash, tradeBotCreateRequest.qortAmount,
tradeBotCreateRequest.foreignAmount, tradeBotCreateRequest.tradeTimeout);
long amount = tradeBotCreateRequest.fundingQortAmount;
DeployAtTransactionData deployAtTransactionData = new DeployAtTransactionData(baseTransactionData, name, description, aTType, tags, creationBytes, amount, Asset.QORT);
DeployAtTransaction deployAtTransaction = new DeployAtTransaction(repository, deployAtTransactionData);
fee = deployAtTransaction.calcRecommendedFee();
deployAtTransactionData.setFee(fee);
DeployAtTransaction.ensureATAddress(deployAtTransactionData);
String atAddress = deployAtTransactionData.getAtAddress();
TradeBotData tradeBotData = new TradeBotData(tradePrivateKey, RavencoinACCTv3.NAME,
State.BOB_WAITING_FOR_AT_CONFIRM.name(), State.BOB_WAITING_FOR_AT_CONFIRM.value,
creator.getAddress(), atAddress, timestamp, tradeBotCreateRequest.qortAmount,
tradeNativePublicKey, tradeNativePublicKeyHash, tradeNativeAddress,
null, null,
SupportedBlockchain.RAVENCOIN.name(),
tradeForeignPublicKey, tradeForeignPublicKeyHash,
tradeBotCreateRequest.foreignAmount, null, null, null, ravencoinReceivingAccountInfo);
TradeBot.updateTradeBotState(repository, tradeBotData, () -> String.format("Built AT %s. Waiting for deployment", atAddress));
// Attempt to backup the trade bot data
TradeBot.backupTradeBotData(repository, null);
// Return to user for signing and broadcast as we don't have their Qortal private key
try {
return DeployAtTransactionTransformer.toBytes(deployAtTransactionData);
} catch (TransformationException e) {
throw new DataException("Failed to transform DEPLOY_AT transaction?", e);
}
}
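// Illustrative call flow (sketch; the variable names here are assumptions, not part of this class):
//
//   byte[] rawDeployAt = RavencoinACCTv3TradeBot.getInstance().createTrade(repository, createRequest);
//   // These unsigned DEPLOY_AT bytes are handed back to the user (e.g. via the API), who signs and
//   // broadcasts them with their own Qortal key; trade-bot then sits in BOB_WAITING_FOR_AT_CONFIRM
//   // until the AT is confirmed on chain.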
/**
* Creates a trade-bot entry from the 'Alice' viewpoint, i.e. matching RVN to an existing offer.
* <p>
* Requires a chosen trade offer from Bob, passed by <tt>crossChainTradeData</tt>
* and access to a Ravencoin wallet via <tt>xprv58</tt>.
* <p>
* The <tt>crossChainTradeData</tt> contains the current trade offer state
* as extracted from the AT's data segment.
* <p>
* Access to a funded wallet is via a Ravencoin BIP32 hierarchical deterministic key,
* passed via <tt>xprv58</tt>.
* <b>This key will be stored in your node's database</b>
* to allow trade-bot to create/fund the necessary P2SH transactions!
* However, due to the nature of BIP32 keys, it is possible to give the trade-bot
* only a subset of wallet access (see BIP32 for more details).
* <p>
* As an example, the xprv58 can be extracted from a <i>legacy, password-less</i>
* Electrum wallet by going to the console tab and entering:<br>
* <tt>wallet.keystore.xprv</tt><br>
* which should result in a base58 string starting with either 'xprv' (for Ravencoin main-net)
* or 'tprv' (for Ravencoin test-net).
* <p>
* It is envisaged that the value in <tt>xprv58</tt> will actually come from a Qortal-UI-managed wallet.
* <p>
* If sufficient funds are available, <b>this method will actually fund the P2SH-A</b>
* with the Ravencoin amount expected by 'Bob'.
* <p>
* If the Ravencoin transaction is successfully broadcast to the network then
* we also send a MESSAGE to Bob's trade-bot to let them know.
* <p>
* The trade-bot entry is saved to the repository and the cross-chain trading process commences.
* <p>
* @param repository
* @param crossChainTradeData chosen trade OFFER that Alice wants to match
* @param xprv58 funded wallet xprv in base58
* @return true if P2SH-A funding transaction successfully broadcast to Ravencoin network, false otherwise
* @throws DataException
*/
public ResponseResult startResponse(Repository repository, ATData atData, ACCT acct, CrossChainTradeData crossChainTradeData, String xprv58, String receivingAddress) throws DataException {
byte[] tradePrivateKey = TradeBot.generateTradePrivateKey();
byte[] secretA = TradeBot.generateSecret();
byte[] hashOfSecretA = Crypto.hash160(secretA);
byte[] tradeNativePublicKey = TradeBot.deriveTradeNativePublicKey(tradePrivateKey);
byte[] tradeNativePublicKeyHash = Crypto.hash160(tradeNativePublicKey);
String tradeNativeAddress = Crypto.toAddress(tradeNativePublicKey);
byte[] tradeForeignPublicKey = TradeBot.deriveTradeForeignPublicKey(tradePrivateKey);
byte[] tradeForeignPublicKeyHash = Crypto.hash160(tradeForeignPublicKey);
byte[] receivingPublicKeyHash = Base58.decode(receivingAddress); // Actually the whole address, not just PKH
// We need to generate lockTime-A: add tradeTimeout to now
long now = NTP.getTime();
int lockTimeA = crossChainTradeData.tradeTimeout * 60 + (int) (now / 1000L);
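// Worked example (hypothetical values): with tradeTimeout = 60 (minutes) and now = 1,650,000,000,000 ms,
// lockTimeA = 60 * 60 + 1,650,000,000 = 1,650,003,600, i.e. a Unix timestamp (in seconds) roughly one hour in the future.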
TradeBotData tradeBotData = new TradeBotData(tradePrivateKey, RavencoinACCTv3.NAME,
State.ALICE_WAITING_FOR_AT_LOCK.name(), State.ALICE_WAITING_FOR_AT_LOCK.value,
receivingAddress, crossChainTradeData.qortalAtAddress, now, crossChainTradeData.qortAmount,
tradeNativePublicKey, tradeNativePublicKeyHash, tradeNativeAddress,
secretA, hashOfSecretA,
SupportedBlockchain.RAVENCOIN.name(),
tradeForeignPublicKey, tradeForeignPublicKeyHash,
crossChainTradeData.expectedForeignAmount, xprv58, null, lockTimeA, receivingPublicKeyHash);
// Attempt to backup the trade bot data
// Include tradeBotData as an additional parameter, since it's not in the repository yet
TradeBot.backupTradeBotData(repository, Arrays.asList(tradeBotData));
// Check we have enough funds via xprv58 to fund P2SH to cover expectedForeignAmount
long p2shFee;
try {
p2shFee = Ravencoin.getInstance().getP2shFee(now);
} catch (ForeignBlockchainException e) {
LOGGER.debug("Couldn't estimate Ravencoin fees?");
return ResponseResult.NETWORK_ISSUE;
}
// Fee for redeem/refund is subtracted from P2SH-A balance.
// Do not include fee for funding transaction as this is covered by buildSpend()
long amountA = crossChainTradeData.expectedForeignAmount + p2shFee /*redeeming/refunding P2SH-A*/;
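// Worked example (hypothetical values): if expectedForeignAmount = 100,000,000 sats and
// p2shFee = 10,000 sats, then amountA = 100,010,000 sats is locked into P2SH-A, so the later
// redeem or refund transaction can pay its own fee out of the P2SH balance.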
// P2SH-A to be funded
byte[] redeemScriptBytes = BitcoinyHTLC.buildScript(tradeForeignPublicKeyHash, lockTimeA, crossChainTradeData.creatorForeignPKH, hashOfSecretA);
String p2shAddress = Ravencoin.getInstance().deriveP2shAddress(redeemScriptBytes);
// Build transaction for funding P2SH-A
Transaction p2shFundingTransaction = Ravencoin.getInstance().buildSpend(tradeBotData.getForeignKey(), p2shAddress, amountA);
if (p2shFundingTransaction == null) {
LOGGER.debug("Unable to build P2SH-A funding transaction - lack of funds?");
return ResponseResult.BALANCE_ISSUE;
}
try {
Ravencoin.getInstance().broadcastTransaction(p2shFundingTransaction);
} catch (ForeignBlockchainException e) {
// We couldn't fund P2SH-A at this time
LOGGER.debug("Couldn't broadcast P2SH-A funding transaction?");
return ResponseResult.NETWORK_ISSUE;
}
// Attempt to send MESSAGE to Bob's Qortal trade address
byte[] messageData = RavencoinACCTv3.buildOfferMessage(tradeBotData.getTradeForeignPublicKeyHash(), tradeBotData.getHashOfSecret(), tradeBotData.getLockTimeA());
String messageRecipient = crossChainTradeData.qortalCreatorTradeAddress;
boolean isMessageAlreadySent = repository.getMessageRepository().exists(tradeBotData.getTradeNativePublicKey(), messageRecipient, messageData);
if (!isMessageAlreadySent) {
PrivateKeyAccount sender = new PrivateKeyAccount(repository, tradeBotData.getTradePrivateKey());
MessageTransaction messageTransaction = MessageTransaction.build(repository, sender, Group.NO_GROUP, messageRecipient, messageData, false, false);
messageTransaction.computeNonce();
messageTransaction.sign(sender);
// reset repository state to prevent deadlock
repository.discardChanges();
ValidationResult result = messageTransaction.importAsUnconfirmed();
if (result != ValidationResult.OK) {
LOGGER.warn(() -> String.format("Unable to send MESSAGE to Bob's trade-bot %s: %s", messageRecipient, result.name()));
return ResponseResult.NETWORK_ISSUE;
}
}
TradeBot.updateTradeBotState(repository, tradeBotData, () -> String.format("Funding P2SH-A %s. Messaged Bob. Waiting for AT-lock", p2shAddress));
return ResponseResult.OK;
}
@Override
public boolean canDelete(Repository repository, TradeBotData tradeBotData) throws DataException {
State tradeBotState = State.valueOf(tradeBotData.getStateValue());
if (tradeBotState == null)
return true;
// If the AT doesn't exist then we might as well let the user tidy up
if (!repository.getATRepository().exists(tradeBotData.getAtAddress()))
return true;
switch (tradeBotState) {
case BOB_WAITING_FOR_AT_CONFIRM:
case ALICE_DONE:
case BOB_DONE:
case ALICE_REFUNDED:
case BOB_REFUNDED:
case ALICE_REFUNDING_A:
return true;
default:
return false;
}
}
@Override
public void progress(Repository repository, TradeBotData tradeBotData) throws DataException, ForeignBlockchainException {
State tradeBotState = State.valueOf(tradeBotData.getStateValue());
if (tradeBotState == null) {
LOGGER.info(() -> String.format("Trade-bot entry for AT %s has invalid state?", tradeBotData.getAtAddress()));
return;
}
ATData atData = null;
CrossChainTradeData tradeData = null;
if (tradeBotState.requiresAtData) {
// Attempt to fetch AT data
atData = repository.getATRepository().fromATAddress(tradeBotData.getAtAddress());
if (atData == null) {
LOGGER.debug(() -> String.format("Unable to fetch trade AT %s from repository", tradeBotData.getAtAddress()));
return;
}
if (tradeBotState.requiresTradeData) {
tradeData = RavencoinACCTv3.getInstance().populateTradeData(repository, atData);
if (tradeData == null) {
LOGGER.warn(() -> String.format("Unable to fetch ACCT trade data for AT %s from repository", tradeBotData.getAtAddress()));
return;
}
}
}
switch (tradeBotState) {
case BOB_WAITING_FOR_AT_CONFIRM:
handleBobWaitingForAtConfirm(repository, tradeBotData);
break;
case BOB_WAITING_FOR_MESSAGE:
TradeBot.getInstance().updatePresence(repository, tradeBotData, tradeData);
handleBobWaitingForMessage(repository, tradeBotData, atData, tradeData);
break;
case ALICE_WAITING_FOR_AT_LOCK:
TradeBot.getInstance().updatePresence(repository, tradeBotData, tradeData);
handleAliceWaitingForAtLock(repository, tradeBotData, atData, tradeData);
break;
case BOB_WAITING_FOR_AT_REDEEM:
TradeBot.getInstance().updatePresence(repository, tradeBotData, tradeData);
handleBobWaitingForAtRedeem(repository, tradeBotData, atData, tradeData);
break;
case ALICE_DONE:
case BOB_DONE:
break;
case ALICE_REFUNDING_A:
TradeBot.getInstance().updatePresence(repository, tradeBotData, tradeData);
handleAliceRefundingP2shA(repository, tradeBotData, atData, tradeData);
break;
case ALICE_REFUNDED:
case BOB_REFUNDED:
break;
}
}
/**
* Trade-bot is waiting for Bob's AT to deploy.
* <p>
* If AT is deployed, then trade-bot's next step is to wait for MESSAGE from Alice.
*/
private void handleBobWaitingForAtConfirm(Repository repository, TradeBotData tradeBotData) throws DataException {
if (!repository.getATRepository().exists(tradeBotData.getAtAddress())) {
if (NTP.getTime() - tradeBotData.getTimestamp() <= MAX_AT_CONFIRMATION_PERIOD)
return;
// We've waited ages for AT to be confirmed into a block but something has gone awry.
// After this long we assume transaction loss so give up with trade-bot entry too.
tradeBotData.setState(State.BOB_REFUNDED.name());
tradeBotData.setStateValue(State.BOB_REFUNDED.value);
tradeBotData.setTimestamp(NTP.getTime());
// We delete trade-bot entry here instead of saving, hence not using updateTradeBotState()
repository.getCrossChainRepository().delete(tradeBotData.getTradePrivateKey());
repository.saveChanges();
LOGGER.info(() -> String.format("AT %s never confirmed. Giving up on trade", tradeBotData.getAtAddress()));
TradeBot.notifyStateChange(tradeBotData);
return;
}
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_WAITING_FOR_MESSAGE,
() -> String.format("AT %s confirmed ready. Waiting for trade message", tradeBotData.getAtAddress()));
}
/**
* Trade-bot is waiting for MESSAGE from Alice's trade-bot, containing Alice's trade info.
* <p>
* It's possible Bob has cancelled his trade offer, receiving an automatic QORT refund,
* in which case trade-bot is done with this specific trade and finalizes in the refunded state.
* <p>
* Assuming trade is still on offer, trade-bot checks the contents of MESSAGE from Alice's trade-bot.
* <p>
* Details from Alice are used to derive P2SH-A address and this is checked for funding balance.
* <p>
* Assuming P2SH-A has at least expected Ravencoin balance,
* Bob's trade-bot constructs a zero-fee, PoW MESSAGE to send to Bob's AT with more trade details.
* <p>
* On processing this MESSAGE, Bob's AT should switch into 'TRADE' mode and only trade with Alice.
* <p>
* Trade-bot's next step is to wait for Alice to redeem the AT, which will allow Bob to
* extract secret-A needed to redeem Alice's P2SH.
* @throws ForeignBlockchainException
*/
private void handleBobWaitingForMessage(Repository repository, TradeBotData tradeBotData,
ATData atData, CrossChainTradeData crossChainTradeData) throws DataException, ForeignBlockchainException {
// If AT has finished then Bob likely cancelled his trade offer
if (atData.getIsFinished()) {
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_REFUNDED,
() -> String.format("AT %s cancelled - trading aborted", tradeBotData.getAtAddress()));
return;
}
Ravencoin ravencoin = Ravencoin.getInstance();
String address = tradeBotData.getTradeNativeAddress();
List<MessageTransactionData> messageTransactionsData = repository.getMessageRepository().getMessagesByParticipants(null, address, null, null, null);
for (MessageTransactionData messageTransactionData : messageTransactionsData) {
if (messageTransactionData.isText())
continue;
// We're expecting: HASH160(secret-A), Alice's Ravencoin pubkeyhash and lockTime-A
byte[] messageData = messageTransactionData.getData();
RavencoinACCTv3.OfferMessageData offerMessageData = RavencoinACCTv3.extractOfferMessageData(messageData);
if (offerMessageData == null)
continue;
byte[] aliceForeignPublicKeyHash = offerMessageData.partnerRavencoinPKH;
byte[] hashOfSecretA = offerMessageData.hashOfSecretA;
int lockTimeA = (int) offerMessageData.lockTimeA;
long messageTimestamp = messageTransactionData.getTimestamp();
int refundTimeout = RavencoinACCTv3.calcRefundTimeout(messageTimestamp, lockTimeA);
// Determine P2SH-A address and confirm funded
byte[] redeemScriptA = BitcoinyHTLC.buildScript(aliceForeignPublicKeyHash, lockTimeA, tradeBotData.getTradeForeignPublicKeyHash(), hashOfSecretA);
String p2shAddressA = ravencoin.deriveP2shAddress(redeemScriptA);
long feeTimestamp = calcFeeTimestamp(lockTimeA, crossChainTradeData.tradeTimeout);
long p2shFee = Ravencoin.getInstance().getP2shFee(feeTimestamp);
final long minimumAmountA = tradeBotData.getForeignAmount() + p2shFee;
BitcoinyHTLC.Status htlcStatusA = BitcoinyHTLC.determineHtlcStatus(ravencoin.getBlockchainProvider(), p2shAddressA, minimumAmountA);
switch (htlcStatusA) {
case UNFUNDED:
case FUNDING_IN_PROGRESS:
// There might be another MESSAGE from someone else with an actually funded P2SH-A...
continue;
case REDEEM_IN_PROGRESS:
case REDEEMED:
// We've already redeemed this?
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_DONE,
() -> String.format("P2SH-A %s already spent? Assuming trade complete", p2shAddressA));
return;
case REFUND_IN_PROGRESS:
case REFUNDED:
// This P2SH-A is burnt, but there might be another MESSAGE from someone else with an actually funded P2SH-A...
continue;
case FUNDED:
// Fall-through out of switch...
break;
}
// Good to go - send MESSAGE to AT
String aliceNativeAddress = Crypto.toAddress(messageTransactionData.getCreatorPublicKey());
// Build outgoing message, padding each part to 32 bytes to make it easier for AT to consume
byte[] outgoingMessageData = RavencoinACCTv3.buildTradeMessage(aliceNativeAddress, aliceForeignPublicKeyHash, hashOfSecretA, lockTimeA, refundTimeout);
String messageRecipient = tradeBotData.getAtAddress();
boolean isMessageAlreadySent = repository.getMessageRepository().exists(tradeBotData.getTradeNativePublicKey(), messageRecipient, outgoingMessageData);
if (!isMessageAlreadySent) {
PrivateKeyAccount sender = new PrivateKeyAccount(repository, tradeBotData.getTradePrivateKey());
MessageTransaction outgoingMessageTransaction = MessageTransaction.build(repository, sender, Group.NO_GROUP, messageRecipient, outgoingMessageData, false, false);
outgoingMessageTransaction.computeNonce();
outgoingMessageTransaction.sign(sender);
// reset repository state to prevent deadlock
repository.discardChanges();
ValidationResult result = outgoingMessageTransaction.importAsUnconfirmed();
if (result != ValidationResult.OK) {
LOGGER.warn(() -> String.format("Unable to send MESSAGE to AT %s: %s", messageRecipient, result.name()));
return;
}
}
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_WAITING_FOR_AT_REDEEM,
() -> String.format("Locked AT %s to %s. Waiting for AT redeem", tradeBotData.getAtAddress(), aliceNativeAddress));
return;
}
}
/**
* Trade-bot is waiting for Bob's AT to switch to TRADE mode and lock trade to Alice only.
* <p>
* It's possible that Bob has cancelled his trade offer in the meantime, or that somehow
* this process has taken so long that we've reached P2SH-A's locktime, or that someone else
* has managed to trade with Bob. In any of these cases, trade-bot switches to begin the refunding process.
* <p>
* Assuming Bob's AT is locked to Alice, trade-bot checks AT's state data to make sure it is correct.
* <p>
* If all is well, trade-bot then redeems AT using Alice's secret-A, releasing Bob's QORT to Alice.
* <p>
* In revealing a valid secret-A, Bob can then redeem the RVN funds from P2SH-A.
* <p>
* @throws ForeignBlockchainException
*/
private void handleAliceWaitingForAtLock(Repository repository, TradeBotData tradeBotData,
ATData atData, CrossChainTradeData crossChainTradeData) throws DataException, ForeignBlockchainException {
if (aliceUnexpectedState(repository, tradeBotData, atData, crossChainTradeData))
return;
Ravencoin ravencoin = Ravencoin.getInstance();
int lockTimeA = tradeBotData.getLockTimeA();
// Refund P2SH-A if we've passed lockTime-A
if (NTP.getTime() >= lockTimeA * 1000L) {
byte[] redeemScriptA = BitcoinyHTLC.buildScript(tradeBotData.getTradeForeignPublicKeyHash(), lockTimeA, crossChainTradeData.creatorForeignPKH, tradeBotData.getHashOfSecret());
String p2shAddressA = ravencoin.deriveP2shAddress(redeemScriptA);
long feeTimestamp = calcFeeTimestamp(lockTimeA, crossChainTradeData.tradeTimeout);
long p2shFee = Ravencoin.getInstance().getP2shFee(feeTimestamp);
long minimumAmountA = crossChainTradeData.expectedForeignAmount + p2shFee;
BitcoinyHTLC.Status htlcStatusA = BitcoinyHTLC.determineHtlcStatus(ravencoin.getBlockchainProvider(), p2shAddressA, minimumAmountA);
switch (htlcStatusA) {
case UNFUNDED:
case FUNDING_IN_PROGRESS:
case FUNDED:
break;
case REDEEM_IN_PROGRESS:
case REDEEMED:
// Already redeemed?
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_DONE,
() -> String.format("P2SH-A %s already spent? Assuming trade completed", p2shAddressA));
return;
case REFUND_IN_PROGRESS:
case REFUNDED:
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_REFUNDED,
() -> String.format("P2SH-A %s already refunded. Trade aborted", p2shAddressA));
return;
}
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_REFUNDING_A,
() -> atData.getIsFinished()
? String.format("AT %s cancelled. Refunding P2SH-A %s - aborting trade", tradeBotData.getAtAddress(), p2shAddressA)
: String.format("LockTime-A reached, refunding P2SH-A %s - aborting trade", p2shAddressA));
return;
}
// We're waiting for AT to be in TRADE mode
if (crossChainTradeData.mode != AcctMode.TRADING)
return;
// AT is in TRADE mode and locked to us as checked by aliceUnexpectedState() above
// Find our MESSAGE to AT from previous state
List<MessageTransactionData> messageTransactionsData = repository.getMessageRepository().getMessagesByParticipants(tradeBotData.getTradeNativePublicKey(),
crossChainTradeData.qortalCreatorTradeAddress, null, null, null);
if (messageTransactionsData == null || messageTransactionsData.isEmpty()) {
LOGGER.warn(() -> String.format("Unable to find our message to trade creator %s?", crossChainTradeData.qortalCreatorTradeAddress));
return;
}
long recipientMessageTimestamp = messageTransactionsData.get(0).getTimestamp();
int refundTimeout = RavencoinACCTv3.calcRefundTimeout(recipientMessageTimestamp, lockTimeA);
// Our calculated refundTimeout should match AT's refundTimeout
if (refundTimeout != crossChainTradeData.refundTimeout) {
LOGGER.debug(() -> String.format("Trade AT refundTimeout '%d' doesn't match our refundTimeout '%d'", crossChainTradeData.refundTimeout, refundTimeout));
// We'll eventually refund
return;
}
// We're good to redeem AT
// Send 'redeem' MESSAGE to AT using secret-A
byte[] secretA = tradeBotData.getSecret();
String qortalReceivingAddress = Base58.encode(tradeBotData.getReceivingAccountInfo()); // Actually contains whole address, not just PKH
byte[] messageData = RavencoinACCTv3.buildRedeemMessage(secretA, qortalReceivingAddress);
String messageRecipient = tradeBotData.getAtAddress();
boolean isMessageAlreadySent = repository.getMessageRepository().exists(tradeBotData.getTradeNativePublicKey(), messageRecipient, messageData);
if (!isMessageAlreadySent) {
PrivateKeyAccount sender = new PrivateKeyAccount(repository, tradeBotData.getTradePrivateKey());
MessageTransaction messageTransaction = MessageTransaction.build(repository, sender, Group.NO_GROUP, messageRecipient, messageData, false, false);
messageTransaction.computeNonce();
messageTransaction.sign(sender);
// Reset repository state to prevent deadlock
repository.discardChanges();
ValidationResult result = messageTransaction.importAsUnconfirmed();
if (result != ValidationResult.OK) {
LOGGER.warn(() -> String.format("Unable to send MESSAGE to AT %s: %s", messageRecipient, result.name()));
return;
}
}
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_DONE,
() -> String.format("Redeeming AT %s. Funds should arrive at %s",
tradeBotData.getAtAddress(), qortalReceivingAddress));
}
/**
* Trade-bot is waiting for Alice to redeem Bob's AT, thus revealing secret-A which is required to spend the RVN funds from P2SH-A.
* <p>
* It's possible that Bob's AT has reached its trading timeout and automatically refunded QORT back to Bob, in which case
* trade-bot is done with this specific trade and finalizes in the refunded state.
* <p>
* Assuming trade-bot can extract a valid secret-A from Alice's MESSAGE then trade-bot uses that to redeem the RVN funds from P2SH-A
* to Bob's 'foreign'/Ravencoin trade legacy-format address, as derived from trade private key.
* <p>
* (This could potentially be 'improved' to send RVN to any address of Bob's choosing by changing the transaction output).
* <p>
* If trade-bot successfully broadcasts the transaction, then this specific trade is done.
* @throws ForeignBlockchainException
*/
private void handleBobWaitingForAtRedeem(Repository repository, TradeBotData tradeBotData,
ATData atData, CrossChainTradeData crossChainTradeData) throws DataException, ForeignBlockchainException {
// AT should be 'finished' once Alice has redeemed QORT funds
if (!atData.getIsFinished())
// Not finished yet
return;
// If AT is REFUNDED or CANCELLED then something has gone wrong
if (crossChainTradeData.mode == AcctMode.REFUNDED || crossChainTradeData.mode == AcctMode.CANCELLED) {
// Alice hasn't redeemed the QORT, so there is no point in trying to redeem the RVN
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_REFUNDED,
() -> String.format("AT %s has auto-refunded - trade aborted", tradeBotData.getAtAddress()));
return;
}
byte[] secretA = RavencoinACCTv3.getInstance().findSecretA(repository, crossChainTradeData);
if (secretA == null) {
LOGGER.debug(() -> String.format("Unable to find secret-A from redeem message to AT %s?", tradeBotData.getAtAddress()));
return;
}
// Use secret-A to redeem P2SH-A
Ravencoin ravencoin = Ravencoin.getInstance();
byte[] receivingAccountInfo = tradeBotData.getReceivingAccountInfo();
int lockTimeA = crossChainTradeData.lockTimeA;
byte[] redeemScriptA = BitcoinyHTLC.buildScript(crossChainTradeData.partnerForeignPKH, lockTimeA, crossChainTradeData.creatorForeignPKH, crossChainTradeData.hashOfSecretA);
String p2shAddressA = ravencoin.deriveP2shAddress(redeemScriptA);
// Fee for redeem/refund is subtracted from P2SH-A balance.
long feeTimestamp = calcFeeTimestamp(lockTimeA, crossChainTradeData.tradeTimeout);
long p2shFee = Ravencoin.getInstance().getP2shFee(feeTimestamp);
long minimumAmountA = crossChainTradeData.expectedForeignAmount + p2shFee;
BitcoinyHTLC.Status htlcStatusA = BitcoinyHTLC.determineHtlcStatus(ravencoin.getBlockchainProvider(), p2shAddressA, minimumAmountA);
switch (htlcStatusA) {
case UNFUNDED:
case FUNDING_IN_PROGRESS:
// P2SH-A suddenly not funded? Our best bet at this point is to hope for AT auto-refund
return;
case REDEEM_IN_PROGRESS:
case REDEEMED:
// Double-check that we have redeemed P2SH-A...
break;
case REFUND_IN_PROGRESS:
case REFUNDED:
// Wait for AT to auto-refund
return;
case FUNDED: {
Coin redeemAmount = Coin.valueOf(crossChainTradeData.expectedForeignAmount);
ECKey redeemKey = ECKey.fromPrivate(tradeBotData.getTradePrivateKey());
List<TransactionOutput> fundingOutputs = ravencoin.getUnspentOutputs(p2shAddressA);
Transaction p2shRedeemTransaction = BitcoinyHTLC.buildRedeemTransaction(ravencoin.getNetworkParameters(), redeemAmount, redeemKey,
fundingOutputs, redeemScriptA, secretA, receivingAccountInfo);
ravencoin.broadcastTransaction(p2shRedeemTransaction);
break;
}
}
String receivingAddress = ravencoin.pkhToAddress(receivingAccountInfo);
TradeBot.updateTradeBotState(repository, tradeBotData, State.BOB_DONE,
() -> String.format("P2SH-A %s redeemed. Funds should arrive at %s", tradeBotData.getAtAddress(), receivingAddress));
}
/**
* Trade-bot is attempting to refund P2SH-A.
* @throws ForeignBlockchainException
*/
private void handleAliceRefundingP2shA(Repository repository, TradeBotData tradeBotData,
ATData atData, CrossChainTradeData crossChainTradeData) throws DataException, ForeignBlockchainException {
int lockTimeA = tradeBotData.getLockTimeA();
// We can't refund P2SH-A until lockTime-A has passed
if (NTP.getTime() <= lockTimeA * 1000L)
return;
Ravencoin ravencoin = Ravencoin.getInstance();
// We can't refund P2SH-A until median block time has passed lockTime-A (see BIP113)
int medianBlockTime = ravencoin.getMedianBlockTime();
if (medianBlockTime <= lockTimeA)
return;
byte[] redeemScriptA = BitcoinyHTLC.buildScript(tradeBotData.getTradeForeignPublicKeyHash(), lockTimeA, crossChainTradeData.creatorForeignPKH, tradeBotData.getHashOfSecret());
String p2shAddressA = ravencoin.deriveP2shAddress(redeemScriptA);
// Fee for redeem/refund is subtracted from P2SH-A balance.
long feeTimestamp = calcFeeTimestamp(lockTimeA, crossChainTradeData.tradeTimeout);
long p2shFee = Ravencoin.getInstance().getP2shFee(feeTimestamp);
long minimumAmountA = crossChainTradeData.expectedForeignAmount + p2shFee;
BitcoinyHTLC.Status htlcStatusA = BitcoinyHTLC.determineHtlcStatus(ravencoin.getBlockchainProvider(), p2shAddressA, minimumAmountA);
switch (htlcStatusA) {
case UNFUNDED:
case FUNDING_IN_PROGRESS:
// Still waiting for P2SH-A to be funded...
return;
case REDEEM_IN_PROGRESS:
case REDEEMED:
// Too late!
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_DONE,
() -> String.format("P2SH-A %s already spent!", p2shAddressA));
return;
case REFUND_IN_PROGRESS:
case REFUNDED:
break;
case FUNDED: {
Coin refundAmount = Coin.valueOf(crossChainTradeData.expectedForeignAmount);
ECKey refundKey = ECKey.fromPrivate(tradeBotData.getTradePrivateKey());
List<TransactionOutput> fundingOutputs = ravencoin.getUnspentOutputs(p2shAddressA);
// Determine receive address for refund
String receiveAddress = ravencoin.getUnusedReceiveAddress(tradeBotData.getForeignKey());
Address receiving = Address.fromString(ravencoin.getNetworkParameters(), receiveAddress);
Transaction p2shRefundTransaction = BitcoinyHTLC.buildRefundTransaction(ravencoin.getNetworkParameters(), refundAmount, refundKey,
fundingOutputs, redeemScriptA, lockTimeA, receiving.getHash());
ravencoin.broadcastTransaction(p2shRefundTransaction);
break;
}
}
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_REFUNDED,
() -> String.format("LockTime-A reached. Refunded P2SH-A %s. Trade aborted", p2shAddressA));
}
/**
* Returns true if Alice finds AT unexpectedly cancelled, refunded, redeemed or locked to someone else.
* <p>
* Will automatically update trade-bot state to <tt>ALICE_REFUNDING_A</tt> or <tt>ALICE_DONE</tt> as necessary.
*
* @throws DataException
* @throws ForeignBlockchainException
*/
private boolean aliceUnexpectedState(Repository repository, TradeBotData tradeBotData,
ATData atData, CrossChainTradeData crossChainTradeData) throws DataException, ForeignBlockchainException {
// This is OK
if (!atData.getIsFinished() && crossChainTradeData.mode == AcctMode.OFFERING)
return false;
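// The AT is 'locked to us' when its recorded trade partner address matches our trade-bot's native Qortal address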
boolean isAtLockedToUs = tradeBotData.getTradeNativeAddress().equals(crossChainTradeData.qortalPartnerAddress);
if (!atData.getIsFinished() && crossChainTradeData.mode == AcctMode.TRADING)
if (isAtLockedToUs) {
// AT is trading with us - OK
return false;
} else {
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_REFUNDING_A,
() -> String.format("AT %s trading with someone else: %s. Refunding & aborting trade", tradeBotData.getAtAddress(), crossChainTradeData.qortalPartnerAddress));
return true;
}
if (atData.getIsFinished() && crossChainTradeData.mode == AcctMode.REDEEMED && isAtLockedToUs) {
// We've redeemed already?
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_DONE,
() -> String.format("AT %s already redeemed by us. Trade completed", tradeBotData.getAtAddress()));
} else {
// Any other state is not good, so start defensive refund
TradeBot.updateTradeBotState(repository, tradeBotData, State.ALICE_REFUNDING_A,
() -> String.format("AT %s cancelled/refunded/redeemed by someone else/invalid state. Refunding & aborting trade", tradeBotData.getAtAddress()));
}
return true;
}
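/** Returns the timestamp (in milliseconds) used for P2SH fee lookups: lockTime-A (seconds) minus the trade timeout (minutes, converted to seconds), then converted to ms. */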
private long calcFeeTimestamp(int lockTimeA, int tradeTimeout) {
return (lockTimeA - tradeTimeout * 60) * 1000L;
}
}


@@ -2,16 +2,11 @@ package org.qortal.controller.tradebot;
import java.awt.TrayIcon.MessageType;
import java.security.SecureRandom;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.locks.ReentrantLock;
import java.util.*;
import java.util.function.Supplier;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.util.Supplier;
import org.bitcoinj.core.ECKey;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.api.model.crosschain.TradeBotCreateRequest;
@@ -19,25 +14,26 @@ import org.qortal.controller.Controller;
import org.qortal.controller.Synchronizer;
import org.qortal.controller.tradebot.AcctTradeBot.ResponseResult;
import org.qortal.crosschain.*;
import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATData;
import org.qortal.data.crosschain.CrossChainTradeData;
import org.qortal.data.crosschain.TradeBotData;
import org.qortal.data.transaction.BaseTransactionData;
import org.qortal.data.transaction.PresenceTransactionData;
import org.qortal.data.network.TradePresenceData;
import org.qortal.event.Event;
import org.qortal.event.EventBus;
import org.qortal.event.Listener;
import org.qortal.group.Group;
import org.qortal.gui.SysTray;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.network.message.GetTradePresencesMessage;
import org.qortal.network.message.Message;
import org.qortal.network.message.TradePresencesMessage;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.repository.hsqldb.HSQLDBImportExport;
import org.qortal.settings.Settings;
import org.qortal.transaction.PresenceTransaction;
import org.qortal.transaction.PresenceTransaction.PresenceType;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.transform.transaction.TransactionTransformer;
import org.qortal.utils.ByteArray;
import org.qortal.utils.NTP;
import com.google.common.primitives.Longs;
@@ -57,6 +53,15 @@ public class TradeBot implements Listener {
private static final Logger LOGGER = LogManager.getLogger(TradeBot.class);
private static final Random RANDOM = new SecureRandom();
/** Maximum lifetime of trade presence timestamp. 30 mins in ms. */
private static final long PRESENCE_LIFETIME = 30 * 60 * 1000L;
/** How soon before expiry of our own trade presence timestamp that we want to trigger renewal. 5 mins in ms. */
private static final long EARLY_RENEWAL_PERIOD = 5 * 60 * 1000L;
/** Trade presence timestamps are rounded up to this nearest interval. Bigger values improve grouping of entries in [GET_]TRADE_PRESENCES network messages. 15 mins in ms. */
private static final long EXPIRY_ROUNDING = 15 * 60 * 1000L;
/** How often we want to broadcast our list of all known trade presences to peers. 5 mins in ms. */
private static final long PRESENCE_BROADCAST_INTERVAL = 5 * 60 * 1000L;
public interface StateNameAndValueSupplier {
public String getState();
public int getStateValue();
@@ -74,6 +79,18 @@ public class TradeBot implements Listener {
}
}
public static class TradePresenceEvent implements Event {
private final TradePresenceData tradePresenceData;
public TradePresenceEvent(TradePresenceData tradePresenceData) {
this.tradePresenceData = tradePresenceData;
}
public TradePresenceData getTradePresenceData() {
return this.tradePresenceData;
}
}
private static final Map<Class<? extends ACCT>, Supplier<AcctTradeBot>> acctTradeBotSuppliers = new HashMap<>();
static {
acctTradeBotSuppliers.put(BitcoinACCTv1.class, BitcoinACCTv1TradeBot::getInstance);
@@ -83,11 +100,17 @@ public class TradeBot implements Listener {
acctTradeBotSuppliers.put(DogecoinACCTv1.class, DogecoinACCTv1TradeBot::getInstance);
acctTradeBotSuppliers.put(DogecoinACCTv2.class, DogecoinACCTv2TradeBot::getInstance);
acctTradeBotSuppliers.put(DogecoinACCTv3.class, DogecoinACCTv3TradeBot::getInstance);
acctTradeBotSuppliers.put(RavencoinACCTv3.class, RavencoinACCTv3TradeBot::getInstance);
}
private static TradeBot instance;
private final Map<String, Long> presenceTimestampsByAtAddress = Collections.synchronizedMap(new HashMap<>());
private final Map<ByteArray, Long> ourTradePresenceTimestampsByPubkey = Collections.synchronizedMap(new HashMap<>());
private final List<TradePresenceData> pendingTradePresences = Collections.synchronizedList(new ArrayList<>());
private final Map<ByteArray, TradePresenceData> allTradePresencesByPubkey = Collections.synchronizedMap(new HashMap<>());
private Map<ByteArray, TradePresenceData> safeAllTradePresencesByPubkey = Collections.emptyMap();
private long nextTradePresenceBroadcastTimestamp = 0L;
private TradeBot() {
EventBus.INSTANCE.addListener(event -> TradeBot.getInstance().listen(event));
@@ -217,7 +240,14 @@ public class TradeBot implements Listener {
if (!(event instanceof Synchronizer.NewChainTipEvent))
return;
// Don't process trade bots or broadcast presence timestamps if our chain is more than 30 minutes old
final Long minLatestBlockTimestamp = NTP.getTime() - (30 * 60 * 1000L);
if (!Controller.getInstance().isUpToDate(minLatestBlockTimestamp))
return;
synchronized (this) {
expireOldPresenceTimestamps();
List<TradeBotData> allTradeBotData;
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -248,6 +278,8 @@ public class TradeBot implements Listener {
} catch (ForeignBlockchainException e) {
LOGGER.warn(() -> String.format("Foreign blockchain issue processing trade-bot entry for AT %s: %s", tradeBotData.getAtAddress(), e.getMessage()));
}
broadcastPresenceTimestamps();
}
}
@@ -325,6 +357,33 @@ public class TradeBot implements Listener {
}
// PRESENCE-related
public Collection<TradePresenceData> getAllTradePresences() {
return this.safeAllTradePresencesByPubkey.values();
}
/** Trade presence timestamps expire in the 'future' so any that reach 'now' have expired and are removed. */
private void expireOldPresenceTimestamps() {
long now = NTP.getTime();
int allRemovedCount = 0;
synchronized (this.allTradePresencesByPubkey) {
int preRemoveCount = this.allTradePresencesByPubkey.size();
this.allTradePresencesByPubkey.values().removeIf(tradePresenceData -> tradePresenceData.getTimestamp() <= now);
allRemovedCount = preRemoveCount - this.allTradePresencesByPubkey.size();
}
int ourRemovedCount = 0;
synchronized (this.ourTradePresenceTimestampsByPubkey) {
int preRemoveCount = this.ourTradePresenceTimestampsByPubkey.size();
this.ourTradePresenceTimestampsByPubkey.values().removeIf(timestamp -> timestamp < now);
ourRemovedCount = preRemoveCount - this.ourTradePresenceTimestampsByPubkey.size();
}
if (allRemovedCount > 0)
LOGGER.debug("Removed {} expired trade presences, of which {} ours", allRemovedCount, ourRemovedCount);
}
/*package*/ void updatePresence(Repository repository, TradeBotData tradeBotData, CrossChainTradeData tradeData)
throws DataException {
String atAddress = tradeBotData.getAtAddress();
@@ -333,44 +392,292 @@ public class TradeBot implements Listener {
String signerAddress = tradeNativeAccount.getAddress();
/*
* There's no point in Alice trying to build a PRESENCE transaction
* for an AT that isn't locked to her, as other peers won't be able
* to validate the PRESENCE transaction as signing public key won't
* be visible.
*/
if (!signerAddress.equals(tradeData.qortalCreatorTradeAddress) && !signerAddress.equals(tradeData.qortalPartnerAddress))
// Signer is neither Bob, nor Alice, or trade not yet locked to Alice
* There's no point in Alice trying to broadcast presence for an AT that isn't locked to her,
* as other peers won't be able to verify it, because the signing public key isn't yet in the AT's data segment.
*/
if (!signerAddress.equals(tradeData.qortalCreatorTradeAddress) && !signerAddress.equals(tradeData.qortalPartnerAddress)) {
// Signer is neither Bob, nor trade locked to Alice
LOGGER.trace("Can't provide trade presence for our AT {} as it's not yet locked to Alice", atAddress);
return;
}
long now = NTP.getTime();
long threshold = now - PresenceType.TRADE_BOT.getLifetime();
long newExpiry = generateExpiry(now);
ByteArray pubkeyByteArray = ByteArray.wrap(tradeNativeAccount.getPublicKey());
long timestamp = presenceTimestampsByAtAddress.compute(atAddress, (k, v) -> (v == null || v < threshold) ? now : v);
// If map entry's timestamp is missing, or within early renewal period, use the new expiry - otherwise use existing timestamp.
synchronized (this.ourTradePresenceTimestampsByPubkey) {
Long currentTimestamp = this.ourTradePresenceTimestampsByPubkey.get(pubkeyByteArray);
// If timestamp hasn't been updated then nothing to do
if (timestamp != now)
if (currentTimestamp != null && currentTimestamp - now > EARLY_RENEWAL_PERIOD) {
// timestamp still good
LOGGER.trace("Current trade presence timestamp {} still good for our trade {}", currentTimestamp, atAddress);
return;
}
this.ourTradePresenceTimestampsByPubkey.put(pubkeyByteArray, newExpiry);
}
// Create signature
byte[] signature = tradeNativeAccount.sign(Longs.toByteArray(newExpiry));
// Add new trade presence to queue to be broadcast around network
TradePresenceData tradePresenceData = new TradePresenceData(newExpiry, tradeNativeAccount.getPublicKey(), signature, atAddress);
this.pendingTradePresences.add(tradePresenceData);
this.allTradePresencesByPubkey.put(pubkeyByteArray, tradePresenceData);
rebuildSafeAllTradePresences();
LOGGER.trace("New trade presence timestamp {} for our trade {}", newExpiry, atAddress);
EventBus.INSTANCE.notify(new TradePresenceEvent(tradePresenceData));
}
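/** Takes an unmodifiable snapshot of all known trade presences so other threads can read/iterate without holding the lock. */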
private void rebuildSafeAllTradePresences() {
synchronized (this.allTradePresencesByPubkey) {
// Collect into a *new* unmodifiable map.
this.safeAllTradePresencesByPubkey = Map.copyOf(this.allTradePresencesByPubkey);
}
}
private void broadcastPresenceTimestamps() {
// If we have new trade presences that are pending broadcast, send those as a priority
if (!this.pendingTradePresences.isEmpty()) {
// Create a copy for Network to safely use in another thread
List<TradePresenceData> safeTradePresences;
synchronized (this.pendingTradePresences) {
safeTradePresences = List.copyOf(this.pendingTradePresences);
this.pendingTradePresences.clear();
}
LOGGER.debug("Broadcasting {} new trade presences", safeTradePresences.size());
TradePresencesMessage tradePresencesMessage = new TradePresencesMessage(safeTradePresences);
Network.getInstance().broadcast(peer -> tradePresencesMessage);
return;
}
// As we have no new trade presences, check whether it's time to do a general broadcast
Long now = NTP.getTime();
if (now == null || now < nextTradePresenceBroadcastTimestamp)
return;
int txGroupId = Group.NO_GROUP;
byte[] reference = new byte[TransactionTransformer.SIGNATURE_LENGTH];
byte[] creatorPublicKey = tradeNativeAccount.getPublicKey();
long fee = 0L;
nextTradePresenceBroadcastTimestamp = now + PRESENCE_BROADCAST_INTERVAL;
BaseTransactionData baseTransactionData = new BaseTransactionData(timestamp, txGroupId, reference, creatorPublicKey, fee, null);
List<TradePresenceData> safeTradePresences = List.copyOf(this.safeAllTradePresencesByPubkey.values());
int nonce = 0;
byte[] timestampSignature = tradeNativeAccount.sign(Longs.toByteArray(timestamp));
if (safeTradePresences.isEmpty())
return;
PresenceTransactionData transactionData = new PresenceTransactionData(baseTransactionData, nonce, PresenceType.TRADE_BOT, timestampSignature);
LOGGER.debug("Broadcasting all {} known trade presences. Next broadcast timestamp: {}",
safeTradePresences.size(), nextTradePresenceBroadcastTimestamp
);
PresenceTransaction presenceTransaction = new PresenceTransaction(repository, transactionData);
presenceTransaction.computeNonce();
GetTradePresencesMessage getTradePresencesMessage = new GetTradePresencesMessage(safeTradePresences);
Network.getInstance().broadcast(peer -> getTradePresencesMessage);
}
presenceTransaction.sign(tradeNativeAccount);
// Network message processing
ValidationResult result = presenceTransaction.importAsUnconfirmed();
if (result != ValidationResult.OK)
LOGGER.debug(() -> String.format("Unable to build trade-bot PRESENCE transaction for %s: %s", tradeBotData.getAtAddress(), result.name()));
public void onGetTradePresencesMessage(Peer peer, Message message) {
GetTradePresencesMessage getTradePresencesMessage = (GetTradePresencesMessage) message;
List<TradePresenceData> peersTradePresences = getTradePresencesMessage.getTradePresences();
// Create mutable copy from safe snapshot
Map<ByteArray, TradePresenceData> entriesUnknownToPeer = new HashMap<>(this.safeAllTradePresencesByPubkey);
int knownCount = entriesUnknownToPeer.size();
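// Drop any entries the peer already has with the same timestamp, leaving only those unknown (or stale) to them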
for (TradePresenceData peersTradePresence : peersTradePresences) {
ByteArray pubkeyByteArray = ByteArray.wrap(peersTradePresence.getPublicKey());
TradePresenceData ourEntry = entriesUnknownToPeer.get(pubkeyByteArray);
if (ourEntry != null && ourEntry.getTimestamp() == peersTradePresence.getTimestamp())
entriesUnknownToPeer.remove(pubkeyByteArray);
}
if (entriesUnknownToPeer.isEmpty())
return;
LOGGER.debug("Sending {} trade presences to peer {} after excluding their {} from known {}",
entriesUnknownToPeer.size(), peer, peersTradePresences.size(), knownCount
);
// Send complement to peer
List<TradePresenceData> safeTradePresences = List.copyOf(entriesUnknownToPeer.values());
Message responseMessage = new TradePresencesMessage(safeTradePresences);
if (!peer.sendMessage(responseMessage)) {
peer.disconnect("failed to send TRADE_PRESENCES response");
return;
}
}
public void onTradePresencesMessage(Peer peer, Message message) {
TradePresencesMessage tradePresencesMessage = (TradePresencesMessage) message;
List<TradePresenceData> peersTradePresences = tradePresencesMessage.getTradePresences();
long now = NTP.getTime();
// Timestamps before this are too far into the past
long pastThreshold = now;
// Timestamps after this are too far into the future
long futureThreshold = now + PRESENCE_LIFETIME;
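// Map of AT code hashes to ACCT suppliers, used below to check that each presence's AT is a known cross-chain trade contract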
Map<ByteArray, Supplier<ACCT>> acctSuppliersByCodeHash = SupportedBlockchain.getAcctMap();
int newCount = 0;
try (final Repository repository = RepositoryManager.getRepository()) {
for (TradePresenceData peersTradePresence : peersTradePresences) {
long timestamp = peersTradePresence.getTimestamp();
// Ignore if timestamp is out of bounds
if (timestamp < pastThreshold || timestamp > futureThreshold) {
if (timestamp < pastThreshold)
LOGGER.trace("Ignoring trade presence {} from peer {} as timestamp {} is too old vs {}",
peersTradePresence.getAtAddress(), peer, timestamp, pastThreshold
);
else
LOGGER.trace("Ignoring trade presence {} from peer {} as timestamp {} is too new vs {}",
peersTradePresence.getAtAddress(), peer, timestamp, futureThreshold
);
continue;
}
ByteArray pubkeyByteArray = ByteArray.wrap(peersTradePresence.getPublicKey());
// Ignore if we've previously verified this timestamp+publickey combo or sent timestamp is older
TradePresenceData existingTradeData = this.safeAllTradePresencesByPubkey.get(pubkeyByteArray);
if (existingTradeData != null && timestamp <= existingTradeData.getTimestamp()) {
if (timestamp == existingTradeData.getTimestamp())
LOGGER.trace("Ignoring trade presence {} from peer {} as we have verified timestamp {} before",
peersTradePresence.getAtAddress(), peer, timestamp
);
else
LOGGER.trace("Ignoring trade presence {} from peer {} as timestamp {} is older than latest {}",
peersTradePresence.getAtAddress(), peer, timestamp, existingTradeData.getTimestamp()
);
continue;
}
// Check timestamp signature
byte[] timestampSignature = peersTradePresence.getSignature();
byte[] timestampBytes = Longs.toByteArray(timestamp);
byte[] publicKey = peersTradePresence.getPublicKey();
if (!Crypto.verify(publicKey, timestampSignature, timestampBytes)) {
LOGGER.trace("Ignoring trade presence {} from peer {} as signature failed to verify",
peersTradePresence.getAtAddress(), peer
);
continue;
}
ATData atData = repository.getATRepository().fromATAddress(peersTradePresence.getAtAddress());
if (atData == null || atData.getIsFrozen() || atData.getIsFinished()) {
if (atData == null)
LOGGER.trace("Ignoring trade presence {} from peer {} as AT doesn't exist",
peersTradePresence.getAtAddress(), peer
);
else
LOGGER.trace("Ignoring trade presence {} from peer {} as AT is frozen or finished",
peersTradePresence.getAtAddress(), peer
);
continue;
}
ByteArray atCodeHash = ByteArray.wrap(atData.getCodeHash());
Supplier<ACCT> acctSupplier = acctSuppliersByCodeHash.get(atCodeHash);
if (acctSupplier == null) {
LOGGER.trace("Ignoring trade presence {} from peer {} as AT isn't a known ACCT?",
peersTradePresence.getAtAddress(), peer
);
continue;
}
CrossChainTradeData tradeData = acctSupplier.get().populateTradeData(repository, atData);
if (tradeData == null) {
LOGGER.trace("Ignoring trade presence {} from peer {} as trade data not found?",
peersTradePresence.getAtAddress(), peer
);
continue;
}
// Convert signer's public key to address form
String signerAddress = peersTradePresence.getTradeAddress();
// Signer's public key (in address form) must match Bob's / Alice's trade public key (in address form)
if (!signerAddress.equals(tradeData.qortalCreatorTradeAddress) && !signerAddress.equals(tradeData.qortalPartnerAddress)) {
LOGGER.trace("Ignoring trade presence {} from peer {} as signer isn't Alice or Bob?",
peersTradePresence.getAtAddress(), peer
);
continue;
}
// This is new to us
this.allTradePresencesByPubkey.put(pubkeyByteArray, peersTradePresence);
++newCount;
LOGGER.trace("Added trade presence {} from peer {} with timestamp {}",
peersTradePresence.getAtAddress(), peer, timestamp
);
EventBus.INSTANCE.notify(new TradePresenceEvent(peersTradePresence));
}
} catch (DataException e) {
LOGGER.error("Couldn't process TRADE_PRESENCES message due to repository issue", e);
}
if (newCount > 0) {
LOGGER.debug("New trade presences: {}", newCount);
rebuildSafeAllTradePresences();
}
}
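/** Bridges a legacy PRESENCE transaction into the trade-presence map, but only if it would produce a newer expiry than any existing entry. */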
public void bridgePresence(long timestamp, byte[] publicKey, byte[] signature, String atAddress) {
long expiry = generateExpiry(timestamp);
ByteArray pubkeyByteArray = ByteArray.wrap(publicKey);
TradePresenceData fakeTradePresenceData = new TradePresenceData(expiry, publicKey, signature, atAddress);
// Only bridge if trade presence expiry timestamp is newer
TradePresenceData computedTradePresenceData = this.allTradePresencesByPubkey.compute(pubkeyByteArray, (k, v) ->
v == null || v.getTimestamp() < expiry ? fakeTradePresenceData : v
);
if (computedTradePresenceData == fakeTradePresenceData) {
LOGGER.trace("Bridged PRESENCE transaction for trade {} with timestamp {}", atAddress, expiry);
rebuildSafeAllTradePresences();
EventBus.INSTANCE.notify(new TradePresenceEvent(fakeTradePresenceData));
}
}
/** Decorates a CrossChainTradeData object with Alice / Bob trade-bot presence timestamp, if available. */
public void decorateTradeDataWithPresence(CrossChainTradeData crossChainTradeData) {
// Match by AT address, then check for Bob vs Alice
this.safeAllTradePresencesByPubkey.values().stream()
.filter(tradePresenceData -> tradePresenceData.getAtAddress().equals(crossChainTradeData.qortalAtAddress))
.forEach(tradePresenceData -> {
String signerAddress = tradePresenceData.getTradeAddress();
// Signer's public key (in address form) must match Bob's / Alice's trade public key (in address form)
if (signerAddress.equals(crossChainTradeData.qortalCreatorTradeAddress))
crossChainTradeData.creatorPresenceExpiry = tradePresenceData.getTimestamp();
else if (signerAddress.equals(crossChainTradeData.qortalPartnerAddress))
crossChainTradeData.partnerPresenceExpiry = tradePresenceData.getTimestamp();
});
}
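/**
* Rounds the given timestamp down to the previous EXPIRY_ROUNDING boundary (the -1 pushes exact multiples
* back to the prior boundary) then adds PRESENCE_LIFETIME, so presences generated within the same interval
* share an expiry and group together in [GET_]TRADE_PRESENCES messages.
* e.g. with 15-minute rounding and a 30-minute lifetime, a 12:34 timestamp expires at 13:00.
*/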
private long generateExpiry(long timestamp) {
return ((timestamp - 1) / EXPIRY_ROUNDING) * EXPIRY_ROUNDING + PRESENCE_LIFETIME;
}
}


@@ -42,30 +42,40 @@ public class Bitcoin extends Bitcoiny {
public Collection<ElectrumX.Server> getServers() {
return Arrays.asList(
// Servers chosen on NO BASIS WHATSOEVER from various sources!
new Server("hodlers.beer", Server.ConnectionType.SSL, 50002),
new Server("btc.lastingcoin.net", Server.ConnectionType.SSL, 50002),
new Server("electrum.bitaroo.net", Server.ConnectionType.SSL, 50002),
new Server("bitcoin.grey.pw", Server.ConnectionType.SSL, 50002),
// Status verified at https://1209k.com/bitcoin-eye/ele.php?chain=btc
//CLOSED new Server("bitcoin.grey.pw", Server.ConnectionType.SSL, 50002),
//CLOSED new Server("btc.litepay.ch", Server.ConnectionType.SSL, 50002),
//CLOSED new Server("electrum.pabu.io", Server.ConnectionType.SSL, 50002),
//CLOSED new Server("electrumx.hodlwallet.com", Server.ConnectionType.SSL, 50002),
//CLOSED new Server("gd42.org", Server.ConnectionType.SSL, 50002),
//CLOSED new Server("korea.electrum-server.com", Server.ConnectionType.SSL, 50002),
//CLOSED new Server("prospero.bitsrc.net", Server.ConnectionType.SSL, 50002),
//1.15.0 new Server("alviss.coinjoined.com", Server.ConnectionType.SSL, 50002),
//1.15.0 new Server("electrum.acinq.co", Server.ConnectionType.SSL, 50002),
//1.14.0 new Server("electrum.coinext.com.br", Server.ConnectionType.SSL, 50002),
new Server("104.248.139.211", Server.ConnectionType.SSL, 50002),
new Server("142.93.6.38", Server.ConnectionType.SSL, 50002),
new Server("157.245.172.236", Server.ConnectionType.SSL, 50002),
new Server("167.172.226.175", Server.ConnectionType.SSL, 50002),
new Server("167.172.42.31", Server.ConnectionType.SSL, 50002),
new Server("178.62.80.20", Server.ConnectionType.SSL, 50002),
new Server("185.64.116.15", Server.ConnectionType.SSL, 50002),
new Server("alviss.coinjoined.com", Server.ConnectionType.SSL, 50002),
new Server("btc.litepay.ch", Server.ConnectionType.SSL, 50002),
new Server("xtrum.com", Server.ConnectionType.SSL, 50002),
new Server("electrum.acinq.co", Server.ConnectionType.SSL, 50002),
new Server("caleb.vegas", Server.ConnectionType.SSL, 50002),
new Server("electrum.coinext.com.br", Server.ConnectionType.TCP, 50001),
new Server("korea.electrum-server.com", Server.ConnectionType.TCP, 50001),
new Server("eai.coincited.net", Server.ConnectionType.TCP, 50001),
new Server("electrum.coinext.com.br", Server.ConnectionType.SSL, 50002),
new Server("node1.btccuracao.com", Server.ConnectionType.SSL, 50002),
new Server("korea.electrum-server.com", Server.ConnectionType.SSL, 50002),
new Server("btce.iiiiiii.biz", Server.ConnectionType.SSL, 50002),
new Server("68.183.188.105", Server.ConnectionType.SSL, 50002),
new Server("bitcoin.lukechilds.co", Server.ConnectionType.SSL, 50002),
new Server("guichet.centure.cc", Server.ConnectionType.SSL, 50002),
new Server("electrumx.hodlwallet.com", Server.ConnectionType.SSL, 50002),
new Server("blkhub.net", Server.ConnectionType.SSL, 50002),
new Server("btc.lastingcoin.net", Server.ConnectionType.SSL, 50002),
new Server("btce.iiiiiii.biz", Server.ConnectionType.SSL, 50002),
new Server("caleb.vegas", Server.ConnectionType.SSL, 50002),
new Server("eai.coincited.net", Server.ConnectionType.SSL, 50002),
new Server("prospero.bitsrc.net", Server.ConnectionType.SSL, 50002),
new Server("gd42.org", Server.ConnectionType.SSL, 50002),
new Server("electrum.pabu.io", Server.ConnectionType.SSL, 50002));
new Server("electrum.bitaroo.net", Server.ConnectionType.SSL, 50002),
new Server("electrumx.dev", Server.ConnectionType.SSL, 50002),
new Server("elx.bitske.com", Server.ConnectionType.SSL, 50002),
new Server("fortress.qtornado.com", Server.ConnectionType.SSL, 50002),
new Server("guichet.centure.cc", Server.ConnectionType.SSL, 50002),
new Server("kareoke.qoppa.org", Server.ConnectionType.SSL, 50002),
new Server("hodlers.beer", Server.ConnectionType.SSL, 50002),
new Server("node1.btccuracao.com", Server.ConnectionType.SSL, 50002),
new Server("xtrum.com", Server.ConnectionType.SSL, 50002));
}
@Override


@@ -45,9 +45,9 @@ public class Dogecoin extends Bitcoiny {
public Collection<Server> getServers() {
return Arrays.asList(
// Servers chosen on NO BASIS WHATSOEVER from various sources!
new Server("electrum1.cipig.net", ConnectionType.TCP, 10060),
new Server("electrum2.cipig.net", ConnectionType.TCP, 10060),
new Server("electrum3.cipig.net", ConnectionType.TCP, 10060));
new Server("electrum1.cipig.net", ConnectionType.SSL, 20060),
new Server("electrum2.cipig.net", ConnectionType.SSL, 20060),
new Server("electrum3.cipig.net", ConnectionType.SSL, 20060));
// TODO: add more mainnet servers. It's too centralized.
}


@@ -44,23 +44,17 @@ public class Litecoin extends Bitcoiny {
public Collection<ElectrumX.Server> getServers() {
return Arrays.asList(
// Servers chosen on NO BASIS WHATSOEVER from various sources!
new Server("electrum-ltc.someguy123.net", Server.ConnectionType.SSL, 50002),
new Server("backup.electrum-ltc.org", Server.ConnectionType.TCP, 50001),
// Status verified at https://1209k.com/bitcoin-eye/ele.php?chain=ltc
//CLOSED new Server("electrum-ltc.petrkr.net", Server.ConnectionType.SSL, 60002),
//CLOSED new Server("electrum-ltc.someguy123.net", Server.ConnectionType.SSL, 50002),
//PHISHY new Server("electrum-ltc.bysh.me", Server.ConnectionType.SSL, 50002),
new Server("backup.electrum-ltc.org", Server.ConnectionType.SSL, 443),
new Server("electrum.ltc.xurious.com", Server.ConnectionType.TCP, 50001),
new Server("electrum.ltc.xurious.com", Server.ConnectionType.SSL, 50002),
new Server("electrum-ltc.bysh.me", Server.ConnectionType.SSL, 50002),
new Server("electrum1.cipig.net", Server.ConnectionType.SSL, 20063),
new Server("electrum2.cipig.net", Server.ConnectionType.SSL, 20063),
new Server("electrum3.cipig.net", Server.ConnectionType.SSL, 20063),
new Server("electrum3.cipig.net", ConnectionType.TCP, 10063),
new Server("electrum2.cipig.net", Server.ConnectionType.TCP, 10063),
new Server("electrum1.cipig.net", Server.ConnectionType.SSL, 20063),
new Server("electrum1.cipig.net", Server.ConnectionType.TCP, 10063),
new Server("electrum-ltc.petrkr.net", Server.ConnectionType.SSL, 60002),
new Server("ltc.litepay.ch", Server.ConnectionType.SSL, 50022),
new Server("electrum-ltc-bysh.me", Server.ConnectionType.TCP, 50002),
new Server("electrum.jochen-hoenicke.de", Server.ConnectionType.TCP, 50005),
new Server("node.ispol.sk", Server.ConnectionType.TCP, 50004));
new Server("ltc.rentonrisk.com", Server.ConnectionType.SSL, 50002));
}
@Override


@@ -0,0 +1,175 @@
package org.qortal.crosschain;
import java.util.Arrays;
import java.util.Collection;
import java.util.EnumMap;
import java.util.Map;
import org.bitcoinj.core.Coin;
import org.bitcoinj.core.Context;
import org.bitcoinj.core.NetworkParameters;
import org.bitcoinj.params.RegTestParams;
import org.bitcoinj.params.TestNet3Params;
import org.libdohj.params.RavencoinMainNetParams;
import org.qortal.crosschain.ElectrumX.Server;
import org.qortal.crosschain.ElectrumX.Server.ConnectionType;
import org.qortal.settings.Settings;
public class Ravencoin extends Bitcoiny {
public static final String CURRENCY_CODE = "RVN";
private static final Coin DEFAULT_FEE_PER_KB = Coin.valueOf(1125000); // 0.01125 RVN per 1000 bytes
private static final long MINIMUM_ORDER_AMOUNT = 1000000; // 0.01 RVN minimum order, to avoid dust errors
// Temporary values until a dynamic fee system is written.
private static final long MAINNET_FEE = 1000000L;
private static final long NON_MAINNET_FEE = 1000000L; // enough for TESTNET3 and should be OK for REGTEST
private static final Map<ConnectionType, Integer> DEFAULT_ELECTRUMX_PORTS = new EnumMap<>(ConnectionType.class);
static {
DEFAULT_ELECTRUMX_PORTS.put(ConnectionType.TCP, 50001);
DEFAULT_ELECTRUMX_PORTS.put(ConnectionType.SSL, 50002);
}
public enum RavencoinNet {
MAIN {
@Override
public NetworkParameters getParams() {
return RavencoinMainNetParams.get();
}
@Override
public Collection<Server> getServers() {
return Arrays.asList(
// Servers chosen on NO BASIS WHATSOEVER from various sources!
// Status verified at https://1209k.com/bitcoin-eye/ele.php?chain=rvn
new Server("aethyn.com", ConnectionType.SSL, 50002),
new Server("electrum2.rvn.rocks", ConnectionType.SSL, 50002),
new Server("rvn-dashboard.com", ConnectionType.SSL, 50002),
new Server("rvn4lyfe.com", ConnectionType.SSL, 50002),
new Server("electrum1.cipig.net", ConnectionType.SSL, 20051),
new Server("electrum2.cipig.net", ConnectionType.SSL, 20051),
new Server("electrum3.cipig.net", ConnectionType.SSL, 20051));
}
@Override
public String getGenesisHash() {
return "0000006b444bc2f2ffe627be9d9e7e7a0730000870ef6eb6da46c8eae389df90";
}
@Override
public long getP2shFee(Long timestamp) {
// TODO: This will need to be replaced with something better in the near future!
return MAINNET_FEE;
}
},
TEST3 {
@Override
public NetworkParameters getParams() {
return TestNet3Params.get();
}
@Override
public Collection<Server> getServers() {
return Arrays.asList(); // TODO: find testnet servers
}
@Override
public String getGenesisHash() {
return "000000ecfc5e6324a079542221d00e10362bdc894d56500c414060eea8a3ad5a";
}
@Override
public long getP2shFee(Long timestamp) {
return NON_MAINNET_FEE;
}
},
REGTEST {
@Override
public NetworkParameters getParams() {
return RegTestParams.get();
}
@Override
public Collection<Server> getServers() {
return Arrays.asList(
new Server("localhost", ConnectionType.TCP, 50001),
new Server("localhost", ConnectionType.SSL, 50002));
}
@Override
public String getGenesisHash() {
// This is unique to each regtest instance
return null;
}
@Override
public long getP2shFee(Long timestamp) {
return NON_MAINNET_FEE;
}
};
public abstract NetworkParameters getParams();
public abstract Collection<Server> getServers();
public abstract String getGenesisHash();
public abstract long getP2shFee(Long timestamp) throws ForeignBlockchainException;
}
private static Ravencoin instance;
private final RavencoinNet ravencoinNet;
// Constructors and instance
private Ravencoin(RavencoinNet ravencoinNet, BitcoinyBlockchainProvider blockchain, Context bitcoinjContext, String currencyCode) {
super(blockchain, bitcoinjContext, currencyCode);
this.ravencoinNet = ravencoinNet;
LOGGER.info(() -> String.format("Starting Ravencoin support using %s", this.ravencoinNet.name()));
}
public static synchronized Ravencoin getInstance() {
if (instance == null) {
RavencoinNet ravencoinNet = Settings.getInstance().getRavencoinNet();
BitcoinyBlockchainProvider electrumX = new ElectrumX("Ravencoin-" + ravencoinNet.name(), ravencoinNet.getGenesisHash(), ravencoinNet.getServers(), DEFAULT_ELECTRUMX_PORTS);
Context bitcoinjContext = new Context(ravencoinNet.getParams());
instance = new Ravencoin(ravencoinNet, electrumX, bitcoinjContext, CURRENCY_CODE);
}
return instance;
}
// Getters & setters
public static synchronized void resetForTesting() {
instance = null;
}
// Actual useful methods for use by other classes
@Override
public Coin getFeePerKb() {
return DEFAULT_FEE_PER_KB;
}
@Override
public long getMinimumOrderAmount() {
return MINIMUM_ORDER_AMOUNT;
}
/**
* Returns estimated RVN fee, in sats per 1000 bytes, optionally for a historic timestamp.
*
* @param timestamp optional milliseconds since epoch, or null for 'now'
* @return sats per 1000 bytes, or throws ForeignBlockchainException if something went wrong
*/
@Override
public long getP2shFee(Long timestamp) throws ForeignBlockchainException {
return this.ravencoinNet.getP2shFee(timestamp);
}
}


@@ -0,0 +1,858 @@
package org.qortal.crosschain;
import com.google.common.hash.HashCode;
import com.google.common.primitives.Bytes;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.ciyam.at.*;
import org.qortal.account.Account;
import org.qortal.asset.Asset;
import org.qortal.at.QortalFunctionCode;
import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATData;
import org.qortal.data.at.ATStateData;
import org.qortal.data.crosschain.CrossChainTradeData;
import org.qortal.data.transaction.MessageTransactionData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.utils.Base58;
import org.qortal.utils.BitTwiddling;
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;
import static org.ciyam.at.OpCode.calcOffset;
/**
* Cross-chain trade AT
*
* <p>
* <ul>
* <li>Bob generates Ravencoin & Qortal 'trade' keys
* <ul>
* <li>private key required to sign P2SH redeem tx</li>
* <li>private key could be used to create 'secret' (e.g. double-SHA256)</li>
* <li>encrypted private key could be stored in Qortal AT for access by Bob from any node</li>
* </ul>
* </li>
* <li>Bob deploys Qortal AT
* <ul>
* </ul>
* </li>
* <li>Alice finds Qortal AT and wants to trade
* <ul>
* <li>Alice generates Ravencoin & Qortal 'trade' keys</li>
* <li>Alice funds Ravencoin P2SH-A</li>
* <li>Alice sends 'offer' MESSAGE to Bob from her Qortal trade address, containing:
* <ul>
* <li>hash-of-secret-A</li>
* <li>her 'trade' Ravencoin PKH</li>
* </ul>
* </li>
* </ul>
* </li>
* <li>Bob receives "offer" MESSAGE
* <ul>
* <li>Checks Alice's P2SH-A</li>
* <li>Sends 'trade' MESSAGE to Qortal AT from his trade address, containing:
* <ul>
* <li>Alice's trade Qortal address</li>
* <li>Alice's trade Ravencoin PKH</li>
* <li>hash-of-secret-A</li>
* </ul>
* </li>
* </ul>
* </li>
* <li>Alice checks Qortal AT to confirm it's locked to her
* <ul>
* <li>Alice sends 'redeem' MESSAGE to Qortal AT from her trade address, containing:
* <ul>
* <li>secret-A</li>
* <li>Qortal receiving address of her choosing</li>
* </ul>
* </li>
* <li>AT's QORT funds are sent to Qortal receiving address</li>
* </ul>
* </li>
* <li>Bob checks AT, extracts secret-A
* <ul>
* <li>Bob redeems P2SH-A using his Ravencoin trade key and secret-A</li>
* <li>P2SH-A RVN funds end up at Ravencoin address determined by redeem transaction output(s)</li>
* </ul>
* </li>
* </ul>
*/
public class RavencoinACCTv3 implements ACCT {
private static final Logger LOGGER = LogManager.getLogger(RavencoinACCTv3.class);
public static final String NAME = RavencoinACCTv3.class.getSimpleName();
public static final byte[] CODE_BYTES_HASH = HashCode.fromString("91395fa1ec0dfa35beddb0a7f4cc0a1bede157c38787ddb0af0cf03dfdc10f77").asBytes(); // SHA256 of AT code bytes
public static final int SECRET_LENGTH = 32;
/** <b>Value</b> offset into AT segment where 'mode' variable (long) is stored. (Multiply by MachineState.VALUE_SIZE for byte offset). */
private static final int MODE_VALUE_OFFSET = 61;
/** <b>Byte</b> offset into AT state data where 'mode' variable (long) is stored. */
public static final int MODE_BYTE_OFFSET = MachineState.HEADER_LENGTH + (MODE_VALUE_OFFSET * MachineState.VALUE_SIZE);
public static class OfferMessageData {
public byte[] partnerRavencoinPKH;
public byte[] hashOfSecretA;
public long lockTimeA;
}
public static final int OFFER_MESSAGE_LENGTH = 20 /*partnerRavencoinPKH*/ + 20 /*hashOfSecretA*/ + 8 /*lockTimeA*/;
public static final int TRADE_MESSAGE_LENGTH = 32 /*partner's Qortal trade address (padded from 25 to 32)*/
+ 24 /*partner's Ravencoin PKH (padded from 20 to 24)*/
+ 8 /*AT trade timeout (minutes)*/
+ 24 /*hash of secret-A (padded from 20 to 24)*/
+ 8 /*lockTimeA*/;
public static final int REDEEM_MESSAGE_LENGTH = 32 /*secret-A*/ + 32 /*partner's Qortal receiving address padded from 25 to 32*/;
public static final int CANCEL_MESSAGE_LENGTH = 32 /*AT creator's Qortal address*/;
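// Note: message fields are padded to whole 32-byte blocks to suit the AT's B register (four 8-byte values);
// see the Bytes.ensureCapacity(..., 32, 0) calls and GET_B_IND usage below.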
private static RavencoinACCTv3 instance;
private RavencoinACCTv3() {
}
public static synchronized RavencoinACCTv3 getInstance() {
if (instance == null)
instance = new RavencoinACCTv3();
return instance;
}
@Override
public byte[] getCodeBytesHash() {
return CODE_BYTES_HASH;
}
@Override
public int getModeByteOffset() {
return MODE_BYTE_OFFSET;
}
@Override
public ForeignBlockchain getBlockchain() {
return Ravencoin.getInstance();
}
/**
* Returns Qortal AT creation bytes for cross-chain trading AT.
* <p>
* <tt>tradeTimeout</tt> (minutes) is the time window for the trade partner to send the
* 32-byte secret to the AT, before the AT automatically refunds the AT's creator.
*
* @param creatorTradeAddress AT creator's trade Qortal address
* @param ravencoinPublicKeyHash 20-byte HASH160 of creator's trade Ravencoin public key
* @param qortAmount how much QORT to pay the trade partner if they send the correct 32-byte secret to the AT
* @param ravencoinAmount how much RVN the AT creator is expecting to trade
* @param tradeTimeout suggested timeout for entire trade
*/
public static byte[] buildQortalAT(String creatorTradeAddress, byte[] ravencoinPublicKeyHash, long qortAmount, long ravencoinAmount, int tradeTimeout) {
if (ravencoinPublicKeyHash.length != 20)
throw new IllegalArgumentException("Ravencoin public key hash should be 20 bytes");
// Labels for data segment addresses
int addrCounter = 0;
// Constants (with corresponding dataByteBuffer.put*() calls below)
final int addrCreatorTradeAddress1 = addrCounter++;
final int addrCreatorTradeAddress2 = addrCounter++;
final int addrCreatorTradeAddress3 = addrCounter++;
final int addrCreatorTradeAddress4 = addrCounter++;
final int addrRavencoinPublicKeyHash = addrCounter;
addrCounter += 4;
final int addrQortAmount = addrCounter++;
final int addrRavencoinAmount = addrCounter++;
final int addrTradeTimeout = addrCounter++;
final int addrMessageTxnType = addrCounter++;
final int addrExpectedTradeMessageLength = addrCounter++;
final int addrExpectedRedeemMessageLength = addrCounter++;
final int addrCreatorAddressPointer = addrCounter++;
final int addrQortalPartnerAddressPointer = addrCounter++;
final int addrMessageSenderPointer = addrCounter++;
final int addrTradeMessagePartnerRavencoinPKHOffset = addrCounter++;
final int addrPartnerRavencoinPKHPointer = addrCounter++;
final int addrTradeMessageHashOfSecretAOffset = addrCounter++;
final int addrHashOfSecretAPointer = addrCounter++;
final int addrRedeemMessageReceivingAddressOffset = addrCounter++;
final int addrMessageDataPointer = addrCounter++;
final int addrMessageDataLength = addrCounter++;
final int addrPartnerReceivingAddressPointer = addrCounter++;
final int addrEndOfConstants = addrCounter;
// Variables
final int addrCreatorAddress1 = addrCounter++;
final int addrCreatorAddress2 = addrCounter++;
final int addrCreatorAddress3 = addrCounter++;
final int addrCreatorAddress4 = addrCounter++;
final int addrQortalPartnerAddress1 = addrCounter++;
final int addrQortalPartnerAddress2 = addrCounter++;
final int addrQortalPartnerAddress3 = addrCounter++;
final int addrQortalPartnerAddress4 = addrCounter++;
final int addrLockTimeA = addrCounter++;
final int addrRefundTimeout = addrCounter++;
final int addrRefundTimestamp = addrCounter++;
final int addrLastTxnTimestamp = addrCounter++;
final int addrBlockTimestamp = addrCounter++;
final int addrTxnType = addrCounter++;
final int addrResult = addrCounter++;
final int addrMessageSender1 = addrCounter++;
final int addrMessageSender2 = addrCounter++;
final int addrMessageSender3 = addrCounter++;
final int addrMessageSender4 = addrCounter++;
final int addrMessageLength = addrCounter++;
final int addrMessageData = addrCounter;
addrCounter += 4;
final int addrHashOfSecretA = addrCounter;
addrCounter += 4;
final int addrPartnerRavencoinPKH = addrCounter;
addrCounter += 4;
final int addrPartnerReceivingAddress = addrCounter;
addrCounter += 4;
final int addrMode = addrCounter++;
assert addrMode == MODE_VALUE_OFFSET : String.format("addrMode %d does not match MODE_VALUE_OFFSET %d", addrMode, MODE_VALUE_OFFSET);
// Data segment
ByteBuffer dataByteBuffer = ByteBuffer.allocate(addrCounter * MachineState.VALUE_SIZE);
// AT creator's trade Qortal address, decoded from Base58
assert dataByteBuffer.position() == addrCreatorTradeAddress1 * MachineState.VALUE_SIZE : "addrCreatorTradeAddress1 incorrect";
byte[] creatorTradeAddressBytes = Base58.decode(creatorTradeAddress);
dataByteBuffer.put(Bytes.ensureCapacity(creatorTradeAddressBytes, 32, 0));
// Ravencoin public key hash
assert dataByteBuffer.position() == addrRavencoinPublicKeyHash * MachineState.VALUE_SIZE : "addrRavencoinPublicKeyHash incorrect";
dataByteBuffer.put(Bytes.ensureCapacity(ravencoinPublicKeyHash, 32, 0));
// Redeem Qort amount
assert dataByteBuffer.position() == addrQortAmount * MachineState.VALUE_SIZE : "addrQortAmount incorrect";
dataByteBuffer.putLong(qortAmount);
// Expected Ravencoin amount
assert dataByteBuffer.position() == addrRavencoinAmount * MachineState.VALUE_SIZE : "addrRavencoinAmount incorrect";
dataByteBuffer.putLong(ravencoinAmount);
// Suggested trade timeout (minutes)
assert dataByteBuffer.position() == addrTradeTimeout * MachineState.VALUE_SIZE : "addrTradeTimeout incorrect";
dataByteBuffer.putLong(tradeTimeout);
// We're only interested in MESSAGE transactions
assert dataByteBuffer.position() == addrMessageTxnType * MachineState.VALUE_SIZE : "addrMessageTxnType incorrect";
dataByteBuffer.putLong(API.ATTransactionType.MESSAGE.value);
// Expected length of 'trade' MESSAGE data from AT creator
assert dataByteBuffer.position() == addrExpectedTradeMessageLength * MachineState.VALUE_SIZE : "addrExpectedTradeMessageLength incorrect";
dataByteBuffer.putLong(TRADE_MESSAGE_LENGTH);
// Expected length of 'redeem' MESSAGE data from trade partner
assert dataByteBuffer.position() == addrExpectedRedeemMessageLength * MachineState.VALUE_SIZE : "addrExpectedRedeemMessageLength incorrect";
dataByteBuffer.putLong(REDEEM_MESSAGE_LENGTH);
// Index into data segment of AT creator's address, used by GET_B_IND
assert dataByteBuffer.position() == addrCreatorAddressPointer * MachineState.VALUE_SIZE : "addrCreatorAddressPointer incorrect";
dataByteBuffer.putLong(addrCreatorAddress1);
// Index into data segment of partner's Qortal address, used by SET_B_IND
assert dataByteBuffer.position() == addrQortalPartnerAddressPointer * MachineState.VALUE_SIZE : "addrQortalPartnerAddressPointer incorrect";
dataByteBuffer.putLong(addrQortalPartnerAddress1);
// Index into data segment of (temporary) transaction's sender's address, used by GET_B_IND
assert dataByteBuffer.position() == addrMessageSenderPointer * MachineState.VALUE_SIZE : "addrMessageSenderPointer incorrect";
dataByteBuffer.putLong(addrMessageSender1);
// Offset into 'trade' MESSAGE data payload for extracting partner's Ravencoin PKH
assert dataByteBuffer.position() == addrTradeMessagePartnerRavencoinPKHOffset * MachineState.VALUE_SIZE : "addrTradeMessagePartnerRavencoinPKHOffset incorrect";
dataByteBuffer.putLong(32L);
// Index into data segment of partner's Ravencoin PKH, used by GET_B_IND
assert dataByteBuffer.position() == addrPartnerRavencoinPKHPointer * MachineState.VALUE_SIZE : "addrPartnerRavencoinPKHPointer incorrect";
dataByteBuffer.putLong(addrPartnerRavencoinPKH);
// Offset into 'trade' MESSAGE data payload for extracting hash-of-secret-A
assert dataByteBuffer.position() == addrTradeMessageHashOfSecretAOffset * MachineState.VALUE_SIZE : "addrTradeMessageHashOfSecretAOffset incorrect";
dataByteBuffer.putLong(64L);
// Index into data segment to hash of secret A, used by GET_B_IND
assert dataByteBuffer.position() == addrHashOfSecretAPointer * MachineState.VALUE_SIZE : "addrHashOfSecretAPointer incorrect";
dataByteBuffer.putLong(addrHashOfSecretA);
// Offset into 'redeem' MESSAGE data payload for extracting Qortal receiving address
assert dataByteBuffer.position() == addrRedeemMessageReceivingAddressOffset * MachineState.VALUE_SIZE : "addrRedeemMessageReceivingAddressOffset incorrect";
dataByteBuffer.putLong(32L);
// Source location and length for hashing any passed secret
assert dataByteBuffer.position() == addrMessageDataPointer * MachineState.VALUE_SIZE : "addrMessageDataPointer incorrect";
dataByteBuffer.putLong(addrMessageData);
assert dataByteBuffer.position() == addrMessageDataLength * MachineState.VALUE_SIZE : "addrMessageDataLength incorrect";
dataByteBuffer.putLong(32L);
// Pointer into data segment of where to save partner's receiving Qortal address, used by GET_B_IND
assert dataByteBuffer.position() == addrPartnerReceivingAddressPointer * MachineState.VALUE_SIZE : "addrPartnerReceivingAddressPointer incorrect";
dataByteBuffer.putLong(addrPartnerReceivingAddress);
assert dataByteBuffer.position() == addrEndOfConstants * MachineState.VALUE_SIZE : "dataByteBuffer position not at end of constants";
// Code labels
Integer labelRefund = null;
Integer labelTradeTxnLoop = null;
Integer labelCheckTradeTxn = null;
Integer labelCheckCancelTxn = null;
Integer labelNotTradeNorCancelTxn = null;
Integer labelCheckNonRefundTradeTxn = null;
Integer labelTradeTxnExtract = null;
Integer labelRedeemTxnLoop = null;
Integer labelCheckRedeemTxn = null;
Integer labelCheckRedeemTxnSender = null;
Integer labelPayout = null;
ByteBuffer codeByteBuffer = ByteBuffer.allocate(768);
// Two-pass version
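// Pass 1 emits code with placeholder (zero) jump targets while recording each label's byte position;
// pass 2 re-emits the same code so every branch and jump uses the now-known label offsets.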
for (int pass = 0; pass < 2; ++pass) {
codeByteBuffer.clear();
try {
/* Initialization */
// Use AT creation 'timestamp' as starting point for finding transactions sent to AT
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_CREATION_TIMESTAMP, addrLastTxnTimestamp));
// Load B register with AT creator's address so we can save it into addrCreatorAddress1-4
codeByteBuffer.put(OpCode.EXT_FUN.compile(FunctionCode.PUT_CREATOR_INTO_B));
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrCreatorAddressPointer));
/* NOP - to ensure RAVENCOIN ACCT is unique */
codeByteBuffer.put(OpCode.NOP.compile());
// Set restart position to after this opcode
codeByteBuffer.put(OpCode.SET_PCS.compile());
/* Loop, waiting for message from AT creator's trade address containing trade partner details, or AT owner's address to cancel offer */
/* Transaction processing loop */
labelTradeTxnLoop = codeByteBuffer.position();
/* Sleep until message arrives */
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(QortalFunctionCode.SLEEP_UNTIL_MESSAGE.value, addrLastTxnTimestamp));
// Find next transaction (if any) to this AT since the last one (referenced by addrLastTxnTimestamp)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.PUT_TX_AFTER_TIMESTAMP_INTO_A, addrLastTxnTimestamp));
// If no transaction found, A will be zero. If A is zero, set addrResult to 1, otherwise 0.
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.CHECK_A_IS_ZERO, addrResult));
// If addrResult is zero (i.e. A is non-zero, transaction was found) then go check transaction
codeByteBuffer.put(OpCode.BZR_DAT.compile(addrResult, calcOffset(codeByteBuffer, labelCheckTradeTxn)));
// Stop and wait for next block
codeByteBuffer.put(OpCode.STP_IMD.compile());
/* Check transaction */
labelCheckTradeTxn = codeByteBuffer.position();
// Update our 'last found transaction's timestamp' using 'timestamp' from transaction
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_TIMESTAMP_FROM_TX_IN_A, addrLastTxnTimestamp));
// Extract transaction type (message/payment) from transaction and save type in addrTxnType
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_TYPE_FROM_TX_IN_A, addrTxnType));
// If transaction type is not MESSAGE type then go look for another transaction
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrTxnType, addrMessageTxnType, calcOffset(codeByteBuffer, labelTradeTxnLoop)));
/* Check transaction's sender. We're expecting AT creator's trade address for 'trade' message, or AT creator's own address for 'cancel' message. */
// Extract sender address from transaction into B register
codeByteBuffer.put(OpCode.EXT_FUN.compile(FunctionCode.PUT_ADDRESS_FROM_TX_IN_A_INTO_B));
// Save B register into data segment starting at addrMessageSender1 (as pointed to by addrMessageSenderPointer)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrMessageSenderPointer));
// Compare each part of message sender's address with AT creator's trade address. If they don't match, check for cancel situation.
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender1, addrCreatorTradeAddress1, calcOffset(codeByteBuffer, labelCheckCancelTxn)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender2, addrCreatorTradeAddress2, calcOffset(codeByteBuffer, labelCheckCancelTxn)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender3, addrCreatorTradeAddress3, calcOffset(codeByteBuffer, labelCheckCancelTxn)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender4, addrCreatorTradeAddress4, calcOffset(codeByteBuffer, labelCheckCancelTxn)));
// Message sender's address matches AT creator's trade address so go process 'trade' message
codeByteBuffer.put(OpCode.JMP_ADR.compile(labelCheckNonRefundTradeTxn == null ? 0 : labelCheckNonRefundTradeTxn));
/* Checking message sender for possible cancel message */
labelCheckCancelTxn = codeByteBuffer.position();
// Compare each part of message sender's address with AT creator's address. If they don't match, look for another transaction.
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender1, addrCreatorAddress1, calcOffset(codeByteBuffer, labelNotTradeNorCancelTxn)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender2, addrCreatorAddress2, calcOffset(codeByteBuffer, labelNotTradeNorCancelTxn)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender3, addrCreatorAddress3, calcOffset(codeByteBuffer, labelNotTradeNorCancelTxn)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender4, addrCreatorAddress4, calcOffset(codeByteBuffer, labelNotTradeNorCancelTxn)));
// Partner address is AT creator's address, so cancel offer and finish.
codeByteBuffer.put(OpCode.SET_VAL.compile(addrMode, AcctMode.CANCELLED.value));
// We're finished forever (finishing auto-refunds remaining balance to AT creator)
codeByteBuffer.put(OpCode.FIN_IMD.compile());
/* Not trade nor cancel message */
labelNotTradeNorCancelTxn = codeByteBuffer.position();
// Loop to find another transaction
codeByteBuffer.put(OpCode.JMP_ADR.compile(labelTradeTxnLoop == null ? 0 : labelTradeTxnLoop));
/* Possible switch-to-trade-mode message */
labelCheckNonRefundTradeTxn = codeByteBuffer.position();
// Check 'trade' message we received has expected number of message bytes
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(QortalFunctionCode.GET_MESSAGE_LENGTH_FROM_TX_IN_A.value, addrMessageLength));
// If message length matches, branch to info extraction code
codeByteBuffer.put(OpCode.BEQ_DAT.compile(addrMessageLength, addrExpectedTradeMessageLength, calcOffset(codeByteBuffer, labelTradeTxnExtract)));
// Message length didn't match - go back to finding another 'trade' MESSAGE transaction
codeByteBuffer.put(OpCode.JMP_ADR.compile(labelTradeTxnLoop == null ? 0 : labelTradeTxnLoop));
/* Extracting info from 'trade' MESSAGE transaction */
labelTradeTxnExtract = codeByteBuffer.position();
// Extract message from transaction into B register
codeByteBuffer.put(OpCode.EXT_FUN.compile(FunctionCode.PUT_MESSAGE_FROM_TX_IN_A_INTO_B));
// Save B register into data segment starting at addrQortalPartnerAddress1 (as pointed to by addrQortalPartnerAddressPointer)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrQortalPartnerAddressPointer));
// Extract trade partner's Ravencoin public key hash (PKH) from message into B
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(QortalFunctionCode.PUT_PARTIAL_MESSAGE_FROM_TX_IN_A_INTO_B.value, addrTradeMessagePartnerRavencoinPKHOffset));
// Store partner's Ravencoin PKH (we only really use values from B1-B3)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrPartnerRavencoinPKHPointer));
// Extract AT trade timeout (minutes) (from B4)
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_B4, addrRefundTimeout));
// Grab next 32 bytes
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(QortalFunctionCode.PUT_PARTIAL_MESSAGE_FROM_TX_IN_A_INTO_B.value, addrTradeMessageHashOfSecretAOffset));
// Extract hash-of-secret-A (we only really use values from B1-B3)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrHashOfSecretAPointer));
// Extract lockTime-A (from B4)
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_B4, addrLockTimeA));
// Calculate trade timeout refund 'timestamp' by adding addrRefundTimeout minutes to this transaction's 'timestamp', then save into addrRefundTimestamp
codeByteBuffer.put(OpCode.EXT_FUN_RET_DAT_2.compile(FunctionCode.ADD_MINUTES_TO_TIMESTAMP, addrRefundTimestamp, addrLastTxnTimestamp, addrRefundTimeout));
/* We are in 'trade mode' */
codeByteBuffer.put(OpCode.SET_VAL.compile(addrMode, AcctMode.TRADING.value));
// Set restart position to after this opcode
codeByteBuffer.put(OpCode.SET_PCS.compile());
/* Loop, waiting for trade timeout or 'redeem' MESSAGE from Qortal trade partner */
// Fetch current block 'timestamp'
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_BLOCK_TIMESTAMP, addrBlockTimestamp));
// If we're not past refund 'timestamp' then look for next transaction
codeByteBuffer.put(OpCode.BLT_DAT.compile(addrBlockTimestamp, addrRefundTimestamp, calcOffset(codeByteBuffer, labelRedeemTxnLoop)));
// We're past refund 'timestamp' so go refund everything back to AT creator
codeByteBuffer.put(OpCode.JMP_ADR.compile(labelRefund == null ? 0 : labelRefund));
/* Transaction processing loop */
labelRedeemTxnLoop = codeByteBuffer.position();
// Find next transaction to this AT since the last one (if any)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.PUT_TX_AFTER_TIMESTAMP_INTO_A, addrLastTxnTimestamp));
// If no transaction found, A will be zero. If A is zero, set addrResult to 1, otherwise 0.
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.CHECK_A_IS_ZERO, addrResult));
// If addrResult is zero (i.e. A is non-zero, transaction was found) then go check transaction
codeByteBuffer.put(OpCode.BZR_DAT.compile(addrResult, calcOffset(codeByteBuffer, labelCheckRedeemTxn)));
// Stop and wait for next block
codeByteBuffer.put(OpCode.STP_IMD.compile());
/* Check transaction */
labelCheckRedeemTxn = codeByteBuffer.position();
// Update our 'last found transaction's timestamp' using 'timestamp' from transaction
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_TIMESTAMP_FROM_TX_IN_A, addrLastTxnTimestamp));
// Extract transaction type (message/payment) from transaction and save type in addrTxnType
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(FunctionCode.GET_TYPE_FROM_TX_IN_A, addrTxnType));
// If transaction type is not MESSAGE type then go look for another transaction
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrTxnType, addrMessageTxnType, calcOffset(codeByteBuffer, labelRedeemTxnLoop)));
/* Check message payload length */
codeByteBuffer.put(OpCode.EXT_FUN_RET.compile(QortalFunctionCode.GET_MESSAGE_LENGTH_FROM_TX_IN_A.value, addrMessageLength));
// If message length matches, branch to sender checking code
codeByteBuffer.put(OpCode.BEQ_DAT.compile(addrMessageLength, addrExpectedRedeemMessageLength, calcOffset(codeByteBuffer, labelCheckRedeemTxnSender)));
// Message length didn't match - go back to finding another 'redeem' MESSAGE transaction
codeByteBuffer.put(OpCode.JMP_ADR.compile(labelRedeemTxnLoop == null ? 0 : labelRedeemTxnLoop));
/* Check transaction's sender */
labelCheckRedeemTxnSender = codeByteBuffer.position();
// Extract sender address from transaction into B register
codeByteBuffer.put(OpCode.EXT_FUN.compile(FunctionCode.PUT_ADDRESS_FROM_TX_IN_A_INTO_B));
// Save B register into data segment starting at addrMessageSender1 (as pointed to by addrMessageSenderPointer)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrMessageSenderPointer));
// Compare each part of transaction's sender's address with expected address. If they don't match, look for another transaction.
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender1, addrQortalPartnerAddress1, calcOffset(codeByteBuffer, labelRedeemTxnLoop)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender2, addrQortalPartnerAddress2, calcOffset(codeByteBuffer, labelRedeemTxnLoop)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender3, addrQortalPartnerAddress3, calcOffset(codeByteBuffer, labelRedeemTxnLoop)));
codeByteBuffer.put(OpCode.BNE_DAT.compile(addrMessageSender4, addrQortalPartnerAddress4, calcOffset(codeByteBuffer, labelRedeemTxnLoop)));
/* Check 'secret-A' in transaction's message */
// Extract secret-A from first 32 bytes of message from transaction into B register
codeByteBuffer.put(OpCode.EXT_FUN.compile(FunctionCode.PUT_MESSAGE_FROM_TX_IN_A_INTO_B));
// Save B register into data segment starting at addrMessageData (as pointed to by addrMessageDataPointer)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrMessageDataPointer));
// Load B register with expected hash result (as pointed to by addrHashOfSecretAPointer)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.SET_B_IND, addrHashOfSecretAPointer));
// Perform HASH160 using source data at addrMessageData. (Location and length specified via addrMessageDataPointer and addrMessageDataLength).
// Save the equality result (1 if they match, 0 otherwise) into addrResult.
codeByteBuffer.put(OpCode.EXT_FUN_RET_DAT_2.compile(FunctionCode.CHECK_HASH160_WITH_B, addrResult, addrMessageDataPointer, addrMessageDataLength));
// If hashes match, addrResult will be non-zero, so branch to payout; otherwise fall through and go find another transaction
codeByteBuffer.put(OpCode.BNZ_DAT.compile(addrResult, calcOffset(codeByteBuffer, labelPayout)));
codeByteBuffer.put(OpCode.JMP_ADR.compile(labelRedeemTxnLoop == null ? 0 : labelRedeemTxnLoop));
/* Success! Pay arranged amount to receiving address */
labelPayout = codeByteBuffer.position();
// Extract Qortal receiving address from next 32 bytes of message from transaction into B register
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(QortalFunctionCode.PUT_PARTIAL_MESSAGE_FROM_TX_IN_A_INTO_B.value, addrRedeemMessageReceivingAddressOffset));
// Save B register into data segment starting at addrPartnerReceivingAddress (as pointed to by addrPartnerReceivingAddressPointer)
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.GET_B_IND, addrPartnerReceivingAddressPointer));
// Pay AT's balance to receiving address
codeByteBuffer.put(OpCode.EXT_FUN_DAT.compile(FunctionCode.PAY_TO_ADDRESS_IN_B, addrQortAmount));
// Set redeemed mode
codeByteBuffer.put(OpCode.SET_VAL.compile(addrMode, AcctMode.REDEEMED.value));
// We're finished forever (finishing auto-refunds remaining balance to AT creator)
codeByteBuffer.put(OpCode.FIN_IMD.compile());
// Fall-through to refunding any remaining balance back to AT creator
/* Refund balance back to AT creator */
labelRefund = codeByteBuffer.position();
// Set refunded mode
codeByteBuffer.put(OpCode.SET_VAL.compile(addrMode, AcctMode.REFUNDED.value));
// We're finished forever (finishing auto-refunds remaining balance to AT creator)
codeByteBuffer.put(OpCode.FIN_IMD.compile());
} catch (CompilationException e) {
throw new IllegalStateException("Unable to compile RVN-QORT ACCT?", e);
}
}
codeByteBuffer.flip();
byte[] codeBytes = new byte[codeByteBuffer.limit()];
codeByteBuffer.get(codeBytes);
assert Arrays.equals(Crypto.digest(codeBytes), RavencoinACCTv3.CODE_BYTES_HASH)
: String.format("BTCACCT.CODE_BYTES_HASH mismatch: expected %s, actual %s", HashCode.fromBytes(CODE_BYTES_HASH), HashCode.fromBytes(Crypto.digest(codeBytes)));
final short ciyamAtVersion = 2;
final short numCallStackPages = 0;
final short numUserStackPages = 0;
final long minActivationAmount = 0L;
return MachineState.toCreationBytes(ciyamAtVersion, codeBytes, dataByteBuffer.array(), numCallStackPages, numUserStackPages, minActivationAmount);
}
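// Illustration only: the assert above ties the generated code bytes to CODE_BYTES_HASH, so a deployed AT
// can be matched back to this template by hashing its code bytes. A hypothetical helper (not part of
// this class) might look like:
//
//   public static boolean isRavencoinACCTv3(byte[] deployedCodeBytes) {
//       return Arrays.equals(Crypto.digest(deployedCodeBytes), RavencoinACCTv3.CODE_BYTES_HASH);
//   }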
/**
* Returns CrossChainTradeData with useful info extracted from AT.
*/
@Override
public CrossChainTradeData populateTradeData(Repository repository, ATData atData) throws DataException {
ATStateData atStateData = repository.getATRepository().getLatestATState(atData.getATAddress());
return populateTradeData(repository, atData.getCreatorPublicKey(), atData.getCreation(), atStateData);
}
/**
* Returns CrossChainTradeData with useful info extracted from AT.
*/
@Override
public CrossChainTradeData populateTradeData(Repository repository, ATStateData atStateData) throws DataException {
ATData atData = repository.getATRepository().fromATAddress(atStateData.getATAddress());
return populateTradeData(repository, atData.getCreatorPublicKey(), atData.getCreation(), atStateData);
}
/**
* Returns CrossChainTradeData with useful info extracted from AT.
*/
public CrossChainTradeData populateTradeData(Repository repository, byte[] creatorPublicKey, long creationTimestamp, ATStateData atStateData) throws DataException {
byte[] addressBytes = new byte[25]; // for general use
String atAddress = atStateData.getATAddress();
CrossChainTradeData tradeData = new CrossChainTradeData();
tradeData.foreignBlockchain = SupportedBlockchain.RAVENCOIN.name();
tradeData.acctName = NAME;
tradeData.qortalAtAddress = atAddress;
tradeData.qortalCreator = Crypto.toAddress(creatorPublicKey);
tradeData.creationTimestamp = creationTimestamp;
Account atAccount = new Account(repository, atAddress);
tradeData.qortBalance = atAccount.getConfirmedBalance(Asset.QORT);
byte[] stateData = atStateData.getStateData();
ByteBuffer dataByteBuffer = ByteBuffer.wrap(stateData);
dataByteBuffer.position(MachineState.HEADER_LENGTH);
/* Constants */
// Skip creator's trade address
dataByteBuffer.get(addressBytes);
tradeData.qortalCreatorTradeAddress = Base58.encode(addressBytes);
dataByteBuffer.position(dataByteBuffer.position() + 32 - addressBytes.length);
// Creator's Ravencoin/foreign public key hash
tradeData.creatorForeignPKH = new byte[20];
dataByteBuffer.get(tradeData.creatorForeignPKH);
dataByteBuffer.position(dataByteBuffer.position() + 32 - tradeData.creatorForeignPKH.length); // skip to 32 bytes
// We don't use secret-B
tradeData.hashOfSecretB = null;
// Redeem payout
tradeData.qortAmount = dataByteBuffer.getLong();
// Expected RVN amount
tradeData.expectedForeignAmount = dataByteBuffer.getLong();
// Trade timeout
tradeData.tradeTimeout = (int) dataByteBuffer.getLong();
// Skip MESSAGE transaction type
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip expected 'trade' message length
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip expected 'redeem' message length
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to creator's address
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to partner's Qortal trade address
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to message sender
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip 'trade' message data offset for partner's Ravencoin PKH
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to partner's Ravencoin PKH
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip 'trade' message data offset for hash-of-secret-A
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to hash-of-secret-A
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip 'redeem' message data offset for partner's Qortal receiving address
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to message data
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip message data length
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip pointer to partner's receiving address
dataByteBuffer.position(dataByteBuffer.position() + 8);
/* End of constants / begin variables */
// Skip AT creator's address
dataByteBuffer.position(dataByteBuffer.position() + 8 * 4);
// Partner's trade address (if present)
dataByteBuffer.get(addressBytes);
String qortalRecipient = Base58.encode(addressBytes);
dataByteBuffer.position(dataByteBuffer.position() + 32 - addressBytes.length);
// Potential lockTimeA (if in trade mode)
int lockTimeA = (int) dataByteBuffer.getLong();
// AT refund timeout (probably only useful for debugging)
int refundTimeout = (int) dataByteBuffer.getLong();
// Trade-mode refund timestamp (AT 'timestamp' converted to Qortal block height)
long tradeRefundTimestamp = dataByteBuffer.getLong();
// Skip last transaction timestamp
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip block timestamp
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip transaction type
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip temporary result
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip temporary message sender
dataByteBuffer.position(dataByteBuffer.position() + 8 * 4);
// Skip message length
dataByteBuffer.position(dataByteBuffer.position() + 8);
// Skip temporary message data
dataByteBuffer.position(dataByteBuffer.position() + 8 * 4);
// Potential hash160 of secret A
byte[] hashOfSecretA = new byte[20];
dataByteBuffer.get(hashOfSecretA);
dataByteBuffer.position(dataByteBuffer.position() + 32 - hashOfSecretA.length); // skip to 32 bytes
// Potential partner's Ravencoin PKH
byte[] partnerRavencoinPKH = new byte[20];
dataByteBuffer.get(partnerRavencoinPKH);
dataByteBuffer.position(dataByteBuffer.position() + 32 - partnerRavencoinPKH.length); // skip to 32 bytes
// Partner's receiving address (if present)
byte[] partnerReceivingAddress = new byte[25];
dataByteBuffer.get(partnerReceivingAddress);
dataByteBuffer.position(dataByteBuffer.position() + 32 - partnerReceivingAddress.length); // skip to 32 bytes
// Trade AT's 'mode'
long modeValue = dataByteBuffer.getLong();
AcctMode mode = AcctMode.valueOf((int) (modeValue & 0xffL));
/* End of variables */
if (mode != null && mode != AcctMode.OFFERING) {
tradeData.mode = mode;
tradeData.refundTimeout = refundTimeout;
tradeData.tradeRefundHeight = new Timestamp(tradeRefundTimestamp).blockHeight;
tradeData.qortalPartnerAddress = qortalRecipient;
tradeData.hashOfSecretA = hashOfSecretA;
tradeData.partnerForeignPKH = partnerRavencoinPKH;
tradeData.lockTimeA = lockTimeA;
if (mode == AcctMode.REDEEMED)
tradeData.qortalPartnerReceivingAddress = Base58.encode(partnerReceivingAddress);
} else {
tradeData.mode = AcctMode.OFFERING;
}
tradeData.duplicateDeprecated();
return tradeData;
}
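// Illustration only: a minimal usage sketch with a placeholder AT address, assuming an open Repository:
//
//   ATData atData = repository.getATRepository().fromATAddress("AT-address-here");
//   CrossChainTradeData tradeData = RavencoinACCTv3.getInstance().populateTradeData(repository, atData);
//   if (tradeData.mode == AcctMode.TRADING)
//       System.out.println("Refund at block height " + tradeData.tradeRefundHeight);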
/** Returns 'offer' MESSAGE payload for trade partner to send to AT creator's trade address. */
public static byte[] buildOfferMessage(byte[] partnerRavencoinPKH, byte[] hashOfSecretA, int lockTimeA) {
byte[] lockTimeABytes = BitTwiddling.toBEByteArray((long) lockTimeA);
return Bytes.concat(partnerRavencoinPKH, hashOfSecretA, lockTimeABytes);
}
/** Returns info extracted from 'offer' MESSAGE payload sent by trade partner to AT creator's trade address, or null if not valid. */
public static OfferMessageData extractOfferMessageData(byte[] messageData) {
if (messageData == null || messageData.length != OFFER_MESSAGE_LENGTH)
return null;
OfferMessageData offerMessageData = new OfferMessageData();
offerMessageData.partnerRavencoinPKH = Arrays.copyOfRange(messageData, 0, 20);
offerMessageData.hashOfSecretA = Arrays.copyOfRange(messageData, 20, 40);
offerMessageData.lockTimeA = BitTwiddling.longFromBEBytes(messageData, 40);
return offerMessageData;
}
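// Illustration only: the 'offer' MESSAGE is 20 + 20 + 8 = 48 bytes - partner's Ravencoin PKH,
// HASH160 of secret-A, then lockTime-A as a big-endian long - assuming OFFER_MESSAGE_LENGTH is 48.
// A round-trip sketch with placeholder values:
//
//   byte[] offerMessage = buildOfferMessage(new byte[20], new byte[20], 1650000000);
//   OfferMessageData offerData = extractOfferMessageData(offerMessage); // non-null only for 48-byte payloads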
/** Returns 'trade' MESSAGE payload for AT creator to send to AT. */
public static byte[] buildTradeMessage(String partnerQortalTradeAddress, byte[] partnerRavencoinPKH, byte[] hashOfSecretA, int lockTimeA, int refundTimeout) {
byte[] data = new byte[TRADE_MESSAGE_LENGTH];
byte[] partnerQortalAddressBytes = Base58.decode(partnerQortalTradeAddress);
byte[] lockTimeABytes = BitTwiddling.toBEByteArray((long) lockTimeA);
byte[] refundTimeoutBytes = BitTwiddling.toBEByteArray((long) refundTimeout);
System.arraycopy(partnerQortalAddressBytes, 0, data, 0, partnerQortalAddressBytes.length);
System.arraycopy(partnerRavencoinPKH, 0, data, 32, partnerRavencoinPKH.length);
System.arraycopy(refundTimeoutBytes, 0, data, 56, refundTimeoutBytes.length);
System.arraycopy(hashOfSecretA, 0, data, 64, hashOfSecretA.length);
System.arraycopy(lockTimeABytes, 0, data, 88, lockTimeABytes.length);
return data;
}
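// Resulting 'trade' MESSAGE layout, assuming TRADE_MESSAGE_LENGTH is 96 (consistent with the offsets above):
//    0..31  partner's Qortal trade address (25 bytes, Base58-decoded, zero-padded)
//   32..55  partner's Ravencoin PKH (20 bytes, zero-padded)
//   56..63  AT trade timeout / refundTimeout in minutes (big-endian long)
//   64..87  HASH160 of secret-A (20 bytes, zero-padded)
//   88..95  lockTime-A in seconds (big-endian long)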
/** Returns 'cancel' MESSAGE payload for AT creator to cancel trade AT. */
@Override
public byte[] buildCancelMessage(String creatorQortalAddress) {
byte[] data = new byte[CANCEL_MESSAGE_LENGTH];
byte[] creatorQortalAddressBytes = Base58.decode(creatorQortalAddress);
System.arraycopy(creatorQortalAddressBytes, 0, data, 0, creatorQortalAddressBytes.length);
return data;
}
/** Returns 'redeem' MESSAGE payload for trade partner to send to AT. */
public static byte[] buildRedeemMessage(byte[] secretA, String qortalReceivingAddress) {
byte[] data = new byte[REDEEM_MESSAGE_LENGTH];
byte[] qortalReceivingAddressBytes = Base58.decode(qortalReceivingAddress);
System.arraycopy(secretA, 0, data, 0, secretA.length);
System.arraycopy(qortalReceivingAddressBytes, 0, data, 32, qortalReceivingAddressBytes.length);
return data;
}
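// Resulting 'redeem' MESSAGE layout, assuming REDEEM_MESSAGE_LENGTH is 64:
//    0..31  secret-A (32 bytes)
//   32..56  partner's Qortal receiving address (25 bytes, Base58-decoded; remainder zero-padded)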
/** Returns refund timeout (minutes) based on trade partner's 'offer' MESSAGE timestamp and P2SH-A locktime. */
public static int calcRefundTimeout(long offerMessageTimestamp, int lockTimeA) {
// refund should be triggered halfway between offerMessageTimestamp and lockTimeA
return (int) ((lockTimeA - (offerMessageTimestamp / 1000L)) / 2L / 60L);
}
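// Illustration only, with placeholder values: if lockTime-A falls 3 hours (10800 seconds) after the
// 'offer' MESSAGE, then (10800 / 2) / 60 = 90, i.e. the AT refunds after 90 minutes, halfway to lockTime-A.
//
//   long offerMessageTimestamp = 1_650_000_000_000L;                          // milliseconds
//   int lockTimeA = (int) (offerMessageTimestamp / 1000L) + 10_800;           // seconds
//   int refundTimeout = calcRefundTimeout(offerMessageTimestamp, lockTimeA);  // 90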
@Override
public byte[] findSecretA(Repository repository, CrossChainTradeData crossChainTradeData) throws DataException {
String atAddress = crossChainTradeData.qortalAtAddress;
String redeemerAddress = crossChainTradeData.qortalPartnerAddress;
// We don't have partner's public key so we check every message to AT
List<MessageTransactionData> messageTransactionsData = repository.getMessageRepository().getMessagesByParticipants(null, atAddress, null, null, null);
if (messageTransactionsData == null)
return null;
// Find 'redeem' message
for (MessageTransactionData messageTransactionData : messageTransactionsData) {
// Check message payload type/encryption
if (messageTransactionData.isText() || messageTransactionData.isEncrypted())
continue;
// Check message payload size
byte[] messageData = messageTransactionData.getData();
if (messageData.length != REDEEM_MESSAGE_LENGTH)
// Wrong payload length
continue;
// Check sender
if (!Crypto.toAddress(messageTransactionData.getSenderPublicKey()).equals(redeemerAddress))
// Wrong sender
continue;
// Extract secretA
byte[] secretA = new byte[32];
System.arraycopy(messageData, 0, secretA, 0, secretA.length);
byte[] hashOfSecretA = Crypto.hash160(secretA);
if (!Arrays.equals(hashOfSecretA, crossChainTradeData.hashOfSecretA))
continue;
return secretA;
}
return null;
}
}
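For context, a brief partner-side sketch of the check that findSecretA() mirrors: before sending the 'redeem' MESSAGE, the trade partner can confirm their secret against the hash the AT stored. Values below are placeholders; Crypto.hash160 and buildRedeemMessage are the same helpers used above.

byte[] secretA = new byte[32]; // placeholder: the 32-byte secret chosen when funding P2SH-A
String qortalReceivingAddress = "placeholder-Qortal-address"; // placeholder, not a valid Base58 address
if (Arrays.equals(Crypto.hash160(secretA), crossChainTradeData.hashOfSecretA)) {
    byte[] redeemMessage = RavencoinACCTv3.buildRedeemMessage(secretA, qortalReceivingAddress);
    // send redeemMessage to the AT in a MESSAGE transaction
}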

View File

@@ -57,12 +57,26 @@ public enum SupportedBlockchain {
public ACCT getLatestAcct() {
return DogecoinACCTv3.getInstance();
}
},
RAVENCOIN(Arrays.asList(
Triple.valueOf(RavencoinACCTv3.NAME, RavencoinACCTv3.CODE_BYTES_HASH, RavencoinACCTv3::getInstance)
)) {
@Override
public ForeignBlockchain getInstance() {
return Ravencoin.getInstance();
}
@Override
public ACCT getLatestAcct() {
return RavencoinACCTv3.getInstance();
}
};
private static final Map<ByteArray, Supplier<ACCT>> supportedAcctsByCodeHash = Arrays.stream(SupportedBlockchain.values())
.map(supportedBlockchain -> supportedBlockchain.supportedAccts)
.flatMap(List::stream)
.collect(Collectors.toUnmodifiableMap(triple -> new ByteArray(triple.getB()), Triple::getC));
.collect(Collectors.toUnmodifiableMap(triple -> ByteArray.wrap(triple.getB()), Triple::getC));
private static final Map<String, Supplier<ACCT>> supportedAcctsByName = Arrays.stream(SupportedBlockchain.values())
.map(supportedBlockchain -> supportedBlockchain.supportedAccts)
@@ -94,7 +108,7 @@ public enum SupportedBlockchain {
return getAcctMap();
return blockchain.supportedAccts.stream()
.collect(Collectors.toUnmodifiableMap(triple -> new ByteArray(triple.getB()), Triple::getC));
.collect(Collectors.toUnmodifiableMap(triple -> ByteArray.wrap(triple.getB()), Triple::getC));
}
public static Map<ByteArray, Supplier<ACCT>> getFilteredAcctMap(String specificBlockchain) {
@@ -109,7 +123,7 @@ public enum SupportedBlockchain {
}
public static ACCT getAcctByCodeHash(byte[] codeHash) {
ByteArray wrappedCodeHash = new ByteArray(codeHash);
ByteArray wrappedCodeHash = ByteArray.wrap(codeHash);
Supplier<ACCT> acctInstanceSupplier = supportedAcctsByCodeHash.get(wrappedCodeHash);
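For context, a minimal sketch of how this mapping is typically consumed; RavencoinACCTv3.CODE_BYTES_HASH is the constant checked by the ACCT's own assert above:

ACCT acct = SupportedBlockchain.getAcctByCodeHash(RavencoinACCTv3.CODE_BYTES_HASH);
// expected: the RavencoinACCTv3 singleton; an unrecognized code hash should yield null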

View File

@@ -0,0 +1,12 @@
package org.qortal.data.arbitrary;
public class ArbitraryCategoryInfo {
public String id;
public String name;
public ArbitraryCategoryInfo() {
}
}

View File

@@ -0,0 +1,59 @@
package org.qortal.data.arbitrary;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
public class ArbitraryDirectConnectionInfo {
private final byte[] signature;
private final String peerAddress;
private final List<byte[]> hashes;
private final long timestamp;
public ArbitraryDirectConnectionInfo(byte[] signature, String peerAddress, List<byte[]> hashes, long timestamp) {
this.signature = signature;
this.peerAddress = peerAddress;
this.hashes = hashes;
this.timestamp = timestamp;
}
public byte[] getSignature() {
return this.signature;
}
public String getPeerAddress() {
return this.peerAddress;
}
public List<byte[]> getHashes() {
return this.hashes;
}
public long getTimestamp() {
return this.timestamp;
}
public int getHashCount() {
if (this.hashes == null) {
return 0;
}
return this.hashes.size();
}
@Override
public boolean equals(Object other) {
if (other == this)
return true;
if (!(other instanceof ArbitraryDirectConnectionInfo))
return false;
ArbitraryDirectConnectionInfo otherDirectConnectionInfo = (ArbitraryDirectConnectionInfo) other;
return Arrays.equals(this.signature, otherDirectConnectionInfo.getSignature())
&& Objects.equals(this.peerAddress, otherDirectConnectionInfo.getPeerAddress())
&& Objects.equals(this.hashes, otherDirectConnectionInfo.getHashes())
&& Objects.equals(this.timestamp, otherDirectConnectionInfo.getTimestamp());
}
}

View File

@@ -0,0 +1,11 @@
package org.qortal.data.arbitrary;
import org.qortal.network.Peer;
public class ArbitraryFileListResponseInfo extends ArbitraryRelayInfo {
public ArbitraryFileListResponseInfo(String hash58, String signature58, Peer peer, Long timestamp, Long requestTime, Integer requestHops) {
super(hash58, signature58, peer, timestamp, requestTime, requestHops);
}
}

View File

@@ -13,6 +13,7 @@ public class ArbitraryResourceInfo {
public Service service;
public String identifier;
public ArbitraryResourceStatus status;
public ArbitraryResourceMetadata metadata;
public Long size;

View File

@@ -0,0 +1,45 @@
package org.qortal.data.arbitrary;
import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
import org.qortal.arbitrary.misc.Category;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.util.List;
@XmlAccessorType(XmlAccessType.FIELD)
public class ArbitraryResourceMetadata {
private String title;
private String description;
private List<String> tags;
private Category category;
private String categoryName;
public ArbitraryResourceMetadata() {
}
public ArbitraryResourceMetadata(String title, String description, List<String> tags, Category category) {
this.title = title;
this.description = description;
this.tags = tags;
this.category = category;
this.categoryName = category.getName();
}
public static ArbitraryResourceMetadata fromTransactionMetadata(ArbitraryDataTransactionMetadata transactionMetadata) {
if (transactionMetadata == null) {
return null;
}
String title = transactionMetadata.getTitle();
String description = transactionMetadata.getDescription();
List<String> tags = transactionMetadata.getTags();
Category category = transactionMetadata.getCategory();
if (title == null && description == null && tags == null && category == null) {
return null;
}
return new ArbitraryResourceMetadata(title, description, tags, category);
}
}

View File

@@ -94,6 +94,12 @@ public class CrossChainTradeData {
public String acctName;
@Schema(description = "Timestamp when AT creator's trade-bot presence expires")
public Long creatorPresenceExpiry;
@Schema(description = "Timestamp when trade partner's trade-bot presence expires")
public Long partnerPresenceExpiry;
// Constructors
// Necessary for JAXB

View File

@@ -23,6 +23,7 @@ public class GroupData {
private ApprovalThreshold approvalThreshold;
private int minimumBlockDelay;
private int maximumBlockDelay;
public int memberCount;
/** Reference to CREATE_GROUP or UPDATE_GROUP transaction, used to rebuild group during orphaning. */
// No need to ever expose this via API

View File

@@ -0,0 +1,114 @@
package org.qortal.data.network;
import org.qortal.crypto.Crypto;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlTransient;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
import java.util.Arrays;
// All properties to be converted to JSON via JAXB
@XmlAccessorType(XmlAccessType.FIELD)
public class TradePresenceData {
protected long timestamp;
@XmlJavaTypeAdapter(
type = byte[].class,
value = org.qortal.api.Base58TypeAdapter.class
)
protected byte[] publicKey; // Could be BOB's or ALICE's
// No need to send this via websocket / API
@XmlTransient
protected byte[] signature; // Not always present
protected String atAddress; // Not always present
// Have JAXB use getter instead
@XmlTransient
protected String tradeAddress; // Lazily instantiated
// Constructors
// necessary for JAXB serialization
protected TradePresenceData() {
}
public TradePresenceData(long timestamp, byte[] publicKey, byte[] signature, String atAddress) {
this.timestamp = timestamp;
this.publicKey = publicKey;
this.signature = signature;
this.atAddress = atAddress;
}
public TradePresenceData(long timestamp, byte[] publicKey) {
this(timestamp, publicKey, null, null);
}
public long getTimestamp() {
return this.timestamp;
}
public byte[] getPublicKey() {
return this.publicKey;
}
public byte[] getSignature() {
return this.signature;
}
public String getAtAddress() {
return this.atAddress;
}
// Probably doesn't need synchronization
@XmlElement
public String getTradeAddress() {
if (tradeAddress != null)
return tradeAddress;
tradeAddress = Crypto.toAddress(this.publicKey);
return tradeAddress;
}
// Comparison
@Override
public boolean equals(Object other) {
if (other == this)
return true;
if (!(other instanceof TradePresenceData))
return false;
TradePresenceData otherTradePresenceData = (TradePresenceData) other;
// Very quick comparison
if (otherTradePresenceData.timestamp != this.timestamp)
return false;
if (!Arrays.equals(otherTradePresenceData.publicKey, this.publicKey))
return false;
if (otherTradePresenceData.atAddress != null && !otherTradePresenceData.atAddress.equals(this.atAddress))
return false;
if (this.atAddress != null && !this.atAddress.equals(otherTradePresenceData.atAddress))
return false;
if (!Arrays.equals(otherTradePresenceData.signature, this.signature))
return false;
return true;
}
@Override
public int hashCode() {
// Pretty lazy implementation
return (int) this.timestamp;
}
}

View File

@@ -48,6 +48,7 @@ public class UpdateNameTransactionData extends TransactionData {
public void afterUnmarshal(Unmarshaller u, Object parent) {
this.creatorPublicKey = this.ownerPublicKey;
this.reducedNewName = this.newName != null ? Unicode.sanitize(this.newName) : null;
}
/** From repository */
@@ -62,7 +63,7 @@ public class UpdateNameTransactionData extends TransactionData {
this.nameReference = nameReference;
}
/** From network/API */
/** From network */
public UpdateNameTransactionData(BaseTransactionData baseTransactionData, String name, String newName, String newData) {
this(baseTransactionData, name, newName, newData, Unicode.sanitize(newName), null);
}

View File

@@ -4,6 +4,7 @@ import java.awt.GraphicsEnvironment;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.io.InputStream;
import java.util.ServiceConfigurationError;
import javax.imageio.ImageIO;
import javax.swing.JOptionPane;
@@ -46,12 +47,12 @@ public class Gui {
this.splashFrame = SplashFrame.getInstance();
}
protected static BufferedImage loadImage(String resourceName) {
protected static BufferedImage loadImage(String resourceName) throws IOException {
try (InputStream in = Gui.class.getResourceAsStream("/images/" + resourceName)) {
return ImageIO.read(in);
} catch (IllegalArgumentException | IOException e) {
} catch (IllegalArgumentException | IOException | ServiceConfigurationError e) {
LOGGER.warn(String.format("Couldn't locate image resource \"images/%s\"", resourceName));
return null;
throw new IOException(String.format("Couldn't locate image resource \"images/%s\"", resourceName));
}
}

View File

@@ -1,6 +1,7 @@
package org.qortal.gui;
import java.awt.*;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.awt.image.BufferedImage;
@@ -29,18 +30,23 @@ public class SplashFrame {
private JLabel statusLabel;
public SplashPanel() {
image = Gui.loadImage(defaultSplash);
try {
image = Gui.loadImage(defaultSplash);
// Add logo
JLabel imageLabel = new JLabel(new ImageIcon(image));
imageLabel.setSize(new Dimension(300, 300));
add(imageLabel);
}
catch (IOException e) {
LOGGER.warn("Unable to load splash panel image");
}
setOpaque(true);
setLayout(new BoxLayout(this, BoxLayout.Y_AXIS));
setBorder(new EmptyBorder(10, 10, 10, 10));
setBackground(Color.BLACK);
// Add logo
JLabel imageLabel = new JLabel(new ImageIcon(image));
imageLabel.setSize(new Dimension(300, 300));
add(imageLabel);
// Add spacing
add(Box.createRigidArea(new Dimension(0, 16)));
@@ -75,15 +81,20 @@ public class SplashFrame {
this.splashDialog = new JFrame();
List<Image> icons = new ArrayList<>();
icons.add(Gui.loadImage("icons/icon16.png"));
icons.add(Gui.loadImage("icons/qortal_ui_tray_synced.png"));
icons.add(Gui.loadImage("icons/qortal_ui_tray_syncing_time-alt.png"));
icons.add(Gui.loadImage("icons/qortal_ui_tray_minting.png"));
icons.add(Gui.loadImage("icons/qortal_ui_tray_syncing.png"));
icons.add(Gui.loadImage("icons/icon64.png"));
icons.add(Gui.loadImage("icons/Qlogo_128.png"));
this.splashDialog.setIconImages(icons);
try {
List<Image> icons = new ArrayList<>();
icons.add(Gui.loadImage("icons/icon16.png"));
icons.add(Gui.loadImage("icons/qortal_ui_tray_synced.png"));
icons.add(Gui.loadImage("icons/qortal_ui_tray_syncing_time-alt.png"));
icons.add(Gui.loadImage("icons/qortal_ui_tray_minting.png"));
icons.add(Gui.loadImage("icons/qortal_ui_tray_syncing.png"));
icons.add(Gui.loadImage("icons/icon64.png"));
icons.add(Gui.loadImage("icons/Qlogo_128.png"));
this.splashDialog.setIconImages(icons);
}
catch (IOException e) {
LOGGER.warn("Unable to load splash frame icons");
}
this.splashPanel = new SplashPanel();
this.splashDialog.getContentPane().add(this.splashPanel);

View File

@@ -61,7 +61,13 @@ public class SysTray {
this.popupMenu = createJPopupMenu();
// Build TrayIcon without AWT PopupMenu (which doesn't support Unicode)...
this.trayIcon = new TrayIcon(Gui.loadImage("icons/qortal_ui_tray_synced.png"), "qortal", null);
try {
this.trayIcon = new TrayIcon(Gui.loadImage("icons/qortal_ui_tray_synced.png"), "qortal", null);
}
catch (IOException e) {
LOGGER.warn("Unable to load system tray icon");
return;
}
// ...and attach mouse listener instead so we can use JPopupMenu (which does support Unicode)
this.trayIcon.addMouseListener(new MouseAdapter() {
@Override

View File

@@ -100,7 +100,23 @@ public class Network {
private long nextDisconnectionCheck = 0L;
private final List<PeerData> allKnownPeers = new ArrayList<>();
private final List<Peer> connectedPeers = new ArrayList<>();
/**
* Maintain two lists for each subset of peers:
* - A synchronizedList, to be modified when peers are added/removed
* - An immutable List, which is rebuilt automatically to mirror the synchronized list, and is then served to consumers
* This allows for thread safety without having to synchronize every time a thread requests a peer list
*/
private final List<Peer> connectedPeers = Collections.synchronizedList(new ArrayList<>());
private List<Peer> immutableConnectedPeers = Collections.emptyList(); // always rebuilt from mutable, synced list above
private final List<Peer> handshakedPeers = Collections.synchronizedList(new ArrayList<>());
private List<Peer> immutableHandshakedPeers = Collections.emptyList(); // always rebuilt from mutable, synced list above
private final List<Peer> outboundHandshakedPeers = Collections.synchronizedList(new ArrayList<>());
private List<Peer> immutableOutboundHandshakedPeers = Collections.emptyList(); // always rebuilt from mutable, synced list above
private final List<PeerAddress> selfPeers = new ArrayList<>();
private final ExecuteProduceConsume networkEPC;
@@ -119,6 +135,7 @@ public class Network {
private List<String> ourExternalIpAddressHistory = new ArrayList<>();
private String ourExternalIpAddress = null;
private int ourExternalPort = Settings.getInstance().getListenPort();
// Constructors
@@ -236,10 +253,21 @@ public class Network {
}
}
public List<Peer> getConnectedPeers() {
synchronized (this.connectedPeers) {
return new ArrayList<>(this.connectedPeers);
}
public List<Peer> getImmutableConnectedPeers() {
return this.immutableConnectedPeers;
}
public void addConnectedPeer(Peer peer) {
this.connectedPeers.add(peer); // thread safe thanks to synchronized list
this.immutableConnectedPeers = List.copyOf(this.connectedPeers); // also thread safe thanks to synchronized collection's toArray() being fed to List.of(array)
}
public void removeConnectedPeer(Peer peer) {
// Firstly remove from handshaked peers
this.removeHandshakedPeer(peer);
this.connectedPeers.remove(peer); // thread safe thanks to synchronized list
this.immutableConnectedPeers = List.copyOf(this.connectedPeers); // also thread safe thanks to synchronized collection's toArray() being fed to List.of(array)
}
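// Illustration only: the add/remove methods above follow the dual-list approach described in the comment
// near the top of this class. Generalized as a standalone sketch (hypothetical class, not part of Network):
//
//   class SnapshotList<T> {
//       private final List<T> mutable = Collections.synchronizedList(new ArrayList<>());
//       private volatile List<T> snapshot = Collections.emptyList();
//
//       void add(T item)    { mutable.add(item);    snapshot = List.copyOf(mutable); }
//       void remove(T item) { mutable.remove(item); snapshot = List.copyOf(mutable); }
//       List<T> get()       { return snapshot; }    // iterate freely, no synchronization needed
//   }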
public List<PeerAddress> getSelfPeers() {
@@ -274,16 +302,14 @@ public class Network {
}
// Check if we're already connected to and handshaked with this peer
Peer connectedPeer = null;
synchronized (this.connectedPeers) {
connectedPeer = this.connectedPeers.stream()
Peer connectedPeer = this.getImmutableConnectedPeers().stream()
.filter(p -> p.getPeerData().getAddress().equals(peerAddress))
.findFirst()
.orElse(null);
}
boolean isConnected = (connectedPeer != null);
boolean isHandshaked = this.getHandshakedPeers().stream()
boolean isHandshaked = this.getImmutableHandshakedPeers().stream()
.anyMatch(p -> p.getPeerData().getAddress().equals(peerAddress));
if (isConnected && isHandshaked) {
@@ -327,35 +353,61 @@ public class Network {
/**
* Returns list of connected peers that have completed handshaking.
*/
public List<Peer> getHandshakedPeers() {
synchronized (this.connectedPeers) {
return this.connectedPeers.stream()
.filter(peer -> peer.getHandshakeStatus() == Handshake.COMPLETED)
.collect(Collectors.toList());
public List<Peer> getImmutableHandshakedPeers() {
return this.immutableHandshakedPeers;
}
public void addHandshakedPeer(Peer peer) {
this.handshakedPeers.add(peer); // thread safe thanks to synchronized list
this.immutableHandshakedPeers = List.copyOf(this.handshakedPeers); // also thread safe thanks to synchronized collection's toArray() being fed to List.of(array)
// Also add to outbound handshaked peers cache
if (peer.isOutbound()) {
this.addOutboundHandshakedPeer(peer);
}
}
public void removeHandshakedPeer(Peer peer) {
this.handshakedPeers.remove(peer); // thread safe thanks to synchronized list
this.immutableHandshakedPeers = List.copyOf(this.handshakedPeers); // also thread safe thanks to synchronized collection's toArray() being fed to List.of(array)
// Also remove from outbound handshaked peers cache
if (peer.isOutbound()) {
this.removeOutboundHandshakedPeer(peer);
}
}
/**
* Returns list of peers we connected to that have completed handshaking.
*/
public List<Peer> getOutboundHandshakedPeers() {
synchronized (this.connectedPeers) {
return this.connectedPeers.stream()
.filter(peer -> peer.isOutbound() && peer.getHandshakeStatus() == Handshake.COMPLETED)
.collect(Collectors.toList());
public List<Peer> getImmutableOutboundHandshakedPeers() {
return this.immutableOutboundHandshakedPeers;
}
public void addOutboundHandshakedPeer(Peer peer) {
if (!peer.isOutbound()) {
return;
}
this.outboundHandshakedPeers.add(peer); // thread safe thanks to synchronized list
this.immutableOutboundHandshakedPeers = List.copyOf(this.outboundHandshakedPeers); // also thread safe thanks to synchronized collection's toArray() being fed to List.of(array)
}
public void removeOutboundHandshakedPeer(Peer peer) {
if (!peer.isOutbound()) {
return;
}
this.outboundHandshakedPeers.remove(peer); // thread safe thanks to synchronized list
this.immutableOutboundHandshakedPeers = List.copyOf(this.outboundHandshakedPeers); // also thread safe thanks to synchronized collection's toArray() being fed to List.of(array)
}
/**
* Returns first peer that has completed handshaking and has matching public key.
*/
public Peer getHandshakedPeerWithPublicKey(byte[] publicKey) {
synchronized (this.connectedPeers) {
return this.connectedPeers.stream()
.filter(peer -> peer.getHandshakeStatus() == Handshake.COMPLETED
&& Arrays.equals(peer.getPeersPublicKey(), publicKey))
.findFirst().orElse(null);
}
return this.getImmutableConnectedPeers().stream()
.filter(peer -> peer.getHandshakeStatus() == Handshake.COMPLETED
&& Arrays.equals(peer.getPeersPublicKey(), publicKey))
.findFirst().orElse(null);
}
// Peer list filters
@@ -368,21 +420,15 @@ public class Network {
return this.selfPeers.stream().anyMatch(selfPeer -> selfPeer.equals(peerAddress));
};
/**
* Must be inside <tt>synchronized (this.connectedPeers) {...}</tt>
*/
private final Predicate<PeerData> isConnectedPeer = peerData -> {
PeerAddress peerAddress = peerData.getAddress();
return this.connectedPeers.stream().anyMatch(peer -> peer.getPeerData().getAddress().equals(peerAddress));
return this.getImmutableConnectedPeers().stream().anyMatch(peer -> peer.getPeerData().getAddress().equals(peerAddress));
};
/**
* Must be inside <tt>synchronized (this.connectedPeers) {...}</tt>
*/
private final Predicate<PeerData> isResolvedAsConnectedPeer = peerData -> {
try {
InetSocketAddress resolvedSocketAddress = peerData.getAddress().toSocketAddress();
return this.connectedPeers.stream()
return this.getImmutableConnectedPeers().stream()
.anyMatch(peer -> peer.getResolvedAddress().equals(resolvedSocketAddress));
} catch (UnknownHostException e) {
// Can't resolve - no point even trying to connect
@@ -448,7 +494,7 @@ public class Network {
}
private Task maybeProducePeerMessageTask() {
for (Peer peer : getConnectedPeers()) {
for (Peer peer : getImmutableConnectedPeers()) {
Task peerTask = peer.getMessageTask();
if (peerTask != null) {
return peerTask;
@@ -460,7 +506,7 @@ public class Network {
private Task maybeProducePeerPingTask(Long now) {
// Ask connected peers whether they need a ping
for (Peer peer : getHandshakedPeers()) {
for (Peer peer : getImmutableHandshakedPeers()) {
Task peerTask = peer.getPingTask(now);
if (peerTask != null) {
return peerTask;
@@ -488,7 +534,7 @@ public class Network {
return null;
}
if (getOutboundHandshakedPeers().size() >= minOutboundPeers) {
if (getImmutableOutboundHandshakedPeers().size() >= minOutboundPeers) {
return null;
}
@@ -641,19 +687,18 @@ public class Network {
return;
}
synchronized (this.connectedPeers) {
if (connectedPeers.size() >= maxPeers) {
// We have enough peers
LOGGER.debug("Connection discarded from peer {} because the server is full", address);
socketChannel.close();
return;
}
LOGGER.debug("Connection accepted from peer {}", address);
newPeer = new Peer(socketChannel, channelSelector);
this.connectedPeers.add(newPeer);
if (getImmutableConnectedPeers().size() >= maxPeers) {
// We have enough peers
LOGGER.debug("Connection discarded from peer {} because the server is full", address);
socketChannel.close();
return;
}
LOGGER.debug("Connection accepted from peer {}", address);
newPeer = new Peer(socketChannel, channelSelector);
this.addConnectedPeer(newPeer);
} catch (IOException e) {
if (socketChannel.isOpen()) {
try {
@@ -701,16 +746,14 @@ public class Network {
peers.removeIf(isSelfPeer);
}
synchronized (this.connectedPeers) {
// Don't consider already connected peers (simple address match)
peers.removeIf(isConnectedPeer);
// Don't consider already connected peers (simple address match)
peers.removeIf(isConnectedPeer);
// Don't consider already connected peers (resolved address match)
// XXX This might be too slow if we end up waiting a long time for hostnames to resolve via DNS
peers.removeIf(isResolvedAsConnectedPeer);
// Don't consider already connected peers (resolved address match)
// XXX This might be too slow if we end up waiting a long time for hostnames to resolve via DNS
peers.removeIf(isResolvedAsConnectedPeer);
this.checkLongestConnection(now);
}
this.checkLongestConnection(now);
// Any left?
if (peers.isEmpty()) {
@@ -748,21 +791,16 @@ public class Network {
return false;
}
synchronized (this.connectedPeers) {
this.connectedPeers.add(newPeer);
}
this.addConnectedPeer(newPeer);
this.onPeerReady(newPeer);
return true;
}
private Peer getPeerFromChannel(SocketChannel socketChannel) {
synchronized (this.connectedPeers) {
for (Peer peer : this.connectedPeers) {
if (peer.getSocketChannel() == socketChannel) {
return peer;
}
for (Peer peer : this.getImmutableConnectedPeers()) {
if (peer.getSocketChannel() == socketChannel) {
return peer;
}
}
@@ -775,7 +813,7 @@ public class Network {
}
// Find peers that have reached their maximum connection age, and disconnect them
List<Peer> peersToDisconnect = this.connectedPeers.stream()
List<Peer> peersToDisconnect = this.getImmutableConnectedPeers().stream()
.filter(peer -> !peer.isSyncInProgress())
.filter(peer -> peer.hasReachedMaxConnectionAge())
.collect(Collectors.toList());
@@ -826,9 +864,7 @@ public class Network {
LOGGER.debug("[{}] Failed to connect to peer {}", peer.getPeerConnectionId(), peer);
}
synchronized (this.connectedPeers) {
this.connectedPeers.remove(peer);
}
this.removeConnectedPeer(peer);
}
public void peerMisbehaved(Peer peer) {
@@ -989,6 +1025,9 @@ public class Network {
return;
}
// Add to handshaked peers cache
this.addHandshakedPeer(peer);
// Make a note that we've successfully completed handshake (and when)
peer.getPeerData().setLastConnected(NTP.getTime());
@@ -1128,6 +1167,7 @@ public class Network {
return;
}
String host = parts[0];
try {
InetAddress addr = InetAddress.getByName(host);
if (addr.isAnyLocalAddress() || addr.isSiteLocalAddress()) {
@@ -1138,6 +1178,9 @@ public class Network {
return;
}
// Keep track of the port
this.ourExternalPort = Integer.parseInt(parts[1]);
// Add to the list
this.ourExternalIpAddressHistory.add(host);
@@ -1191,8 +1234,6 @@ public class Network {
public void onExternalIpUpdate(String ipAddress) {
LOGGER.info("External IP address updated to {}", ipAddress);
//ArbitraryDataManager.getInstance().broadcastHostedSignatureList();
}
public String getOurExternalIpAddress() {
@@ -1200,6 +1241,14 @@ public class Network {
return this.ourExternalIpAddress;
}
public String getOurExternalIpAddressAndPort() {
String ipAddress = this.getOurExternalIpAddress();
if (ipAddress == null) {
return null;
}
return String.format("%s:%d", ipAddress, this.ourExternalPort);
}
// Peer-management calls
@@ -1241,7 +1290,7 @@ public class Network {
}
}
for (Peer peer : this.getConnectedPeers()) {
for (Peer peer : this.getImmutableConnectedPeers()) {
peer.disconnect("to be forgotten");
}
@@ -1253,7 +1302,7 @@ public class Network {
try {
InetSocketAddress knownAddress = peerAddress.toSocketAddress();
List<Peer> peers = this.getConnectedPeers();
List<Peer> peers = this.getImmutableConnectedPeers();
peers.removeIf(peer -> !Peer.addressEquals(knownAddress, peer.getResolvedAddress()));
for (Peer peer : peers) {
@@ -1273,7 +1322,8 @@ public class Network {
}
// Disconnect peers that are stuck during handshake
List<Peer> handshakePeers = this.getConnectedPeers();
// Needs a mutable copy of the unmodifiableList
List<Peer> handshakePeers = new ArrayList<>(this.getImmutableConnectedPeers());
// Disregard peers that have completed handshake or only connected recently
handshakePeers.removeIf(peer -> peer.getHandshakeStatus() == Handshake.COMPLETED
@@ -1315,9 +1365,7 @@ public class Network {
peers.removeIf(isNotOldPeer);
// Don't consider already connected peers (simple address match)
synchronized (this.connectedPeers) {
peers.removeIf(isConnectedPeer);
}
peers.removeIf(isConnectedPeer);
for (PeerData peerData : peers) {
LOGGER.debug("Deleting old peer {} from repository", peerData.getAddress().toString());
@@ -1452,7 +1500,7 @@ public class Network {
}
try {
broadcastExecutor.execute(new Broadcaster(this.getHandshakedPeers(), peerMessageBuilder));
broadcastExecutor.execute(new Broadcaster(this.getImmutableHandshakedPeers(), peerMessageBuilder));
} catch (RejectedExecutionException e) {
// Can't execute - probably because we're shutting down, so ignore
}
@@ -1490,7 +1538,7 @@ public class Network {
}
// Close all peer connections
for (Peer peer : this.getConnectedPeers()) {
for (Peer peer : this.getImmutableConnectedPeers()) {
peer.shutdown();
}
}

View File

@@ -0,0 +1,95 @@
package org.qortal.network.message;
import com.google.common.primitives.Ints;
import org.qortal.arbitrary.ArbitraryDataFile;
import org.qortal.repository.DataException;
import org.qortal.transform.Transformer;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
public class ArbitraryMetadataMessage extends Message {
private static final int SIGNATURE_LENGTH = Transformer.SIGNATURE_LENGTH;
private final byte[] signature;
private final ArbitraryDataFile arbitraryMetadataFile;
public ArbitraryMetadataMessage(byte[] signature, ArbitraryDataFile arbitraryDataFile) {
super(MessageType.ARBITRARY_METADATA);
this.signature = signature;
this.arbitraryMetadataFile = arbitraryDataFile;
}
public ArbitraryMetadataMessage(int id, byte[] signature, ArbitraryDataFile arbitraryDataFile) {
super(id, MessageType.ARBITRARY_METADATA);
this.signature = signature;
this.arbitraryMetadataFile = arbitraryDataFile;
}
public byte[] getSignature() {
return this.signature;
}
public ArbitraryDataFile getArbitraryMetadataFile() {
return this.arbitraryMetadataFile;
}
public static Message fromByteBuffer(int id, ByteBuffer byteBuffer) throws UnsupportedEncodingException {
byte[] signature = new byte[SIGNATURE_LENGTH];
byteBuffer.get(signature);
int dataLength = byteBuffer.getInt();
if (byteBuffer.remaining() != dataLength)
return null;
byte[] data = new byte[dataLength];
byteBuffer.get(data);
try {
ArbitraryDataFile arbitraryMetadataFile = new ArbitraryDataFile(data, signature);
return new ArbitraryMetadataMessage(id, signature, arbitraryMetadataFile);
}
catch (DataException e) {
return null;
}
}
@Override
protected byte[] toData() {
if (this.arbitraryMetadataFile == null) {
return null;
}
byte[] data = this.arbitraryMetadataFile.getBytes();
if (data == null) {
return null;
}
try {
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bytes.write(signature);
bytes.write(Ints.toByteArray(data.length));
bytes.write(data);
return bytes.toByteArray();
} catch (IOException e) {
return null;
}
}
public ArbitraryMetadataMessage cloneWithNewId(int newId) {
ArbitraryMetadataMessage clone = new ArbitraryMetadataMessage(this.signature, this.arbitraryMetadataFile);
clone.setId(newId);
return clone;
}
}

View File

@@ -2,8 +2,11 @@ package org.qortal.network.message;
import com.google.common.primitives.Ints;
import com.google.common.primitives.Longs;
import org.qortal.data.network.PeerData;
import org.qortal.transform.TransformationException;
import org.qortal.transform.Transformer;
import org.qortal.transform.transaction.TransactionTransformer;
import org.qortal.utils.Serialization;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
@@ -19,23 +22,26 @@ public class GetArbitraryDataFileListMessage extends Message {
private static final int SIGNATURE_LENGTH = Transformer.SIGNATURE_LENGTH;
private static final int HASH_LENGTH = TransactionTransformer.SHA256_LENGTH;
private static final int MAX_PEER_ADDRESS_LENGTH = PeerData.MAX_PEER_ADDRESS_SIZE;
private final byte[] signature;
private List<byte[]> hashes;
private final long requestTime;
private int requestHops;
private String requestingPeer;
public GetArbitraryDataFileListMessage(byte[] signature, List<byte[]> hashes, long requestTime, int requestHops) {
this(-1, signature, hashes, requestTime, requestHops);
public GetArbitraryDataFileListMessage(byte[] signature, List<byte[]> hashes, long requestTime, int requestHops, String requestingPeer) {
this(-1, signature, hashes, requestTime, requestHops, requestingPeer);
}
private GetArbitraryDataFileListMessage(int id, byte[] signature, List<byte[]> hashes, long requestTime, int requestHops) {
private GetArbitraryDataFileListMessage(int id, byte[] signature, List<byte[]> hashes, long requestTime, int requestHops, String requestingPeer) {
super(id, MessageType.GET_ARBITRARY_DATA_FILE_LIST);
this.signature = signature;
this.hashes = hashes;
this.requestTime = requestTime;
this.requestHops = requestHops;
this.requestingPeer = requestingPeer;
}
public byte[] getSignature() {
@@ -46,7 +52,7 @@ public class GetArbitraryDataFileListMessage extends Message {
return this.hashes;
}
public static Message fromByteBuffer(int id, ByteBuffer bytes) throws UnsupportedEncodingException {
public static Message fromByteBuffer(int id, ByteBuffer bytes) throws UnsupportedEncodingException, TransformationException {
byte[] signature = new byte[SIGNATURE_LENGTH];
bytes.get(signature);
@@ -59,10 +65,6 @@ public class GetArbitraryDataFileListMessage extends Message {
if (bytes.hasRemaining()) {
int hashCount = bytes.getInt();
if (bytes.remaining() != hashCount * HASH_LENGTH) {
return null;
}
hashes = new ArrayList<>();
for (int i = 0; i < hashCount; ++i) {
byte[] hash = new byte[HASH_LENGTH];
@@ -71,7 +73,12 @@ public class GetArbitraryDataFileListMessage extends Message {
}
}
return new GetArbitraryDataFileListMessage(id, signature, hashes, requestTime, requestHops);
String requestingPeer = null;
if (bytes.hasRemaining()) {
requestingPeer = Serialization.deserializeSizedStringV2(bytes, MAX_PEER_ADDRESS_LENGTH);
}
return new GetArbitraryDataFileListMessage(id, signature, hashes, requestTime, requestHops, requestingPeer);
}
@Override
@@ -92,6 +99,13 @@ public class GetArbitraryDataFileListMessage extends Message {
bytes.write(hash);
}
}
else {
bytes.write(Ints.toByteArray(0));
}
if (this.requestingPeer != null) {
Serialization.serializeSizedStringV2(bytes, this.requestingPeer);
}
return bytes.toByteArray();
} catch (IOException e) {
@@ -110,4 +124,8 @@ public class GetArbitraryDataFileListMessage extends Message {
this.requestHops = requestHops;
}
public String getRequestingPeer() {
return this.requestingPeer;
}
}
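One small design note: requestingPeer is appended last and only when set, and read only when bytes remain, which keeps the message compatible with peers on older builds. A generic sketch of that optional-trailing-field idiom, with placeholder names (optionalField, out, MAX_LENGTH):

// writer side: append the optional field after all mandatory fields
if (optionalField != null)
    Serialization.serializeSizedStringV2(out, optionalField);

// reader side: older peers simply won't have sent it
String optionalField = byteBuffer.hasRemaining()
        ? Serialization.deserializeSizedStringV2(byteBuffer, MAX_LENGTH)
        : null;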

View File

@@ -0,0 +1,83 @@
package org.qortal.network.message;
import com.google.common.primitives.Ints;
import com.google.common.primitives.Longs;
import org.qortal.transform.Transformer;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import static org.qortal.transform.Transformer.INT_LENGTH;
import static org.qortal.transform.Transformer.LONG_LENGTH;
public class GetArbitraryMetadataMessage extends Message {
private static final int SIGNATURE_LENGTH = Transformer.SIGNATURE_LENGTH;
private final byte[] signature;
private final long requestTime;
private int requestHops;
public GetArbitraryMetadataMessage(byte[] signature, long requestTime, int requestHops) {
this(-1, signature, requestTime, requestHops);
}
private GetArbitraryMetadataMessage(int id, byte[] signature, long requestTime, int requestHops) {
super(id, MessageType.GET_ARBITRARY_METADATA);
this.signature = signature;
this.requestTime = requestTime;
this.requestHops = requestHops;
}
public byte[] getSignature() {
return this.signature;
}
public static Message fromByteBuffer(int id, ByteBuffer bytes) throws UnsupportedEncodingException {
if (bytes.remaining() != SIGNATURE_LENGTH + LONG_LENGTH + INT_LENGTH)
return null;
byte[] signature = new byte[SIGNATURE_LENGTH];
bytes.get(signature);
long requestTime = bytes.getLong();
int requestHops = bytes.getInt();
return new GetArbitraryMetadataMessage(id, signature, requestTime, requestHops);
}
@Override
protected byte[] toData() {
try {
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bytes.write(this.signature);
bytes.write(Longs.toByteArray(this.requestTime));
bytes.write(Ints.toByteArray(this.requestHops));
return bytes.toByteArray();
} catch (IOException e) {
return null;
}
}
public long getRequestTime() {
return this.requestTime;
}
public int getRequestHops() {
return this.requestHops;
}
public void setRequestHops(int requestHops) {
this.requestHops = requestHops;
}
}

View File

@@ -0,0 +1,110 @@
package org.qortal.network.message;
import com.google.common.primitives.Ints;
import com.google.common.primitives.Longs;
import org.qortal.data.network.TradePresenceData;
import org.qortal.transform.Transformer;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* For requesting trade presences from remote peer, given our list of known trade presences.
*
* Groups of: number of entries, timestamp, then AT trade pubkey for each entry.
*/
public class GetTradePresencesMessage extends Message {
private List<TradePresenceData> tradePresences;
private byte[] cachedData;
public GetTradePresencesMessage(List<TradePresenceData> tradePresences) {
this(-1, tradePresences);
}
private GetTradePresencesMessage(int id, List<TradePresenceData> tradePresences) {
super(id, MessageType.GET_TRADE_PRESENCES);
this.tradePresences = tradePresences;
}
public List<TradePresenceData> getTradePresences() {
return this.tradePresences;
}
public static Message fromByteBuffer(int id, ByteBuffer bytes) throws UnsupportedEncodingException {
int groupedEntriesCount = bytes.getInt();
List<TradePresenceData> tradePresences = new ArrayList<>(groupedEntriesCount);
while (groupedEntriesCount > 0) {
long timestamp = bytes.getLong();
for (int i = 0; i < groupedEntriesCount; ++i) {
byte[] publicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
bytes.get(publicKey);
tradePresences.add(new TradePresenceData(timestamp, publicKey));
}
if (bytes.hasRemaining()) {
groupedEntriesCount = bytes.getInt();
} else {
// we've finished
groupedEntriesCount = 0;
}
}
return new GetTradePresencesMessage(id, tradePresences);
}
@Override
protected synchronized byte[] toData() {
if (this.cachedData != null)
return this.cachedData;
// Shortcut in case we have no trade presences
if (this.tradePresences.isEmpty()) {
this.cachedData = Ints.toByteArray(0);
return this.cachedData;
}
// How many of each timestamp
Map<Long, Integer> countByTimestamp = new HashMap<>();
for (TradePresenceData tradePresenceData : this.tradePresences) {
Long timestamp = tradePresenceData.getTimestamp();
countByTimestamp.compute(timestamp, (k, v) -> v == null ? 1 : ++v);
}
// We should know exactly how many bytes to allocate now
int byteSize = countByTimestamp.size() * (Transformer.INT_LENGTH + Transformer.TIMESTAMP_LENGTH)
+ this.tradePresences.size() * Transformer.PUBLIC_KEY_LENGTH;
try {
ByteArrayOutputStream bytes = new ByteArrayOutputStream(byteSize);
for (long timestamp : countByTimestamp.keySet()) {
bytes.write(Ints.toByteArray(countByTimestamp.get(timestamp)));
bytes.write(Longs.toByteArray(timestamp));
for (TradePresenceData tradePresenceData : this.tradePresences) {
if (tradePresenceData.getTimestamp() == timestamp)
bytes.write(tradePresenceData.getPublicKey());
}
}
this.cachedData = bytes.toByteArray();
return this.cachedData;
} catch (IOException e) {
return null;
}
}
}
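To make the grouped wire format concrete, a small sketch with placeholder timestamps and 32-byte public keys (per Transformer.PUBLIC_KEY_LENGTH):

long T1 = 1_650_000_000_000L, T2 = 1_650_000_060_000L;   // placeholder timestamps
byte[] keyA = new byte[32], keyB = new byte[32], keyC = new byte[32]; // placeholder public keys
Message message = new GetTradePresencesMessage(List.of(
        new TradePresenceData(T1, keyA),
        new TradePresenceData(T1, keyB),
        new TradePresenceData(T2, keyC)));
// toData() then writes two groups: [int32 2][int64 T1][keyA][keyB] and [int32 1][int64 T2][keyC]
// (group order follows the internal map and so is not guaranteed)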

View File

@@ -93,7 +93,13 @@ public abstract class Message {
ARBITRARY_DATA_FILE_LIST(120),
GET_ARBITRARY_DATA_FILE_LIST(121),
ARBITRARY_SIGNATURES(130);
ARBITRARY_SIGNATURES(130),
TRADE_PRESENCES(140),
GET_TRADE_PRESENCES(141),
ARBITRARY_METADATA(150),
GET_ARBITRARY_METADATA(151);
public final int value;
public final Method fromByteBufferMethod;

View File

@@ -0,0 +1,123 @@
package org.qortal.network.message;
import com.google.common.primitives.Ints;
import com.google.common.primitives.Longs;
import org.qortal.data.network.TradePresenceData;
import org.qortal.transform.Transformer;
import org.qortal.utils.Base58;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
* For sending list of trade presences to remote peer.
*
* Groups of: number of entries, timestamp, then pubkey + sig + AT address for each entry.
*/
public class TradePresencesMessage extends Message {
private List<TradePresenceData> tradePresences;
private byte[] cachedData;
public TradePresencesMessage(List<TradePresenceData> tradePresences) {
this(-1, tradePresences);
}
private TradePresencesMessage(int id, List<TradePresenceData> tradePresences) {
super(id, MessageType.TRADE_PRESENCES);
this.tradePresences = tradePresences;
}
public List<TradePresenceData> getTradePresences() {
return this.tradePresences;
}
public static Message fromByteBuffer(int id, ByteBuffer bytes) throws UnsupportedEncodingException {
int groupedEntriesCount = bytes.getInt();
List<TradePresenceData> tradePresences = new ArrayList<>(groupedEntriesCount);
while (groupedEntriesCount > 0) {
long timestamp = bytes.getLong();
for (int i = 0; i < groupedEntriesCount; ++i) {
byte[] publicKey = new byte[Transformer.PUBLIC_KEY_LENGTH];
bytes.get(publicKey);
byte[] signature = new byte[Transformer.SIGNATURE_LENGTH];
bytes.get(signature);
byte[] atAddressBytes = new byte[Transformer.ADDRESS_LENGTH];
bytes.get(atAddressBytes);
String atAddress = Base58.encode(atAddressBytes);
tradePresences.add(new TradePresenceData(timestamp, publicKey, signature, atAddress));
}
if (bytes.hasRemaining()) {
groupedEntriesCount = bytes.getInt();
} else {
// we've finished
groupedEntriesCount = 0;
}
}
return new TradePresencesMessage(id, tradePresences);
}
@Override
protected synchronized byte[] toData() {
if (this.cachedData != null)
return this.cachedData;
// Shortcut in case we have no trade presences
if (this.tradePresences.isEmpty()) {
this.cachedData = Ints.toByteArray(0);
return this.cachedData;
}
// How many of each timestamp
Map<Long, Integer> countByTimestamp = new HashMap<>();
for (TradePresenceData tradePresenceData : this.tradePresences) {
Long timestamp = tradePresenceData.getTimestamp();
countByTimestamp.compute(timestamp, (k, v) -> v == null ? 1 : ++v);
}
// We should know exactly how many bytes to allocate now
int byteSize = countByTimestamp.size() * (Transformer.INT_LENGTH + Transformer.TIMESTAMP_LENGTH)
+ this.tradePresences.size() * (Transformer.PUBLIC_KEY_LENGTH + Transformer.SIGNATURE_LENGTH + Transformer.ADDRESS_LENGTH);
try {
ByteArrayOutputStream bytes = new ByteArrayOutputStream(byteSize);
for (long timestamp : countByTimestamp.keySet()) {
bytes.write(Ints.toByteArray(countByTimestamp.get(timestamp)));
bytes.write(Longs.toByteArray(timestamp));
for (TradePresenceData tradePresenceData : this.tradePresences) {
if (tradePresenceData.getTimestamp() == timestamp) {
bytes.write(tradePresenceData.getPublicKey());
bytes.write(tradePresenceData.getSignature());
bytes.write(Base58.decode(tradePresenceData.getAtAddress()));
}
}
}
this.cachedData = bytes.toByteArray();
return this.cachedData;
} catch (IOException e) {
return null;
}
}
}
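As a concrete check of the byteSize pre-computation above: assuming the usual Transformer constants (4-byte INT_LENGTH, 8-byte TIMESTAMP_LENGTH, 32-byte PUBLIC_KEY_LENGTH, 64-byte SIGNATURE_LENGTH and 25-byte ADDRESS_LENGTH), a message carrying 5 trade presences spread over 2 distinct timestamps allocates 2 × (4 + 8) + 5 × (32 + 64 + 25) = 24 + 605 = 629 bytes, which is exactly what the grouped writes then produce.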

View File

@@ -76,6 +76,9 @@ public interface AccountRepository {
*/
public void setBlocksMintedAdjustment(AccountData accountData) throws DataException;
/** Returns account's minted block count or null if account not found. */
public Integer getMintedBlockCount(String address) throws DataException;
/**
* Saves account's minted block count and public key if present, in repository.
* <p>
@@ -149,6 +152,8 @@ public interface AccountRepository {
public RewardShareData getRewardShare(byte[] rewardSharePublicKey) throws DataException;
public List<byte[]> getRewardSharePublicKeys() throws DataException;
public boolean isRewardSharePublicKey(byte[] publicKey) throws DataException;
/** Returns number of active reward-shares involving passed public key as the minting account only. */
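One point worth making explicit about the new getMintedBlockCount() above: the Integer return distinguishes "account not found" (null) from an account that simply has not minted yet (0). A minimal caller sketch, with hypothetical names and the DataException left to propagate:
// Hypothetical helper, not part of the codebase.
static int mintedBlockCountOrZero(AccountRepository accountRepository, String address) throws DataException {
    Integer mintedBlockCount = accountRepository.getMintedBlockCount(address);
    return mintedBlockCount != null ? mintedBlockCount : 0; // treat unknown accounts as zero
}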

View File

@@ -30,17 +30,4 @@ public interface ArbitraryRepository {
public List<ArbitraryResourceNameInfo> getArbitraryResourceCreatorNames(Service service, String identifier, boolean defaultResource, Integer limit, Integer offset, Boolean reverse) throws DataException;
public List<ArbitraryPeerData> getArbitraryPeerDataForSignature(byte[] signature) throws DataException;
public ArbitraryPeerData getArbitraryPeerDataForSignatureAndPeer(byte[] signature, String peerAddress) throws DataException;
public ArbitraryPeerData getArbitraryPeerDataForSignatureAndHost(byte[] signature, String host) throws DataException;
public void save(ArbitraryPeerData arbitraryPeerData) throws DataException;
public void delete(ArbitraryPeerData arbitraryPeerData) throws DataException;
public void deleteArbitraryPeersWithSignature(byte[] signature) throws DataException;
}

View File

@@ -23,7 +23,7 @@ import java.util.*;
public class BlockArchiveReader {
private static BlockArchiveReader instance;
private Map<String, Triple<Integer, Integer, Integer>> fileListCache = Collections.synchronizedMap(new HashMap<>());
private Map<String, Triple<Integer, Integer, Integer>> fileListCache;
private static final Logger LOGGER = LogManager.getLogger(BlockArchiveReader.class);
@@ -63,11 +63,11 @@ public class BlockArchiveReader {
map.put(filename, new Triple(startHeight, endHeight, range));
}
}
this.fileListCache = map;
this.fileListCache = Map.copyOf(map);
}
public Triple<BlockData, List<TransactionData>, List<ATStateData>> fetchBlockAtHeight(int height) {
if (this.fileListCache.isEmpty()) {
if (this.fileListCache == null) {
this.fetchFileList();
}
@@ -94,7 +94,7 @@ public class BlockArchiveReader {
public Triple<BlockData, List<TransactionData>, List<ATStateData>> fetchBlockWithSignature(
byte[] signature, Repository repository) {
if (this.fileListCache.isEmpty()) {
if (this.fileListCache == null) {
this.fetchFileList();
}
@@ -145,22 +145,24 @@ public class BlockArchiveReader {
}
private String getFilenameForHeight(int height) {
synchronized (this.fileListCache) {
Iterator it = this.fileListCache.entrySet().iterator();
while (it.hasNext()) {
Map.Entry pair = (Map.Entry) it.next();
if (pair == null && pair.getKey() == null && pair.getValue() == null) {
continue;
}
Triple<Integer, Integer, Integer> heightInfo = (Triple<Integer, Integer, Integer>) pair.getValue();
Integer startHeight = heightInfo.getA();
Integer endHeight = heightInfo.getB();
if (this.fileListCache == null) {
this.fetchFileList();
}
if (height >= startHeight && height <= endHeight) {
// Found the correct file
String filename = (String) pair.getKey();
return filename;
}
Iterator it = this.fileListCache.entrySet().iterator();
while (it.hasNext()) {
Map.Entry pair = (Map.Entry) it.next();
if (pair == null && pair.getKey() == null && pair.getValue() == null) {
continue;
}
Triple<Integer, Integer, Integer> heightInfo = (Triple<Integer, Integer, Integer>) pair.getValue();
Integer startHeight = heightInfo.getA();
Integer endHeight = heightInfo.getB();
if (height >= startHeight && height <= endHeight) {
// Found the correct file
String filename = (String) pair.getKey();
return filename;
}
}
@@ -168,8 +170,7 @@ public class BlockArchiveReader {
}
public byte[] fetchSerializedBlockBytesForSignature(byte[] signature, boolean includeHeightPrefix, Repository repository) {
if (this.fileListCache.isEmpty()) {
if (this.fileListCache == null) {
this.fetchFileList();
}
@@ -280,7 +281,7 @@ public class BlockArchiveReader {
}
public void invalidateFileListCache() {
this.fileListCache.clear();
this.fileListCache = null;
}
}
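The cache here changes from an eagerly created synchronized HashMap to a lazily built immutable snapshot: null now means "not loaded yet", fetchFileList() publishes a Map.copyOf() snapshot, and invalidateFileListCache() simply resets the field to null instead of clearing a shared map. (Incidentally, the carried-over guard "pair == null && pair.getKey() == null && pair.getValue() == null" presumably intends || rather than &&; as written it can never skip an entry, though entrySet() never yields null entries in any case.) A minimal sketch of the lazy-snapshot pattern, using hypothetical names:
import java.util.HashMap;
import java.util.Map;
// Hypothetical illustration of the lazy, immutable-snapshot caching adopted above;
// not the actual BlockArchiveReader.
class LazySnapshotCacheDemo {
    private volatile Map<String, Integer> cache; // null = not built yet

    Map<String, Integer> getCache() {
        Map<String, Integer> snapshot = this.cache;
        if (snapshot == null) {
            Map<String, Integer> fresh = new HashMap<>();
            // ... populate 'fresh', e.g. by scanning archive files on disk ...
            snapshot = Map.copyOf(fresh); // immutable, safe to hand out without synchronization
            this.cache = snapshot;
        }
        return snapshot;
    }

    void invalidate() {
        this.cache = null; // the next getCache() call rebuilds the snapshot
    }
}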

View File

@@ -419,8 +419,8 @@ public class Bootstrap {
downloaded += bytesRead;
if (fileSize > 0) {
int progress = (int)((double)downloaded / (double)fileSize * 100);
SplashFrame.getInstance().updateStatus(String.format("Downloading %s bootstrap... (%d%%)", type, progress));
double progress = (double)downloaded / (double)fileSize * 100;
SplashFrame.getInstance().updateStatus(String.format("Downloading %s bootstrap... (%.1f%%)", type, progress));
}
}
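Switching the progress value from int to double (and %d to %.1f) just adds one decimal place to the splash-screen status. With hypothetical numbers:
// Hypothetical values: 1,234,567 of 8,000,000 bytes downloaded, type label "archive".
double progress = (double) 1_234_567 / (double) 8_000_000 * 100;                   // 15.4320875
String status = String.format("Downloading %s bootstrap... (%.1f%%)", "archive", progress);
// status is "Downloading archive bootstrap... (15.4%)"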

View File

@@ -241,6 +241,20 @@ public class HSQLDBAccountRepository implements AccountRepository {
}
}
@Override
public Integer getMintedBlockCount(String address) throws DataException {
String sql = "SELECT blocks_minted FROM Accounts WHERE account = ?";
try (ResultSet resultSet = this.repository.checkedExecute(sql, address)) {
if (resultSet == null)
return null;
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Unable to fetch account's minted block count from repository", e);
}
}
@Override
public void setMintedBlockCount(AccountData accountData) throws DataException {
HSQLDBSaver saveHelper = new HSQLDBSaver("Accounts");
@@ -633,6 +647,27 @@ public class HSQLDBAccountRepository implements AccountRepository {
}
}
@Override
public List<byte[]> getRewardSharePublicKeys() throws DataException {
String sql = "SELECT reward_share_public_key FROM RewardShares ORDER BY reward_share_public_key";
List<byte[]> rewardSharePublicKeys = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
if (resultSet == null)
return null;
do {
byte[] rewardSharePublicKey = resultSet.getBytes(1);
rewardSharePublicKeys.add(rewardSharePublicKey);
} while (resultSet.next());
return rewardSharePublicKeys;
} catch (SQLException e) {
throw new DataException("Unable to fetch reward-share public keys from repository", e);
}
}
@Override
public boolean isRewardSharePublicKey(byte[] publicKey) throws DataException {
try {
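A small JDBC caveat for the new getMintedBlockCount() above: ResultSet.getInt() maps SQL NULL to 0, so the method can only distinguish "account missing" from "zero blocks minted", not a NULL blocks_minted value. If that column could ever be NULL, a wasNull() check would be needed - a hypothetical helper, not part of the codebase:
import java.sql.ResultSet;
import java.sql.SQLException;
// Reads a nullable INTEGER column: returns null when the stored value is SQL NULL,
// which plain ResultSet.getInt() would silently report as 0.
static Integer readNullableInt(ResultSet resultSet, int columnIndex) throws SQLException {
    int value = resultSet.getInt(columnIndex);
    return resultSet.wasNull() ? null : value;
}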

View File

@@ -499,149 +499,4 @@ public class HSQLDBArbitraryRepository implements ArbitraryRepository {
}
}
// Peer file tracking
/**
* Fetch a list of peers that have reported to be holding chunks related to
* supplied transaction signature.
* @param signature
* @return a list of ArbitraryPeerData objects, or null if none found
* @throws DataException
*/
@Override
public List<ArbitraryPeerData> getArbitraryPeerDataForSignature(byte[] signature) throws DataException {
// Hash the signature so it fits within 32 bytes
byte[] hashedSignature = Crypto.digest(signature);
String sql = "SELECT hash, peer_address, successes, failures, last_attempted, last_retrieved " +
"FROM ArbitraryPeers " +
"WHERE hash = ?";
List<ArbitraryPeerData> arbitraryPeerData = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(sql, hashedSignature)) {
if (resultSet == null)
return null;
do {
byte[] hash = resultSet.getBytes(1);
String peerAddr = resultSet.getString(2);
Integer successes = resultSet.getInt(3);
Integer failures = resultSet.getInt(4);
Long lastAttempted = resultSet.getLong(5);
Long lastRetrieved = resultSet.getLong(6);
ArbitraryPeerData peerData = new ArbitraryPeerData(hash, peerAddr, successes, failures,
lastAttempted, lastRetrieved);
arbitraryPeerData.add(peerData);
} while (resultSet.next());
return arbitraryPeerData;
} catch (SQLException e) {
throw new DataException("Unable to fetch arbitrary peer data from repository", e);
}
}
public ArbitraryPeerData getArbitraryPeerDataForSignatureAndPeer(byte[] signature, String peerAddress) throws DataException {
// Hash the signature so it fits within 32 bytes
byte[] hashedSignature = Crypto.digest(signature);
String sql = "SELECT hash, peer_address, successes, failures, last_attempted, last_retrieved " +
"FROM ArbitraryPeers " +
"WHERE hash = ? AND peer_address = ?";
try (ResultSet resultSet = this.repository.checkedExecute(sql, hashedSignature, peerAddress)) {
if (resultSet == null)
return null;
byte[] hash = resultSet.getBytes(1);
String peerAddr = resultSet.getString(2);
Integer successes = resultSet.getInt(3);
Integer failures = resultSet.getInt(4);
Long lastAttempted = resultSet.getLong(5);
Long lastRetrieved = resultSet.getLong(6);
ArbitraryPeerData arbitraryPeerData = new ArbitraryPeerData(hash, peerAddr, successes, failures,
lastAttempted, lastRetrieved);
return arbitraryPeerData;
} catch (SQLException e) {
throw new DataException("Unable to fetch arbitrary peer data from repository", e);
}
}
public ArbitraryPeerData getArbitraryPeerDataForSignatureAndHost(byte[] signature, String host) throws DataException {
// Hash the signature so it fits within 32 bytes
byte[] hashedSignature = Crypto.digest(signature);
// Create a host wildcard string which allows any port
String hostWildcard = String.format("%s:%%", host);
String sql = "SELECT hash, peer_address, successes, failures, last_attempted, last_retrieved " +
"FROM ArbitraryPeers " +
"WHERE hash = ? AND peer_address LIKE ?";
try (ResultSet resultSet = this.repository.checkedExecute(sql, hashedSignature, hostWildcard)) {
if (resultSet == null)
return null;
byte[] hash = resultSet.getBytes(1);
String peerAddr = resultSet.getString(2);
Integer successes = resultSet.getInt(3);
Integer failures = resultSet.getInt(4);
Long lastAttempted = resultSet.getLong(5);
Long lastRetrieved = resultSet.getLong(6);
ArbitraryPeerData arbitraryPeerData = new ArbitraryPeerData(hash, peerAddr, successes, failures,
lastAttempted, lastRetrieved);
return arbitraryPeerData;
} catch (SQLException e) {
throw new DataException("Unable to fetch arbitrary peer data from repository", e);
}
}
@Override
public void save(ArbitraryPeerData arbitraryPeerData) throws DataException {
HSQLDBSaver saveHelper = new HSQLDBSaver("ArbitraryPeers");
saveHelper.bind("hash", arbitraryPeerData.getHash())
.bind("peer_address", arbitraryPeerData.getPeerAddress())
.bind("successes", arbitraryPeerData.getSuccesses())
.bind("failures", arbitraryPeerData.getFailures())
.bind("last_attempted", arbitraryPeerData.getLastAttempted())
.bind("last_retrieved", arbitraryPeerData.getLastRetrieved());
try {
saveHelper.execute(this.repository);
} catch (SQLException e) {
throw new DataException("Unable to save ArbitraryPeerData into repository", e);
}
}
@Override
public void delete(ArbitraryPeerData arbitraryPeerData) throws DataException {
try {
// Remove peer/hash combination
this.repository.delete("ArbitraryPeers", "hash = ? AND peer_address = ?",
arbitraryPeerData.getHash(), arbitraryPeerData.getPeerAddress());
} catch (SQLException e) {
throw new DataException("Unable to delete arbitrary peer data from repository", e);
}
}
@Override
public void deleteArbitraryPeersWithSignature(byte[] signature) throws DataException {
byte[] hash = Crypto.digest(signature);
try {
// Remove all records of peers hosting supplied signature
this.repository.delete("ArbitraryPeers", "hash = ?", hash);
} catch (SQLException e) {
throw new DataException("Unable to delete arbitrary peer data from repository", e);
}
}
}

View File

@@ -40,8 +40,8 @@ public class HSQLDBDatabaseArchiving {
return false;
}
LOGGER.info("Building block archive - this process could take a while... (approx. 15 mins on high spec)");
SplashFrame.getInstance().updateStatus("Building block archive (takes 60+ mins)...");
LOGGER.info("Building block archive - this process could take a while...");
SplashFrame.getInstance().updateStatus("Building block archive...");
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
int startHeight = 0;

View File

@@ -959,6 +959,11 @@ public class HSQLDBDatabaseUpdates {
stmt.execute("CREATE INDEX SellNameNameIndex ON SellNameTransactions (name)");
break;
case 41:
// Drop the ArbitraryPeers table as it's no longer needed
stmt.execute("DROP TABLE ArbitraryPeers");
break;
default:
// nothing to do
return false;

View File

@@ -26,6 +26,7 @@ import org.qortal.controller.arbitrary.ArbitraryDataStorageManager.*;
import org.qortal.crosschain.Bitcoin.BitcoinNet;
import org.qortal.crosschain.Litecoin.LitecoinNet;
import org.qortal.crosschain.Dogecoin.DogecoinNet;
import org.qortal.crosschain.Ravencoin.RavencoinNet;
import org.qortal.utils.EnumUtils;
// All properties to be converted to JSON via JAXB
@@ -190,7 +191,7 @@ public class Settings {
/** Maximum number of peer connections we allow. */
private int maxPeers = 32;
/** Maximum number of threads for network engine. */
private int maxNetworkThreadPoolSize = 20;
private int maxNetworkThreadPoolSize = 32;
/** Maximum number of threads for network proof-of-work compute, used during handshaking. */
private int networkPoWComputePoolSize = 2;
/** Maximum number of retry attempts if a peer fails to respond with the requested data */
@@ -222,6 +223,7 @@ public class Settings {
private BitcoinNet bitcoinNet = BitcoinNet.MAIN;
private LitecoinNet litecoinNet = LitecoinNet.MAIN;
private DogecoinNet dogecoinNet = DogecoinNet.MAIN;
private RavencoinNet ravencoinNet = RavencoinNet.MAIN;
// Also crosschain-related:
/** Whether to show SysTray pop-up notifications when trade-bot entries change state */
private boolean tradebotSystrayEnabled = false;
@@ -245,7 +247,6 @@ public class Settings {
private String[] bootstrapHosts = new String[] {
"http://bootstrap.qortal.org",
"http://bootstrap2.qortal.org",
"http://81.169.136.59",
"http://62.171.190.193"
};
@@ -681,6 +682,10 @@ public class Settings {
return this.dogecoinNet;
}
public RavencoinNet getRavencoinNet() {
return this.ravencoinNet;
}
public boolean isTradebotSystrayEnabled() {
return this.tradebotSystrayEnabled;
}
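Since ravencoinNet follows the same pattern as the existing bitcoinNet/litecoinNet/dogecoinNet settings, it can presumably be overridden in settings.json with "ravencoinNet": "MAIN" (or a test-network value) in the same way. In code the new accessor is read like the others - a one-line sketch assuming the usual Settings singleton:
// Hypothetical caller; getRavencoinNet() defaults to RavencoinNet.MAIN as declared above.
RavencoinNet ravencoinNet = Settings.getInstance().getRavencoinNet();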

View File

@@ -14,9 +14,6 @@ import org.qortal.data.PaymentData;
import org.qortal.data.naming.NameData;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.network.Network;
import org.qortal.network.message.ArbitrarySignaturesMessage;
import org.qortal.network.message.Message;
import org.qortal.payment.Payment;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
@@ -222,15 +219,6 @@ public class ArbitraryTransaction extends Transaction {
if (arbitraryTransactionData.getName() != null) {
ArbitraryDataManager.getInstance().invalidateCache(arbitraryTransactionData);
}
// We also need to broadcast to the network that we are now hosting files for this transaction,
// but only if these files are in accordance with our storage policy
if (ArbitraryDataStorageManager.getInstance().canStoreData(arbitraryTransactionData)) {
// Use a null peer address to indicate our own
byte[] signature = arbitraryTransactionData.getSignature();
Message arbitrarySignatureMessage = new ArbitrarySignaturesMessage(null, 0, Arrays.asList(signature));
Network.getInstance().broadcast(broadcastPeer -> arbitrarySignatureMessage);
}
}
}

View File

@@ -29,7 +29,7 @@ public class ChatTransaction extends Transaction {
public static final int MAX_DATA_SIZE = 256;
public static final int POW_BUFFER_SIZE = 8 * 1024 * 1024; // bytes
public static final int POW_DIFFICULTY_WITH_QORT = 8; // leading zero bits
public static final int POW_DIFFICULTY_NO_QORT = 14; // leading zero bits
public static final int POW_DIFFICULTY_NO_QORT = 12; // leading zero bits
// Constructors

View File

@@ -13,6 +13,8 @@ import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.account.Account;
import org.qortal.controller.Controller;
import org.qortal.controller.OnlineAccountsManager;
import org.qortal.controller.tradebot.TradeBot;
import org.qortal.crosschain.ACCT;
import org.qortal.crosschain.SupportedBlockchain;
import org.qortal.crypto.Crypto;
@@ -47,7 +49,7 @@ public class PresenceTransaction extends Transaction {
REWARD_SHARE(0) {
@Override
public long getLifetime() {
return Controller.ONLINE_TIMESTAMP_MODULUS;
return OnlineAccountsManager.ONLINE_TIMESTAMP_MODULUS;
}
},
TRADE_BOT(1) {
@@ -183,7 +185,7 @@ public class PresenceTransaction extends Transaction {
String signerAddress = Crypto.toAddress(this.transactionData.getCreatorPublicKey());
for (ATData atData : atsData) {
ByteArray atCodeHash = new ByteArray(atData.getCodeHash());
ByteArray atCodeHash = ByteArray.wrap(atData.getCodeHash());
Supplier<ACCT> acctSupplier = acctSuppliersByCodeHash.get(atCodeHash);
if (acctSupplier == null)
continue;
@@ -191,12 +193,16 @@ public class PresenceTransaction extends Transaction {
CrossChainTradeData crossChainTradeData = acctSupplier.get().populateTradeData(repository, atData);
// OK if signer's public key (in address form) matches Bob's trade public key (in address form)
if (signerAddress.equals(crossChainTradeData.qortalCreatorTradeAddress))
if (signerAddress.equals(crossChainTradeData.qortalCreatorTradeAddress)) {
TradeBot.getInstance().bridgePresence(this.presenceTransactionData.getTimestamp(), this.transactionData.getCreatorPublicKey(), timestampSignature, atData.getATAddress());
return ValidationResult.OK;
}
// OK if signer's public key (in address form) matches Alice's trade public key (in address form)
if (signerAddress.equals(crossChainTradeData.qortalPartnerAddress))
if (signerAddress.equals(crossChainTradeData.qortalPartnerAddress)) {
TradeBot.getInstance().bridgePresence(this.presenceTransactionData.getTimestamp(), this.transactionData.getCreatorPublicKey(), timestampSignature, atData.getATAddress());
return ValidationResult.OK;
}
}
return ValidationResult.AT_UNKNOWN;
@@ -204,6 +210,9 @@ public class PresenceTransaction extends Transaction {
@Override
public boolean isSignatureValid() {
return false;
/*
byte[] signature = this.transactionData.getSignature();
if (signature == null)
return false;
@@ -226,6 +235,7 @@ public class PresenceTransaction extends Transaction {
// Check nonce
return MemoryPoW.verify2(transactionBytes, POW_BUFFER_SIZE, POW_DIFFICULTY, nonce);
*/
}
/**

View File

@@ -39,11 +39,7 @@ public class RegisterNameTransaction extends Transaction {
@Override
public long getUnitFee(Long timestamp) {
// Use a higher unit fee after the fee increase timestamp
if (timestamp > BlockChain.getInstance().getNameRegistrationUnitFeeTimestamp()) {
return BlockChain.getInstance().getNameRegistrationUnitFee();
}
return BlockChain.getInstance().getUnitFee();
return BlockChain.getInstance().getNameRegistrationUnitFeeAtTimestamp(timestamp);
}
// Navigation

View File

@@ -118,10 +118,13 @@ public class UpdateNameTransaction extends Transaction {
if (!owner.getAddress().equals(nameData.getOwner()))
return ValidationResult.INVALID_NAME_OWNER;
// Check new name isn't already taken, unless it is the same name (this allows for case-adjusting renames)
NameData newNameData = this.repository.getNameRepository().fromReducedName(this.updateNameTransactionData.getReducedNewName());
if (newNameData != null && !newNameData.getName().equals(nameData.getName()))
return ValidationResult.NAME_ALREADY_REGISTERED;
// Additional checks if transaction intends to change name
if (!this.updateNameTransactionData.getNewName().isEmpty()) {
// Check new name isn't already taken, unless it is the same name (this allows for case-adjusting renames)
NameData newNameData = this.repository.getNameRepository().fromReducedName(this.updateNameTransactionData.getReducedNewName());
if (newNameData != null && !newNameData.getName().equals(nameData.getName()))
return ValidationResult.NAME_ALREADY_REGISTERED;
}
return ValidationResult.OK;
}

View File

@@ -8,12 +8,16 @@ public class ByteArray implements Comparable<ByteArray> {
private int hash;
public final byte[] value;
public ByteArray(byte[] value) {
this.value = Objects.requireNonNull(value);
private ByteArray(byte[] value) {
this.value = value;
}
public static ByteArray of(byte[] value) {
return new ByteArray(value);
public static ByteArray wrap(byte[] value) {
return new ByteArray(Objects.requireNonNull(value));
}
public static ByteArray copyOf(byte[] value) {
return new ByteArray(Arrays.copyOf(value, value.length));
}
@Override
@@ -36,12 +40,7 @@ public class ByteArray implements Comparable<ByteArray> {
byte[] val = this.value;
if (h == 0 && val.length > 0) {
h = 1;
for (int i = 0; i < val.length; ++i)
h = 31 * h + val[i];
this.hash = h;
this.hash = h = Arrays.hashCode(val);
}
return h;
}
@@ -53,24 +52,7 @@ public class ByteArray implements Comparable<ByteArray> {
}
public int compareToPrimitive(byte[] otherValue) {
byte[] val = this.value;
if (val.length < otherValue.length)
return -1;
if (val.length > otherValue.length)
return 1;
for (int i = 0; i < val.length; ++i) {
int a = val[i] & 0xFF;
int b = otherValue[i] & 0xFF;
if (a < b)
return -1;
if (a > b)
return 1;
}
return 0;
return Arrays.compareUnsigned(this.value, otherValue);
}
public String toString() {
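The renamed factories make the aliasing behaviour explicit: wrap() keeps a reference to the caller's array, so later mutation of that array changes the ByteArray's equality and ordering, while copyOf() takes a defensive copy. An illustrative fragment, not from the codebase:
// Hypothetical demo of wrap() versus copyOf().
static void demoWrapVersusCopyOf() {
    byte[] raw = new byte[] { 1, 2, 3 };

    ByteArray wrapped = ByteArray.wrap(raw);   // shares 'raw'
    ByteArray copied = ByteArray.copyOf(raw);  // independent copy

    raw[0] = 99; // 'wrapped' now compares/equals differently, 'copied' is unaffected -
                 // so wrap() is only safe when the caller never modifies the array afterwards.
}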

View File

@@ -114,8 +114,10 @@ public abstract class ExecuteProduceConsume implements Runnable {
if (this.activeThreadCount > this.greatestActiveThreadCount)
this.greatestActiveThreadCount = this.activeThreadCount;
this.logger.trace(() -> String.format("[%d] started, hasThreadPending was: %b, activeThreadCount now: %d",
Thread.currentThread().getId(), this.hasThreadPending, this.activeThreadCount));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] started, hasThreadPending was: %b, activeThreadCount now: %d",
Thread.currentThread().getId(), this.hasThreadPending, this.activeThreadCount));
}
// Defer clearing hasThreadPending to prevent unnecessary threads waiting to produce...
wasThreadPending = this.hasThreadPending;
@@ -128,7 +130,9 @@ public abstract class ExecuteProduceConsume implements Runnable {
while (!Thread.currentThread().isInterrupted()) {
Task task = null;
this.logger.trace(() -> String.format("[%d] waiting to produce...", Thread.currentThread().getId()));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] waiting to produce...", Thread.currentThread().getId()));
}
synchronized (this) {
if (wasThreadPending) {
@@ -138,8 +142,10 @@ public abstract class ExecuteProduceConsume implements Runnable {
}
final boolean lambdaCanIdle = canBlock;
this.logger.trace(() -> String.format("[%d] producing, activeThreadCount: %d, consumerCount: %d, canBlock is %b...",
Thread.currentThread().getId(), this.activeThreadCount, this.consumerCount, lambdaCanIdle));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] producing, activeThreadCount: %d, consumerCount: %d, canBlock is %b...",
Thread.currentThread().getId(), this.activeThreadCount, this.consumerCount, lambdaCanIdle));
}
final long beforeProduce = isLoggerTraceEnabled ? System.currentTimeMillis() : 0;
@@ -152,18 +158,24 @@ public abstract class ExecuteProduceConsume implements Runnable {
this.logger.warn(() -> String.format("[%d] exception while trying to produce task", Thread.currentThread().getId()), e);
}
this.logger.trace(() -> String.format("[%d] producing took %dms", Thread.currentThread().getId(), System.currentTimeMillis() - beforeProduce));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] producing took %dms", Thread.currentThread().getId(), System.currentTimeMillis() - beforeProduce));
}
}
if (task == null)
synchronized (this) {
this.logger.trace(() -> String.format("[%d] no task, activeThreadCount: %d, consumerCount: %d",
Thread.currentThread().getId(), this.activeThreadCount, this.consumerCount));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] no task, activeThreadCount: %d, consumerCount: %d",
Thread.currentThread().getId(), this.activeThreadCount, this.consumerCount));
}
if (this.activeThreadCount > this.consumerCount + 1) {
--this.activeThreadCount;
this.logger.trace(() -> String.format("[%d] ending, activeThreadCount now: %d",
Thread.currentThread().getId(), this.activeThreadCount));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] ending, activeThreadCount now: %d",
Thread.currentThread().getId(), this.activeThreadCount));
}
return;
}
@@ -180,12 +192,16 @@ public abstract class ExecuteProduceConsume implements Runnable {
++this.tasksProduced;
++this.consumerCount;
this.logger.trace(() -> String.format("[%d] hasThreadPending: %b, activeThreadCount: %d, consumerCount now: %d",
Thread.currentThread().getId(), this.hasThreadPending, this.activeThreadCount, this.consumerCount));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] hasThreadPending: %b, activeThreadCount: %d, consumerCount now: %d",
Thread.currentThread().getId(), this.hasThreadPending, this.activeThreadCount, this.consumerCount));
}
// If we have no thread pending and no excess of threads then we should spawn a fresh thread
if (!this.hasThreadPending && this.activeThreadCount <= this.consumerCount + 1) {
this.logger.trace(() -> String.format("[%d] spawning another thread", Thread.currentThread().getId()));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] spawning another thread", Thread.currentThread().getId()));
}
this.hasThreadPending = true;
try {
@@ -193,15 +209,21 @@ public abstract class ExecuteProduceConsume implements Runnable {
} catch (RejectedExecutionException e) {
++this.spawnFailures;
this.hasThreadPending = false;
this.logger.trace(() -> String.format("[%d] failed to spawn another thread", Thread.currentThread().getId()));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] failed to spawn another thread", Thread.currentThread().getId()));
}
this.onSpawnFailure();
}
} else {
this.logger.trace(() -> String.format("[%d] NOT spawning another thread", Thread.currentThread().getId()));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] NOT spawning another thread", Thread.currentThread().getId()));
}
}
}
this.logger.trace(() -> String.format("[%d] performing task...", Thread.currentThread().getId()));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] performing task...", Thread.currentThread().getId()));
}
try {
task.perform(); // This can block for a while
@@ -212,14 +234,18 @@ public abstract class ExecuteProduceConsume implements Runnable {
this.logger.warn(() -> String.format("[%d] exception while performing task", Thread.currentThread().getId()), e);
}
this.logger.trace(() -> String.format("[%d] finished task", Thread.currentThread().getId()));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] finished task", Thread.currentThread().getId()));
}
synchronized (this) {
++this.tasksConsumed;
--this.consumerCount;
this.logger.trace(() -> String.format("[%d] consumerCount now: %d",
Thread.currentThread().getId(), this.consumerCount));
if (this.isLoggerTraceEnabled) {
this.logger.trace(() -> String.format("[%d] consumerCount now: %d",
Thread.currentThread().getId(), this.consumerCount));
}
// Quicker, non-blocking produce next round
canBlock = false;
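These trace statements already defer String.format() behind a Supplier, but each call site still allocates a fresh capturing lambda on every pass, so guarding them with the cached isLoggerTraceEnabled flag removes even that overhead from the hot loop. The pattern in isolation (a sketch, not the original class):
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
class TraceGuardDemo {
    private static final Logger LOGGER = LogManager.getLogger(TraceGuardDemo.class);
    private final boolean isLoggerTraceEnabled = LOGGER.isTraceEnabled(); // sampled once

    void work(int activeThreadCount) {
        if (this.isLoggerTraceEnabled) {
            // The Supplier already defers formatting; the guard also skips creating
            // the capturing lambda when trace logging is disabled.
            LOGGER.trace(() -> String.format("[%d] activeThreadCount: %d",
                    Thread.currentThread().getId(), activeThreadCount));
        }
    }
}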

View File

@@ -18,6 +18,9 @@ import java.util.TreeMap;
import com.google.common.base.CharMatcher;
import com.ibm.icu.text.CaseMap;
import com.ibm.icu.text.Normalizer2;
import com.ibm.icu.text.UnicodeSet;
import net.codebox.homoglyph.HomoglyphBuilder;
public abstract class Unicode {
@@ -31,6 +34,8 @@ public abstract class Unicode {
public static final String ZERO_WIDTH_NO_BREAK_SPACE = "\ufeff";
public static final CharMatcher ZERO_WIDTH_CHAR_MATCHER = CharMatcher.anyOf(ZERO_WIDTH_SPACE + ZERO_WIDTH_NON_JOINER + ZERO_WIDTH_JOINER + WORD_JOINER + ZERO_WIDTH_NO_BREAK_SPACE);
private static final UnicodeSet removableUniset = new UnicodeSet("[[:Mark:][:Other:]]").freeze();
private static int[] homoglyphCodePoints;
private static int[] reducedCodePoints;
@@ -59,7 +64,7 @@ public abstract class Unicode {
public static String normalize(String input) {
String output;
// Normalize
// Normalize using NFKC to recompose in canonical form
output = Normalizer.normalize(input, Form.NFKC);
// Remove zero-width code-points, used for rendering
@@ -91,8 +96,8 @@ public abstract class Unicode {
public static String sanitize(String input) {
String output;
// Normalize
output = Normalizer.normalize(input, Form.NFKD);
// Normalize using NFKD to decompose into individual combining code points
output = Normalizer2.getNFKDInstance().normalize(input);
// Remove zero-width code-points, used for rendering
output = removeZeroWidth(output);
@@ -100,11 +105,11 @@ public abstract class Unicode {
// Normalize whitespace
output = CharMatcher.whitespace().trimAndCollapseFrom(output, ' ');
// Remove accents, combining marks
output = output.replaceAll("[\\p{M}\\p{C}]", "");
// Remove accents, combining marks - see https://www.unicode.org/reports/tr44/#GC_Values_Table
output = removableUniset.stripFrom(output, true);
// Convert to lowercase
output = output.toLowerCase(Locale.ROOT);
output = CaseMap.toLower().apply(Locale.ROOT, output);
// Reduce homoglyphs
output = reduceHomoglyphs(output);
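The sanitize() changes swap the regex-based mark/control stripping for ICU's UnicodeSet, and use ICU's Normalizer2 and CaseMap directly. In isolation, those three ICU calls look like this - a sketch assuming ICU4J is on the classpath, mirroring only the lines changed above rather than the whole pipeline:
import com.ibm.icu.text.CaseMap;
import com.ibm.icu.text.Normalizer2;
import com.ibm.icu.text.UnicodeSet;
import java.util.Locale;
// Illustrative sketch of the ICU calls introduced above; not the full Unicode.sanitize().
class UnicodeSanitizeDemo {
    private static final UnicodeSet REMOVABLE = new UnicodeSet("[[:Mark:][:Other:]]").freeze();

    static String sanitizeFragment(String input) {
        // NFKD decomposes characters so combining marks become separate code points...
        String output = Normalizer2.getNFKDInstance().normalize(input);
        // ...which this set then strips (true = remove code points that match the set)...
        output = REMOVABLE.stripFrom(output, true);
        // ...before a locale-independent lowercase via ICU's CaseMap.
        return CaseMap.toLower().apply(Locale.ROOT, output);
    }
}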

View File

@@ -4,8 +4,9 @@
"maxBlockSize": 2097152,
"maxBytesPerUnitFee": 1024,
"unitFee": "0.001",
"nameRegistrationUnitFee": "5",
"nameRegistrationUnitFeeTimestamp": 1645372800000,
"nameRegistrationUnitFees": [
{ "timestamp": 1645372800000, "fee": "5" }
],
"useBrokenMD160ForAddresses": false,
"requireGroupForApproval": false,
"defaultGroupId": 0,

View File

@@ -81,4 +81,3 @@ ORDER_SIZE_TOO_SMALL = order amount too low
FILE_NOT_FOUND = Datei nicht gefunden
NO_REPLY = peer did not reply with data

View File

@@ -68,7 +68,7 @@ ORDER_UNKNOWN = unknown asset order ID
GROUP_UNKNOWN = group unknown
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blokchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blockchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = insufficient balance on foreign blockchain
@@ -80,8 +80,4 @@ ORDER_SIZE_TOO_SMALL = order amount too low
### Data ###
FILE_NOT_FOUND = file not found
ORDER_SIZE_TOO_SMALL = order size too small
FILE_NOT_FOUND = file not found
NO_REPLY = peer didn't reply within the allowed time

View File

@@ -0,0 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "es",
### Common ###
JSON = no se pudo analizar el mensaje JSON
INSUFFICIENT_BALANCE = saldo insuficiente
UNAUTHORIZED = Llamada API no autorizada
REPOSITORY_ISSUE = error de repositorio
NON_PRODUCTION = esta llamada API no está permitida para sistemas de producción
BLOCKCHAIN_NEEDS_SYNC = blockchain necesita sincronizarse primero
NO_TIME_SYNC = aún no hay sincronización de reloj
### Validation ###
INVALID_SIGNATURE = firma no válida
INVALID_ADDRESS = dirección no válida
INVALID_PUBLIC_KEY = clave pública no válida
INVALID_DATA = datos no válidos
INVALID_NETWORK_ADDRESS = dirección de red no válida
ADDRESS_UNKNOWN = dirección de cuenta desconocida
INVALID_CRITERIA = criterio de búsqueda no válido
INVALID_REFERENCE = referencia no válida
TRANSFORMATION_ERROR = no se pudo transformar JSON en transacción
INVALID_PRIVATE_KEY = clave privada no válida
INVALID_HEIGHT = altura de bloque no válida
CANNOT_MINT = la cuenta no puede acuñar
### Blocks ###
BLOCK_UNKNOWN = bloque desconocido
### Transactions ###
TRANSACTION_UNKNOWN = transacción desconocida
PUBLIC_KEY_NOT_FOUND = clave pública no encontrada
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = transacción no válida: %s (%s)
### Naming ###
NAME_UNKNOWN = nombre desconocido
### Asset ###
INVALID_ASSET_ID = ID de recurso no válido
INVALID_ORDER_ID = ID de pedido de activo no válido
ORDER_UNKNOWN = ID de pedido de activo desconocido
### Groups ###
GROUP_UNKNOWN = grupo desconocido
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = problema de cadena de bloques extranjera o red ElectrumX
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = saldo insuficiente en blockchain extranjera
FOREIGN_BLOCKCHAIN_TOO_SOON = demasiado pronto para transmitir transacciones de blockchain extranjeras (LockTime/mediana de tiempo de bloqueo)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = importe del pedido demasiado bajo
### Data ###
FILE_NOT_FOUND = archivo no encontrado
NO_REPLY = el compañero no respondió dentro del tiempo permitido

View File

@@ -1,10 +1,7 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# Kielen muuttaminen suomeksi tapahtuu settings.json-tiedostossa
#
# "localeLang": "fi",
# muista pilkku lopussa jos komento ei ole viimeisellä rivillä
### Common ###
JSON = JSON-viestin jaottelu epäonnistui
@@ -83,4 +80,4 @@ ORDER_SIZE_TOO_SMALL = order amount too low
### Data ###
FILE_NOT_FOUND = file not found
NO_REPLY = peer did not reply with data
NO_REPLY = peer did not reply with data

View File

@@ -1,24 +1,46 @@
### Commun ###
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "fr",
### Commun ###
JSON = échec de l'analyse du message JSON
INSUFFICIENT_BALANCE = balance insuffisante
UNAUTHORIZED = appel de l'API non autorisé
REPOSITORY_ISSUE = erreur de dépôt
NON_PRODUCTION = cet appel API n'est pas autorisé pour les systèmes en production
BLOCKCHAIN_NEEDS_SYNC = la blockchain doit d'abord être synchronisée
NO_TIME_SYNC = heure pas encore synchronisée
### Validation ###
INVALID_SIGNATURE = signature invalide
INVALID_ADDRESS = adresse invalide
INVALID_PUBLIC_KEY = clé publique invalide
INVALID_DATA = données invalides
INVALID_NETWORK_ADDRESS = adresse réseau invalide
ADDRESS_UNKNOWN = adresse de compte inconnue
INVALID_CRITERIA = critère de recherche invalide
INVALID_REFERENCE = référence invalide
TRANSFORMATION_ERROR = ne peut pas transformer JSON en transaction
INVALID_PRIVATE_KEY = clé privée invalide
INVALID_HEIGHT = hauteur de bloc invalide
CANNOT_MINT = le compte ne peut pas mint
### Blocks ###
@@ -26,6 +48,7 @@ BLOCK_UNKNOWN = bloc inconnu
### Transactions ###
TRANSACTION_UNKNOWN = opération inconnue
PUBLIC_KEY_NOT_FOUND = clé publique introuvable
# celui-ci est spécial dans le sens où l'appelant doit passer deux chaînes supplémentaires, d'où les deux %s
@@ -36,7 +59,9 @@ NAME_UNKNOWN = nom inconnu
### Asset ###
INVALID_ASSET_ID = identifiant d'actif invalide
INVALID_ORDER_ID = identifiant de commande d'actif non valide
ORDER_UNKNOWN = identifiant d'ordre d'actif inconnu
### Groupes ###
@@ -44,7 +69,9 @@ GROUP_UNKNOWN = groupe inconnu
### Blockchain étrangère ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = Problème blokchain étrangère ou de réseau ElectrumX
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = solde insuffisant sur la blockchain étrangère
FOREIGN_BLOCKCHAIN_TOO_SOON = trop tôt pour diffuser la transaction sur la blockchain étrangère (temps de verrouillage/temps de bloc médian)
### Portail de trading ###
@@ -52,4 +79,5 @@ ORDER_SIZE_TOO_SMALL = montant de commande trop bas
### Données ###
FILE_NOT_FOUND = fichier introuvable
NO_REPLY = le pair n'a pas renvoyé de données
NO_REPLY = le pair n'a pas renvoyé de données

View File

@@ -1,8 +1,5 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# Magyar myelvre forditotta: Szkíta (Scythian). 2021 Augusztus 7.
# Az alkalmazás nyelvének magyarra való változtatása a settings.json oldalon történik.
# Keys are from api.ApiError enum # Magyar nyelvre forditotta: Szkíta (Scythian). 2021 Augusztus 7.
# "localeLang": "hu",
@@ -15,7 +12,7 @@ UNAUTHORIZED = nem engedélyezett API-hívás
REPOSITORY_ISSUE = adattári hiba
NON_PRODUCTION = ez az API-hívás nem engedélyezett korlátozott rendszereken
NON_PRODUCTION = ez az API-hívás nem engedélyezett éles rendszereken
BLOCKCHAIN_NEEDS_SYNC = a blokkláncnak még szinkronizálnia kell
@@ -24,15 +21,15 @@ NO_TIME_SYNC = az óraszinkronizálás még nem történt meg
### Validation ###
INVALID_SIGNATURE = érvénytelen aláírás
INVALID_ADDRESS = érvénytelen fiók cím
INVALID_ADDRESS = érvénytelen fiókcím
INVALID_PUBLIC_KEY = érvénytelen nyilvános kulcs
INVALID_DATA = érvénytelen adat
INVALID_NETWORK_ADDRESS = érvénytelen hálózat cím
INVALID_NETWORK_ADDRESS = érvénytelen hálózatcím
ADDRESS_UNKNOWN = ismeretlen fiók cím
ADDRESS_UNKNOWN = ismeretlen fiókcím
INVALID_CRITERIA = érvénytelen keresési feltétel
@@ -83,4 +80,4 @@ ORDER_SIZE_TOO_SMALL = rendelési összeg túl alacsony
### Data ###
FILE_NOT_FOUND = fájl nem található
NO_REPLY = a másik csomópont nem válaszolt
NO_REPLY = a másik csomópont nem válaszolt

View File

@@ -1,26 +1,22 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# Italian translation by Pabs 2021
# Keys are from api.ApiError enum # Italian translation by Pabs 2021
# La modifica della lingua dell'UI è fatta nel file Settings.json
#
# "localeLang": "it",
# Si prega ricordare la virgola alla fine, se questo comando non è sull'ultima riga
### Common ###
JSON = Impossibile analizzare il messaggio JSON
INSUFFICIENT_BALANCE = insufficient balance
INSUFFICIENT_BALANCE = bilancio insufficiente
UNAUTHORIZED = Chiamata API non autorizzata
REPOSITORY_ISSUE = errore del repositorio
REPOSITORY_ISSUE = errore del repository
NON_PRODUCTION = questa chiamata API non è consentita per i sistemi di produzione
BLOCKCHAIN_NEEDS_SYNC = blockchain deve prima sincronizzarsi
BLOCKCHAIN_NEEDS_SYNC = la blockchain deve sincronizzarsi
NO_TIME_SYNC = nessuna sincronizzazione dell'orologio ancora
NO_TIME_SYNC = nessuna sincronizzazione
### Validation ###
INVALID_SIGNATURE = firma non valida
@@ -39,7 +35,7 @@ INVALID_CRITERIA = criteri di ricerca non validi
INVALID_REFERENCE = riferimento non valido
TRANSFORMATION_ERROR = non è stato possibile trasformare JSON in transazione
TRANSFORMATION_ERROR = non è stato possibile trasformare il JSON
INVALID_PRIVATE_KEY = chiave privata non valida
@@ -62,26 +58,26 @@ TRANSACTION_INVALID = transazione non valida: %s (%s)
NAME_UNKNOWN = nome sconosciuto
### Asset ###
INVALID_ASSET_ID = identificazione risorsa non valida
INVALID_ASSET_ID = risorsa non valida
INVALID_ORDER_ID = identificazione di ordine di risorsa non valida
INVALID_ORDER_ID = ordine di risorsa non valida
ORDER_UNKNOWN = identificazione di ordine di risorsa sconosciuta
ORDER_UNKNOWN = ordine di risorsa sconosciuta
### Groups ###
GROUP_UNKNOWN = gruppo sconosciuto
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blokchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = problema nella blockchain esterna o nella rete ElectrumX
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = insufficient balance on foreign blockchain
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = bilancio insufficiente nella blockchain esterna
FOREIGN_BLOCKCHAIN_TOO_SOON = too soon to broadcast foreign blockchain transaction (LockTime/median block time)
FOREIGN_BLOCKCHAIN_TOO_SOON = troppo presto per distribuire la transazione (sospensione LockTime/median)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = order amount too low
ORDER_SIZE_TOO_SMALL = quantità d'ordine troppo bassa
### Data ###
FILE_NOT_FOUND = file not found
NO_REPLY = peer did not reply with data
NO_REPLY = il peer non ha fornito dati

View File

@@ -41,7 +41,7 @@ INVALID_PRIVATE_KEY = ongeldige private key
INVALID_HEIGHT = ongeldige blokhoogte
CANNOT_MINT = account kan niet munten
CANNOT_MINT = account kan niet minten
### Blocks ###
BLOCK_UNKNOWN = blok onbekend
@@ -68,16 +68,16 @@ ORDER_UNKNOWN = onbekende asset order ID
GROUP_UNKNOWN = onbekende groep
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blokchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = blockchain of ElectrumX network probleem
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = insufficient balance on foreign blockchain
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = onvoldoende saldo blockchain
FOREIGN_BLOCKCHAIN_TOO_SOON = too soon to broadcast foreign blockchain transaction (LockTime/median block time)
FOREIGN_BLOCKCHAIN_TOO_SOON = nog niet gereed om de blockchain transactie uittevoeren (LockTime/median block time)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = order amount too low
ORDER_SIZE_TOO_SMALL = order bedrag te laag
### Data ###
FILE_NOT_FOUND = file not found
FILE_NOT_FOUND = file niet gevonden
NO_REPLY = peer did not reply with data
NO_REPLY = peer reageerd niet met data

View File

@@ -1,83 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "ru",
### Common ###
JSON = не удалось разобрать сообщение json
INSUFFICIENT_BALANCE = insufficient balance
UNAUTHORIZED = вызов API не авторизован
REPOSITORY_ISSUE = ошибка репозитория
NON_PRODUCTION = этот вызов API не разрешен для производственных систем
BLOCKCHAIN_NEEDS_SYNC = блокчейн должен сначала синхронизироваться
NO_TIME_SYNC = no clock synchronization yet
### Validation ###
INVALID_SIGNATURE = недействительная подпись
INVALID_ADDRESS = неизвестный адрес
INVALID_PUBLIC_KEY = недействительный открытый ключ
INVALID_DATA = неверные данные
INVALID_NETWORK_ADDRESS = неверный сетевой адрес
ADDRESS_UNKNOWN = неизвестная учетная запись
INVALID_CRITERIA = неверные критерии поиска
INVALID_REFERENCE = неверная ссылка
TRANSFORMATION_ERROR = не удалось преобразовать JSON в транзакцию
INVALID_PRIVATE_KEY = неверный приватный ключ
INVALID_HEIGHT = недопустимая высота блока
CANNOT_MINT = аккаунт не может чеканить
### Blocks ###
BLOCK_UNKNOWN = неизвестный блок
### Transactions ###
TRANSACTION_UNKNOWN = транзакция неизвестна
PUBLIC_KEY_NOT_FOUND = открытый ключ не найден
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = транзакция недействительна: %s (%s)
### Naming ###
NAME_UNKNOWN = имя неизвестно
### Asset ###
INVALID_ASSET_ID = неверный идентификатор актива
INVALID_ORDER_ID = неверный идентификатор заказа актива
ORDER_UNKNOWN = неизвестный идентификатор заказа актива
### Groups ###
GROUP_UNKNOWN = неизвестная группа
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = foreign blokchain or ElectrumX network issue
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = insufficient balance on foreign blockchain
FOREIGN_BLOCKCHAIN_TOO_SOON = too soon to broadcast foreign blockchain transaction (LockTime/median block time)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = order amount too low
### Data ###
FILE_NOT_FOUND = file not found
NO_REPLY = peer did not reply with data
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "ru",
### Common ###
JSON = не удалось разобрать сообщение json
INSUFFICIENT_BALANCE = недостаточный баланс
UNAUTHORIZED = вызов API не авторизован
REPOSITORY_ISSUE = ошибка репозитория
NON_PRODUCTION = этот вызов API не разрешен для производственных систем
BLOCKCHAIN_NEEDS_SYNC = блокчейн должен сначала синхронизироваться
NO_TIME_SYNC = пока нет синхронизации часов
### Validation ###
INVALID_SIGNATURE = недействительная подпись
INVALID_ADDRESS = неизвестный адрес
INVALID_PUBLIC_KEY = недействительный открытый ключ
INVALID_DATA = неверные данные
INVALID_NETWORK_ADDRESS = неверный адрес сети
ADDRESS_UNKNOWN = неизвестная учетная запись
INVALID_CRITERIA = неверные критерии поиска
INVALID_REFERENCE = неверная ссылка
TRANSFORMATION_ERROR = не удалось преобразовать JSON в транзакцию
INVALID_PRIVATE_KEY = неверный приватный ключ
INVALID_HEIGHT = недопустимая высота блока
CANNOT_MINT = аккаунт не может чеканить
### Blocks ###
BLOCK_UNKNOWN = неизвестный блок
### Transactions ###
TRANSACTION_UNKNOWN = транзакция неизвестна
PUBLIC_KEY_NOT_FOUND = открытый ключ не найден
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = транзакция недействительна: %s (%s)
### Naming ###
NAME_UNKNOWN = имя неизвестно
### Asset ###
INVALID_ASSET_ID = неверный идентификатор актива
INVALID_ORDER_ID = неверный идентификатор заказа актива
ORDER_UNKNOWN = неизвестный идентификатор заказа актива
### Groups ###
GROUP_UNKNOWN = неизвестная группа
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = проблема с внешним блокчейном или сетью ElectrumX
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = недостаточный баланс на внешнем блокчейне
FOREIGN_BLOCKCHAIN_TOO_SOON = слишком рано для трансляции транзакции во внений блокчей (время блокировки/среднее время блока)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = слишком маленькая сумма ордера
### Data ###
FILE_NOT_FOUND = файл не найден
NO_REPLY = узел не ответил данными

View File

@@ -0,0 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "sv",
### Common ###
JSON = misslyckades att tolka JSON meddelande
INSUFFICIENT_BALANCE = otillräcklig balans
UNAUTHORIZED = API obehörigt anrop
REPOSITORY_ISSUE = fel i lagret
NON_PRODUCTION = detta API-anrop är inte tillåtet för produktionssystem
BLOCKCHAIN_NEEDS_SYNC = blockchain måste synkroniseras först
NO_TIME_SYNC = ingen klocksynkronisering ännu
### Validation ###
INVALID_SIGNATURE = ogiltig signatur
INVALID_ADDRESS = ogiltig adress
INVALID_PUBLIC_KEY = ogiltig offentlig nyckel
INVALID_DATA = ogiltig data
INVALID_NETWORK_ADDRESS = ogiltig nätverksadress
ADDRESS_UNKNOWN = okänd kontoadress
INVALID_CRITERIA = ogiltiga sökkriterier
INVALID_REFERENCE = ogiltig referens
TRANSFORMATION_ERROR = kunde inte omvandla JSON till en transaktion
INVALID_PRIVATE_KEY = ogiltig privat nyckel
INVALID_HEIGHT = ogiltig blockhöjd
CANNOT_MINT = konto kan inte prägla QORT
### Blocks ###
BLOCK_UNKNOWN = okänt block
### Transactions ###
TRANSACTION_UNKNOWN = okänd transaktion
PUBLIC_KEY_NOT_FOUND = hittade inte en offentlig nyckel
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = ogiltig transaktion: %s (%s)
### Naming ###
NAME_UNKNOWN = okänt namn
### Asset ###
INVALID_ASSET_ID = ogiltigt tillgångs-ID
INVALID_ORDER_ID = ogiltigt tillgångsbeställnings-ID
ORDER_UNKNOWN = okänt tillgångsbeställnings-ID
### Groups ###
GROUP_UNKNOWN = okänd grupp
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = utländsk blockchain eller ElectrumX nätverksproblem
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = otillräcklig balans på utländsk blockchain
FOREIGN_BLOCKCHAIN_TOO_SOON = för tidigt för att sända utländsk blockkedjetransaktion (LockTime/medianblocktid)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = beställningssumman för låg
### Data ###
FILE_NOT_FOUND = filen hittades inte
NO_REPLY = noden svarade inte inom den tillåtna tiden

View File

@@ -0,0 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "zh_CN",
### Common ###
JSON = 无法解析JSON文件信息
INSUFFICIENT_BALANCE = 钱包余额不足
UNAUTHORIZED = 未授权的API指令
REPOSITORY_ISSUE = 数据库错误
NON_PRODUCTION = 此API指令已被节点禁止
BLOCKCHAIN_NEEDS_SYNC = 请先同步区块链
NO_TIME_SYNC = 同步时间失败
### Validation ###
INVALID_SIGNATURE = 无效的签名
INVALID_ADDRESS = 无效的钱包地址
INVALID_PUBLIC_KEY = 无效的公共密钥
INVALID_DATA = 无效的数据
INVALID_NETWORK_ADDRESS = 无效的网络地址
ADDRESS_UNKNOWN = 未知的钱包地址
INVALID_CRITERIA = 无效的搜寻关键词
INVALID_REFERENCE = 无效的参考资料
TRANSFORMATION_ERROR = 未能将JSON文件转换成交易
INVALID_PRIVATE_KEY = 无效的私人密钥
INVALID_HEIGHT = 无效的区块链高度
CANNOT_MINT = 账号不能铸币
### Blocks ###
BLOCK_UNKNOWN = 未知的区块
### Transactions ###
TRANSACTION_UNKNOWN = 未知的交易
PUBLIC_KEY_NOT_FOUND = 找不到有效的公共密钥
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = 无效的交易: %s (%s)
### Naming ###
NAME_UNKNOWN = 未知的名称
### Asset ###
INVALID_ASSET_ID = 无效的资产ID
INVALID_ORDER_ID = 无效的资产交易ID
ORDER_UNKNOWN = 未知的资产交易ID
### Groups ###
GROUP_UNKNOWN = 未知的群组
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = 其他区块链网络出现异常
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = 请确保钱包余额足够(包含支付网络手续费)
FOREIGN_BLOCKCHAIN_TOO_SOON = 执行动作太快了 (LockTime/median block time)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = 交易数量太少
### Data ###
FILE_NOT_FOUND = 档案不存在
NO_REPLY = 其他节点在指定时间内没有回应

View File

@@ -0,0 +1,83 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# Keys are from api.ApiError enum
# "localeLang": "zh_TW",
### Common ###
JSON = 無法解析JSON文件信息
INSUFFICIENT_BALANCE = 錢包餘額不足
UNAUTHORIZED = 未授權的API指令
REPOSITORY_ISSUE = 數據庫錯誤
NON_PRODUCTION = 此API指令已被節點禁止
BLOCKCHAIN_NEEDS_SYNC = 請先同步區塊鏈
NO_TIME_SYNC = 同步時間失敗
### Validation ###
INVALID_SIGNATURE = 無效的簽名
INVALID_ADDRESS = 無效的錢包地址
INVALID_PUBLIC_KEY = 無效的公共密鑰
INVALID_DATA = 無效的數據
INVALID_NETWORK_ADDRESS = 無效的網絡地址
ADDRESS_UNKNOWN = 未知的錢包地址
INVALID_CRITERIA = 無效的搜尋關鍵詞
INVALID_REFERENCE = 無效的參考資料
TRANSFORMATION_ERROR = 未能將JSON文件轉換成交易
INVALID_PRIVATE_KEY = 無效的私人密鑰
INVALID_HEIGHT = 無效的區塊鏈高度
CANNOT_MINT = 賬號不能鑄幣
### Blocks ###
BLOCK_UNKNOWN = 未知的區塊
### Transactions ###
TRANSACTION_UNKNOWN = 未知的交易
PUBLIC_KEY_NOT_FOUND = 找不到有效的公共密鑰
# this one is special in that caller expected to pass two additional strings, hence the two %s
TRANSACTION_INVALID = 無效的交易: %s (%s)
### Naming ###
NAME_UNKNOWN = 未知的名稱
### Asset ###
INVALID_ASSET_ID = 無效的資產ID
INVALID_ORDER_ID = 無效的資產交易ID
ORDER_UNKNOWN = 未知的資產交易ID
### Groups ###
GROUP_UNKNOWN = 未知的群組
### Foreign Blockchain ###
FOREIGN_BLOCKCHAIN_NETWORK_ISSUE = 其他區塊鏈網絡出現異常
FOREIGN_BLOCKCHAIN_BALANCE_ISSUE = 請確保錢包餘額足夠(包含支付網絡手續費)
FOREIGN_BLOCKCHAIN_TOO_SOON = 執行動作太快 (LockTime/median block time)
### Trade Portal ###
ORDER_SIZE_TOO_SMALL = 交易數量太少
### Data ###
FILE_NOT_FOUND = 檔案不存在
NO_REPLY = 其他節點在指定時間内沒有回應

View File

@@ -1,10 +1,10 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# SysTray pop-up menu
AUTO_UPDATE = Automatisches Update
APPLYING_UPDATE_AND_RESTARTING = Automatisches Update anwenden und neu starten …
AUTO_UPDATE = Automatisches Update
BLOCK_HEIGHT = height
BUILD_VERSION = Build-Version
@@ -23,6 +23,8 @@ DB_BACKUP = Datenbank Backup
DB_CHECKPOINT = Datenbank Kontrollpunkt
DB_MAINTENANCE = Datenbank Instandhaltung
EXIT = Verlassen
MINTING_DISABLED = NOT minting
@@ -33,6 +35,8 @@ OPEN_UI = Öffne UI
PERFORMING_DB_CHECKPOINT = Speichern nicht übergebener Datenbank Änderungen …
PERFORMING_DB_MAINTENANCE = Planmäßige Wartung durchführen...
SYNCHRONIZE_CLOCK = Synchronisiere Uhr
SYNCHRONIZING_BLOCKCHAIN = Synchronisierung

View File

@@ -1,10 +1,10 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# SysTray pop-up menu
AUTO_UPDATE = Auto Update
APPLYING_UPDATE_AND_RESTARTING = Applying automatic update and restarting...
AUTO_UPDATE = Auto Update
BLOCK_HEIGHT = height
BUILD_VERSION = Build version
@@ -21,10 +21,10 @@ CREATING_BACKUP_OF_DB_FILES = Creating backup of database files...
DB_BACKUP = Database Backup
DB_MAINTENANCE = Database Maintenance
DB_CHECKPOINT = Database Checkpoint
DB_MAINTENANCE = Database Maintenance
EXIT = Exit
MINTING_DISABLED = NOT minting

View File

@@ -0,0 +1,44 @@
#Generated by ResourceBundle Editor (http://essiembre.github.io/eclipse-rbe/)
# SysTray pop-up menu
APPLYING_UPDATE_AND_RESTARTING = Aplicando actualización automática y reiniciando...
AUTO_UPDATE = Actualización automática
BLOCK_HEIGHT = altura
BUILD_VERSION = Versión de compilación
CHECK_TIME_ACCURACY = Comprobar la precisión del tiempo
CONNECTING = Conectando
CONNECTION = conexión
CONNECTIONS = conexiones
CREATING_BACKUP_OF_DB_FILES = Creando una copia de seguridad de los archivos de la base de datos...
DB_BACKUP = Copia de seguridad de la base de datos
DB_CHECKPOINT = Punto de control de la base de datos
DB_MAINTENANCE = Mantenimiento de la base de datos
EXIT = Salir
MINTING_DISABLED = Acuñación NO habilitada
MINTING_ENABLED = \u2714 Acuñación habilitada
OPEN_UI = IU abierta
PERFORMING_DB_CHECKPOINT = Guardando cambios de base de datos no confirmados...
PERFORMING_DB_MAINTENANCE = Realizando mantenimiento programado...
SYNCHRONIZE_CLOCK = Sincronizar reloj
SYNCHRONIZING_BLOCKCHAIN = Sincronizando
SYNCHRONIZING_CLOCK = Sincronizando reloj

Some files were not shown because too many files have changed in this diff