Aggregated signatures should significantly reduce block payload size,
as well as the associated network, memory & CPU loads.
org.qortal.crypto.BouncyCastle25519 renamed to Qortal25519Extras.
Our class provides additional features such as DH-based shared secrets,
aggregation of public keys & signatures, and sign/verify for aggregate use.
BouncyCastle's Ed25519 class copied in as BouncyCastleEd25519,
but with 'private' modifiers changed to 'protected',
to allow extension by our Qortal25519Extras class,
and to avoid lots of messy reflection-based calls.
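An illustrative sketch of that extension pattern (class internals are simplified; only the access-modifier point is taken from the change itself):

    // The copied BouncyCastleEd25519 keeps BouncyCastle's Ed25519 logic but with
    // 'private' members changed to 'protected' so a subclass can reuse them.
    class BouncyCastleEd25519 {
        // One of the formerly-private helpers (body elided).
        protected static void scalarMultBaseEncoded(byte[] scalar, byte[] result, int offset) {
            // ... curve arithmetic elided ...
        }
    }

    class Qortal25519Extras extends BouncyCastleEd25519 {
        // New features (DH shared secrets, aggregated keys/signatures) can call the
        // inherited protected helpers directly instead of via reflection.
        public static byte[] aggregatePublicKeys(java.util.Collection<byte[]> publicKeys) {
            byte[] aggregated = new byte[32];
            // ... combine the underlying curve points using inherited helpers ...
            return aggregated;
        }
    }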
Slight optimization to BlockMinter by adding OnlineAccountsManager.hasOnlineAccounts():boolean, rather than returning the actual data only to call isEmpty() on it!
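A minimal sketch of that check, with the cache field name and shape assumed rather than taken from the real class:

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    class OnlineAccountsManagerSketch {
        // Assumed shape: online accounts grouped by timestamp.
        private final Map<Long, Set<byte[]>> currentOnlineAccounts = new ConcurrentHashMap<>();

        // Cheap existence check for BlockMinter, instead of copying the full set of
        // online accounts only to call isEmpty() on it.
        public boolean hasOnlineAccounts() {
            return this.currentOnlineAccounts.values().stream().anyMatch(accounts -> !accounts.isEmpty());
        }
    }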
Move online account cache code from Block into OnlineAccountsManager, simplifying Block code and removing the duplicated caches from Block as well.
This tidies up those remaining set-based getters in OnlineAccountsManager.
No need for currentOnlineAccountsHashes's inner Map to be sorted, so addAccounts() now creates a new ConcurrentHashMap instead of a ConcurrentSkipListMap.
Changed GetOnlineAccountsV3Message to use a single byte for the count of hashes, as the count can only be 1 to 256.
A count of 256 is represented by 0.
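A sketch of that encoding (class/method names here are illustrative):

    // Encode/decode a hash count in the range 1..256 as a single byte, where 0 stands for 256.
    public class HashCountByte {
        public static byte encodeCount(int count) {
            if (count < 1 || count > 256)
                throw new IllegalArgumentException("count must be 1 to 256");
            return (byte) (count == 256 ? 0 : count);
        }

        public static int decodeCount(byte encoded) {
            int value = encoded & 0xFF; // treat as unsigned
            return value == 0 ? 256 : value;
        }
    }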
Comments tidy-up.
Change v3 broadcast interval from 10s to 15s.
Adding support for GET_ONLINE_ACCOUNTS_V3 to Controller, which calls OnlineAccountsManager.
With OnlineAccountsV3, instead of sending their full list of known online accounts (public keys),
nodes now send a summary which contains hashes of known online accounts, one per timestamp + leading-byte combo.
Thus outgoing messages are much smaller and scale better with more users.
Remote peers compare the hashes and send back lists of online accounts (for that timestamp + leading-byte combo) where hashes do not match.
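Roughly, the receiving side compares hashes per (timestamp, leading byte) combination and replies with full accounts only where they differ; a simplified sketch, with names and cache shape assumed:

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    class OnlineAccountsV3Sketch {
        // Assumed local cache: timestamp -> (leading byte of public key -> hash of that subset)
        private final Map<Long, Map<Byte, byte[]>> hashesByTimestamp = new HashMap<>();

        // Compare a peer's summary for one timestamp against our own hashes and return
        // the leading bytes whose online-account sets appear to differ; the caller then
        // replies with the full accounts for just those combos.
        public Set<Byte> findMismatchedLeadingBytes(long timestamp, Map<Byte, byte[]> peerHashes) {
            Map<Byte, byte[]> ourHashes = this.hashesByTimestamp.getOrDefault(timestamp, Collections.emptyMap());
            Set<Byte> mismatches = new HashSet<>();

            for (Map.Entry<Byte, byte[]> entry : peerHashes.entrySet()) {
                byte[] ourHash = ourHashes.get(entry.getKey());
                if (ourHash == null || !Arrays.equals(ourHash, entry.getValue()))
                    mismatches.add(entry.getKey());
            }

            return mismatches;
        }
    }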
Massive rewrite of OnlineAccountsManager to maintain online accounts.
Now there are three caches (see the sketch after this list):
1. all online accounts, but split into sets by timestamp
2. 'hashes' of all online accounts, one hash per timestamp+leading-byte combination
Mainly for efficient use by GetOnlineAccountsV3 message constructor.
3. online accounts for the highest blocks on our chain to speed up block processing
Note that highest blocks might be way older than 'current' blocks if we're somewhat behind in syncing.
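A rough sketch of these three caches (field names and exact value types are assumptions, not the real code):

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    class OnlineAccountsCachesSketch {
        // Stand-in for the project's OnlineAccountData class.
        static class OnlineAccountData {}

        // 1. All online accounts, split into sets by timestamp.
        private final Map<Long, Set<OnlineAccountData>> currentOnlineAccounts = new ConcurrentHashMap<>();

        // 2. One hash per (timestamp, leading byte of public key) combination,
        //    mainly so GetOnlineAccountsV3 messages can be built cheaply.
        private final Map<Long, Map<Byte, byte[]>> currentOnlineAccountsHashes = new ConcurrentHashMap<>();

        // 3. Online accounts for the highest blocks on our chain, to speed up block
        //    processing; these timestamps may be much older than 'current' ones if
        //    we're still catching up.
        private final Map<Long, Set<OnlineAccountData>> latestBlocksOnlineAccounts = new ConcurrentHashMap<>();
    }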
Other OnlineAccountsManager changes:
* Use scheduling executor service to manage subtasks
* Switch from 'synchronized' to 'concurrent' collections
* Generally switch from Lists to Sets - requires an improved OnlineAccountData.hashCode() (see the sketch below) - further work needed
* Only send V3 messages to peers with version >= 3.2.203 (for testing)
* More info on which online accounts lists are returned depending on use-cases
To test, change your peer's version (in pom.xml?) to v3.2.203.
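An improved hashCode() could look something like this (a sketch only; the real fields and equality semantics may differ):

    import java.util.Arrays;

    // Placeholder class showing hashCode()/equals() based on the fields that
    // identify an online-account entry, so Set-based de-duplication works.
    class OnlineAccountDataSketch {
        private final long timestamp;
        private final byte[] publicKey;

        OnlineAccountDataSketch(long timestamp, byte[] publicKey) {
            this.timestamp = timestamp;
            this.publicKey = publicKey;
        }

        @Override
        public boolean equals(Object other) {
            if (this == other) return true;
            if (!(other instanceof OnlineAccountDataSketch)) return false;
            OnlineAccountDataSketch o = (OnlineAccountDataSketch) other;
            return this.timestamp == o.timestamp && Arrays.equals(this.publicKey, o.publicKey);
        }

        @Override
        public int hashCode() {
            return Long.hashCode(this.timestamp) * 31 + Arrays.hashCode(this.publicKey);
        }
    }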
Reduced AT state info from per-AT address + state hash + fees to AT count + total AT fees + hash of all AT states.
Modified Block and Controller to support the above. Controller needs more work regarding CachedBlockMessages.
Note that blocks fetched from archive are in old V1 format.
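A sketch of how the per-block AT summary could be derived (the digest choice and inputs are assumptions):

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;
    import java.util.List;

    class AtStatesSummarySketch {
        // Reduce per-AT (address + state hash + fees) detail to a single hash over
        // all AT state hashes; AT count and total AT fees are carried separately.
        static byte[] hashAllAtStates(List<byte[]> atStateHashes) throws NoSuchAlgorithmException {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            for (byte[] stateHash : atStateHashes)
                digest.update(stateHash);
            return digest.digest();
        }
    }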
Changed Triple<BlockData, List<TransactionData>, List<ATStateData>> to BlockTransformation to support both V1 and V2 forms.
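The replacement container might look roughly like this (a sketch, not the actual class; field types are stand-ins for the real org.qortal data classes):

    import java.util.List;

    // Carries either V1 data (full per-AT states) or V2 data (just the combined AT states hash).
    class BlockTransformationSketch {
        private final Object blockData;           // BlockData
        private final List<Object> transactions;  // List<TransactionData>
        private final List<Object> atStates;      // List<ATStateData> - V1 form only
        private final byte[] atStatesHash;        // combined AT states hash - V2 form only

        // V1 form: full per-AT states are available.
        BlockTransformationSketch(Object blockData, List<Object> transactions, List<Object> atStates) {
            this(blockData, transactions, atStates, null);
        }

        // V2 form: only the aggregate AT states hash is available.
        BlockTransformationSketch(Object blockData, List<Object> transactions, byte[] atStatesHash) {
            this(blockData, transactions, null, atStatesHash);
        }

        private BlockTransformationSketch(Object blockData, List<Object> transactions, List<Object> atStates, byte[] atStatesHash) {
            this.blockData = blockData;
            this.transactions = transactions;
            this.atStates = atStates;
            this.atStatesHash = atStatesHash;
        }
    }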
Set min peer version to 3.3.203 in BlockV2Message class.
This allows for compatibility with TRANSFER_PRIVS validation in commit 8950bb7, which treats any account with a non-null reference as "existing". It also avoids possible unknown side effects from trying to process and store transactions with a null reference - something that wouldn't have been possible until the validation was removed.
This should prevent the failed transactions encountered when issuing two or more transactions in a short space of time. A feature trigger (hard fork) is used to release this, to avoid potential consensus confusion around the time of the update (older versions could consider the main chain invalid until they update).
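A feature trigger typically gates the new behaviour on a configured timestamp, along these lines (a hypothetical sketch; the real trigger lives in BlockChain.java and its name and value will differ):

    class FeatureTriggerSketch {
        // Placeholder value only - the real trigger timestamp comes from blockchain settings.
        static final long NEW_VALIDATION_TIMESTAMP = 0L;

        // Older nodes keep the old rules until this timestamp passes, so every node
        // switches over at the same moment.
        static boolean isNewValidationActive(long blockTimestamp) {
            return blockTimestamp >= NEW_VALIDATION_TIMESTAMP;
        }
    }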
Currently, new transactions take a very long time to be included in each block (or reach the intended recipient), because each node has to obtain a repository lock and import the transaction before it notifies its peers. This can take a long time due to the lock being held by the block minter or synchronizer, and this compounds with every peer that the transaction is routed through.
Validating signatures doesn't require a lock, and so can take place very soon after receipt of a new transaction. This change causes each node to broadcast a new transaction to its peers as soon as its signature is validated, rather than waiting until after the import.
When a notified peer then makes a request for the transaction data itself, this can now be loaded from the sig-valid import queue as an alternative to the repository (since they won't be in the repository until after the import, which likely won't have happened yet).
One small downside to this approach is that each unconfirmed transaction is now notified twice - once after the signature is deemed valid, and again in Controller.onNewTransaction() - but this should be an acceptable trade-off given the speed improvements it should achieve. Another downside is that it could cause invalid transactions (with valid signatures) to propagate, but these would quickly be added to each peer's invalidUnconfirmedTransactions list after the import failure, and therefore be ignored.
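A simplified sketch of the lookup when a peer requests a transaction we've announced but not yet imported (names and structure assumed):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    class TransactionLookupSketch {
        // Stand-in for the project's TransactionData class.
        static class TransactionData {}

        // Signature-valid transactions waiting for the single-threaded import.
        private final Map<String, TransactionData> sigValidImportQueue = new ConcurrentHashMap<>();

        // When a peer asks for a transaction we announced, check the import queue as
        // well as the repository, since the import likely hasn't happened yet.
        public TransactionData fetchTransaction(String signature58) {
            TransactionData queued = this.sigValidImportQueue.get(signature58);
            if (queued != null)
                return queued;

            return fetchFromRepository(signature58);
        }

        private TransactionData fetchFromRepository(String signature58) {
            return null; // repository access elided
        }
    }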
Importing has to be single-threaded since it requires the database lock, but there's nothing to stop us from validating signatures on multiple threads, as no lock is required. So it makes sense to separate these two functions to allow for possible multi-threaded signature validation in the future, to speed up the process.
Everything remains single-threaded in this commit. It should be functionally the same as before, to reduce risk.
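The eventual separation could look something like this (a sketch only; as noted above, everything stays single-threaded in this commit):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    class TransactionImportSketch {
        // Signature validation needs no repository lock, so it could run on multiple threads later.
        private final ExecutorService sigValidationPool = Executors.newFixedThreadPool(4);

        // Importing requires the database lock, so it stays on a single thread.
        private final ExecutorService importThread = Executors.newSingleThreadExecutor();

        void onNewTransaction(byte[] rawTransaction) {
            sigValidationPool.submit(() -> {
                if (!isSignatureValid(rawTransaction))
                    return;

                broadcastToPeers(rawTransaction); // notify peers as soon as the signature checks out
                importThread.submit(() -> importTransaction(rawTransaction)); // serialized import
            });
        }

        boolean isSignatureValid(byte[] rawTransaction) { return true; } // elided
        void broadcastToPeers(byte[] rawTransaction) {}                  // elided
        void importTransaction(byte[] rawTransaction) {}                 // elided
    }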
Note: it's important that this timestamp is set on a 1-hour boundary (such as 16:00:00) to ensure a clean switchover.
Also removed CrossChainDigibyteACCTv1Resource, since this is unused, and it seems excessive to maintain support of this for every coin (and potentially every ACCT version).
Direct connections for arbitrary data are currently unlikely to succeed, because those allowing incoming connections generally have their slots maxed out and have reached maxPeers. The idea here is that some connections remain reserved for dedicated arbitrary data transfers, therefore temporarily circumventing the limit (up to a defined maximum number of reserved connections).
Arbitrary data connections will auto-disconnect after 2 minutes (we might be able to reduce this at a later date), and it also probably makes sense for the requesting node to disconnect as soon as it has all the chunks that it needs (this part isn't implemented yet).
One downside of this feature is that the listen socket is now going to be accepting connections most of the time, since it is unlikely that we will regularly have 4 data peers connected. This could be improved by modifying the OP_ACCEPT behaviour based on whether we are expecting any data peers to connect. In most cases, this would allow it to remain closed. But for the sake of simplicity I will leave that optimization for a future commit.
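The accept-side check might be along these lines (setting names and values are assumptions, apart from the 4 reserved data-peer slots mentioned above):

    class ConnectionLimitSketch {
        private final int maxPeers = 32;     // normal connection limit (example value)
        private final int maxDataPeers = 4;  // reserved slots for arbitrary-data transfers

        private int connectedPeers = 0;
        private int connectedDataPeers = 0;

        // A normal incoming connection is refused once maxPeers is reached, but a
        // connection made purely to transfer arbitrary data may use one of the
        // reserved slots, temporarily exceeding maxPeers.
        synchronized boolean canAcceptConnection(boolean isDataPeer) {
            if (isDataPeer)
                return this.connectedDataPeers < this.maxDataPeers;

            return this.connectedPeers < this.maxPeers;
        }
    }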
This is used to force a quick disconnect for peers that are only connecting for the purposes of requesting data for a specific arbitrary transaction signature.
BlockMessage was broken because the repository 'connection' associated with the message's Block object was closed between message queuing and message sending.
The fix was to serialize Message subclasses on construction, thus removing the reliance on objects passed into the constructor.
The serialized byte[] is held by the message between queuing and sending.
This forces messages into one of two 'modes': outgoing or incoming.
Outgoing messages contain serialized byte[] whereas incoming messages unpack a ByteBuffer into Message subclass fields.
As a result, all network message types have been refactored in this way.
More details in Message's class comment.
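In outline, the two modes look something like this (a heavily simplified sketch of the real Message hierarchy):

    import java.nio.ByteBuffer;

    // Outgoing messages are serialized at construction time and keep only the
    // resulting bytes, so nothing repository-backed is needed later when the
    // message is finally sent; incoming messages unpack a ByteBuffer into fields.
    abstract class MessageSketch {
        private final byte[] serializedBytes; // non-null for outgoing messages only

        // Outgoing mode: serialize immediately, keep only the bytes.
        protected MessageSketch(byte[] serializedBytes) {
            this.serializedBytes = serializedBytes;
        }

        // Incoming mode: subclasses unpack the buffer into their own fields.
        protected MessageSketch(ByteBuffer incoming) {
            this.serializedBytes = null;
        }

        // Only outgoing messages can be sent; to forward an incoming message,
        // a new outgoing message must be constructed.
        byte[] toBytes() {
            if (this.serializedBytes == null)
                throw new IllegalStateException("incoming message cannot be sent; construct a new message");
            return this.serializedBytes;
        }
    }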
A knock-on effect is that incoming messages cannot then be sent out - a new message needs to be constructed.
Some changes needed to Arbitrary controller package classes in this respect.
Bonus: Network no longer needs broadcast threads because 'broadcasting' is now simply the act of queuing a message for many peers.