This rewrite may have been causing problems with connections in the network, due to peers being forgotten too easily. Reverting for now to see if it solves the problem.
This reverts commit d81071f254.
# Conflicts:
# src/main/java/org/qortal/network/Network.java
If this proves not to have any significant negative effects on re-orgs, we could consider setting these even higher, or even disabling the auto disconnect by default.
This allows users to increase their default birthday if they know that no wallets were created before a certain block, to reduce sync time. It also fixes some failing unit tests that relied on transactions between blocks 1900000 and 2000000.
Using a hardcoded signature ensures that the libraries cannot be swapped out without a core auto update, which requires the standard dev team approval process.
This replaces the previously hardcoded "numberOfAdditionalBatchesToSearch" variable, and specifies the minimum number of consecutive empty addresses required before a set of wallet transactions is considered complete. Used for foreign transaction lists and balances.
This is a simple way to discard the 5-minute online account timestamps (from out of date nodes) once the switch to 30-minute online account timestamps has taken place.
Although BlockMinter could reattach a repository session to its cache of potential blocks,
and these blocks would in turn reattach that repository session to their transactions,
further transaction-specific fields (e.g. creator PublicKeyAccount) were not being updated.
This would lead to NPEs like the following:
Exception in thread "BlockMinter" java.lang.NullPointerException
at org.qortal.repository.hsqldb.HSQLDBRepository.cachePreparedStatement(HSQLDBRepository.java:587)
at org.qortal.repository.hsqldb.HSQLDBRepository.prepareStatement(HSQLDBRepository.java:569)
at org.qortal.repository.hsqldb.HSQLDBRepository.checkedExecute(HSQLDBRepository.java:609)
at org.qortal.repository.hsqldb.HSQLDBAccountRepository.getBalance(HSQLDBAccountRepository.java:327)
at org.qortal.account.Account.getConfirmedBalance(Account.java:72)
at org.qortal.transaction.MessageTransaction.isValid(MessageTransaction.java:200)
at org.qortal.block.Block.areTransactionsValid(Block.java:1190)
at org.qortal.block.Block.isValid(Block.java:1137)
at org.qortal.controller.BlockMinter.run(BlockMinter.java:301)
where the Account has an associated repository session which is now obsolete.
This commit reverts BlockMinter back to obtaining a repository session before entering its main loop.
To prevent a single minter, or a very small number of minters, from receiving the rewards for an entire tier, share bins can now require "activation". This adds the requirement that a minimum number of accounts must be present in a share bin before it is considered active. When inactive, the rewards and minters are added to the previous tier.
Summary of new functionality:
- If a share bin has more than one but fewer than 30 accounts present, the rewards and accounts are shifted to the previous share bin.
- This process is iterative, so the accounts can shift through multiple tiers until the minimum number of accounts is met, OR the share bin's starting level is less than shareBinActivationMinLevel.
- Applies to level 7+, so that no backwards support is needed. It will only take effect once the first account reaches level 7.
This requires hot swapping the sharesByLevel data to combine tiers where needed, so is a considerable shift away from the immutable array that was in place previously.
All existing and new unit tests are now passing; however, a lot more testing will be needed.
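A minimal sketch of the iterative merging idea, using a hypothetical ShareBin type and helper methods; the 30-account threshold and shareBinActivationMinLevel come from the description above, everything else is assumed:

// Hypothetical sketch - not the actual implementation
for (int binIndex = shareBins.size() - 1; binIndex > 0; binIndex--) {
    ShareBin bin = shareBins.get(binIndex);

    // Bins starting below shareBinActivationMinLevel are never deactivated
    if (bin.getStartingLevel() < shareBinActivationMinLevel)
        break;

    int accountCount = bin.getAccounts().size();
    if (accountCount > 1 && accountCount < 30) {
        // Inactive bin: shift rewards and accounts to the previous bin.
        // Iterating from the highest bin downwards means the previous bin is
        // re-evaluated later, so accounts can cascade through multiple tiers.
        ShareBin previousBin = shareBins.get(binIndex - 1);
        previousBin.addRewardShare(bin.getRewardShare());
        previousBin.addAccounts(bin.getAccounts());
        bin.clear();
    }
}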
Online account credit is a more useful definition of "minting" than block signing, from the user's perspective. Should bring UI minting/syncing status in line with the core's systray status.
- Reduce concurrent reward share limit from 6 to 3 (or from 5 to 2 when including self share) - as per community vote.
- Founders remain at 6 (5 when including self share) - also decided in community vote.
- When all slots are being filled, require that at least one is a self share, so that not all can be used for sponsorship.
- Activates at future undecided timestamp.
We already mark peers as misbehaved if they return invalid signatures, but this isn't sufficient when multiple copies of the same invalid block exist on the network (e.g. after a hard fork). In these cases, we need to be more proactive to avoid syncing with these peers, to increase the chances of preserving other candidate blocks.
Previously, a peer would be continuously considered not 'old' if it had a connection attempt in the past day. This prevented some peers from being removed, causing nodes to hold a large repository of peers. On slower systems, this large number of known peers resulted in low numbers of outbound connections being made, presumably because of the time taken to iterate through the dataset, using up a lot of allKnownPeers lock time.
On devices that experienced the problem, it could be solved by deleting all known peers. This adds confidence that the old peers were the problem.
- Show "Minting" as long as online accounts are submitted to the network (previously it related to block signing).
- Fixed bug causing it to regularly show "Synchronizing 100%".
- Only show "Synchronizing" if the chain falls more than 2 hours behind - anything less is unnecessary noise.
Symptoms are:
* AutoUpdate trying to run new ApplyUpdate process, but nothing appears in log-apply-update.?.txt
* Main qortal.jar process continues to run without updating
* Last AutoUpdate line in log.txt.? is:
2022-06-18 15:42:46 INFO AutoUpdate:258 - Applying update with: /usr/local/openjdk11/bin/java -Djava.net.preferIPv4Stack=false -Xss256k -Xmx1024m -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=127.0.0.1:5005 -cp new-qortal.jar org.qortal.ApplyUpdate
Changes are:
* child process now inherits parent's stdout / stderr (was piped from parent)
* child process is given a fresh stdin, which is immediately closed
* AutoUpdate now converts -agentlib JVM arg to -DQORTAL_agentlib
* ApplyUpdate converts -DQORTAL_agentlib to -agentlib
The latter two changes are to prevent a conflict where two processes try to reuse the same JVM debugging port number.
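A rough sketch of the stdio and -agentlib handling described above (the actual AutoUpdate code will differ; command construction is elided):

// Convert any -agentlib JVM arg so the child doesn't fight over the same debug port;
// ApplyUpdate performs the reverse mapping before relaunching the core.
List<String> javaArgs = new ArrayList<>();
for (String arg : ManagementFactory.getRuntimeMXBean().getInputArguments()) {
    if (arg.startsWith("-agentlib:"))
        arg = "-DQORTAL_agentlib=" + arg.substring("-agentlib:".length());
    javaArgs.add(arg);
}

ProcessBuilder processBuilder = new ProcessBuilder(command); // java binary + javaArgs + "-cp" + "new-qortal.jar" + "org.qortal.ApplyUpdate"
processBuilder.redirectOutput(ProcessBuilder.Redirect.INHERIT); // child inherits parent's stdout
processBuilder.redirectError(ProcessBuilder.Redirect.INHERIT);  // ...and stderr (previously piped)
Process process = processBuilder.start();
process.getOutputStream().close(); // child gets a fresh (piped) stdin, which is closed immediately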
Reduced AT state info from per-AT address + state hash + fees to AT count + total AT fees + hash of all AT states.
Modified Block and Controller to support above. Controller needs more work regarding CachedBlockMessages.
Note that blocks fetched from archive are in old V1 format.
Changed Triple<BlockData, List<TransactionData>, List<ATStateData>> to BlockTransformation to support both V1 and V2 forms.
Set min peer version to 3.3.203 in BlockV2Message class.
Bump v3 min peer version from 3.2.203 to 3.3.203
No need for toOnlineAccountTimestamp(long) as we only ever use getCurrentOnlineAccountTimestamp().
The latter now returns Long and calls NTP.getTime() on behalf of the caller, removing duplicated NTP.getTime() calls and null checks from multiple callers.
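A sketch of what the consolidated helper might look like; the rounding down to an online-accounts period boundary (and the constant name) is an assumption here:

public static Long getCurrentOnlineAccountTimestamp() {
    Long now = NTP.getTime();
    if (now == null)
        return null; // callers no longer need their own NTP.getTime() null checks

    // Assumed: round down to the start of the current online-accounts period
    return now - (now % ONLINE_TIMESTAMP_MODULUS);
}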
Add aggregate-signature feature-trigger timestamp threshold checks where needed, near sign() and verify() calls.
Improve logging - but some logging will need to be removed / reduced before merging.
Aggregated signature should reduce block payload significantly,
as well as associated network, memory & CPU loads.
org.qortal.crypto.BouncyCastle25519 renamed to Qortal25519Extras.
Our class provides additional features such as a DH-based shared secret,
aggregating public keys & signatures, and sign/verify for aggregate use.
BouncyCastle's Ed25519 class copied in as BouncyCastleEd25519,
but with 'private' modifiers changed to 'protected',
to allow extension by our Qortal25519Extras class,
and to avoid lots of messy reflection-based calls.
Slight optimization to BlockMinter by adding OnlineAccountsManager.hasOnlineAccounts():boolean instead of returning actual data, only to call isEmpty()!
Move online account cache code from Block into OnlineAccountsManager, simplifying Block code and removing duplicated caches from Block also.
This tidies up those remaining set-based getters in OnlineAccountsManager.
No need for currentOnlineAccountsHashes's inner Map to be sorted, so addAccounts() creates a new ConcurrentHashMap instead of a ConcurrentSkipListMap.
Changed GetOnlineAccountsV3Message to use a single byte for the count of hashes, as it can only be 1 to 256.
256 is represented by 0.
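For illustration, the 1-256 count fits into one byte like this (a sketch, not the exact serialization code):

// Encoding: (byte) 256 wraps to 0, so 1..256 maps onto a single byte
byte encodedCount = (byte) hashCount;

// Decoding: 0 means 256, otherwise treat the byte as unsigned
int decodedCount = (encodedCount == 0) ? 256 : (encodedCount & 0xFF);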
Comments tidy-up.
Change v3 broadcast interval from 10s to 15s.
Adding support for GET_ONLINE_ACCOUNTS_V3 to Controller, which calls OnlineAccountsManager.
With OnlineAccountsV3, instead of nodes sending their list of known online accounts (public keys),
nodes now send a summary which contains hashes of known online accounts, one per timestamp + leading-byte combo.
Thus outgoing messages are much smaller and scale better with more users.
Remote peers compare the hashes and send back lists of online accounts (for that timestamp + leading-byte combo) where hashes do not match.
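One plausible way to build the per leading-byte hashes for a given timestamp is sketched below; the real hashing scheme isn't specified in this log, so the order-independent XOR-folding is an assumption, as are the type and method names:

Map<Byte, byte[]> computeHashesForTimestamp(Set<OnlineAccountData> onlineAccounts) {
    Map<Byte, byte[]> hashesByLeadingByte = new HashMap<>();

    for (OnlineAccountData onlineAccount : onlineAccounts) {
        byte[] publicKey = onlineAccount.getPublicKey();
        byte leadingByte = publicKey[0];

        // Fold each public key into its leading-byte bucket
        hashesByLeadingByte.merge(leadingByte, publicKey.clone(), (hash, key) -> {
            for (int i = 0; i < hash.length; i++)
                hash[i] ^= key[i];
            return hash;
        });
    }

    return hashesByLeadingByte;
}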
Massive rewrite of OnlineAccountsManager to maintain online accounts.
Now there are three caches:
1. all online accounts, but split into sets by timestamp
2. 'hashes' of all online accounts, one hash per timestamp+leading-byte combination
Mainly for efficient use by GetOnlineAccountsV3 message constructor.
3. online accounts for the highest blocks on our chain to speed up block processing
Note that highest blocks might be way older than 'current' blocks if we're somewhat behind in syncing.
Other OnlineAccountsManager changes:
* Use scheduling executor service to manage subtasks
* Switch from 'synchronized' to 'concurrent' collections
* Generally switch from Lists to Sets - requires improved OnlineAccountData.hashCode() - further work needed
* Only send V3 messages to peers with version >= 3.2.203 (for testing)
* More info on which online accounts lists are returned depending on use-cases
To test, change your peer's version (in pom.xml?) to v3.2.203.
A full sync is unavoidable for P2SH redeem/refund, so we need to be able to save our progress. Creating a new null seed wallet each time isn't an option because it relies on having a recent checkpoint to avoid having to sync large amounts of blocks every time (sync is per wallet, not per node).
This allows for compatibility with TRANSFER_PRIVS validation in commit 8950bb7, which treats any account with a non-null reference as "existing". It also avoids possible unknown side effects from trying to process and store transactions with a null reference - something that wouldn't have been possible until the validation was removed.
This should prevent the failed transactions that are encountered when issuing two or more in a short space of time. Using a feature trigger (hard fork) to release this, to avoid potential consensus confusion around the time of the update (older versions could consider the main chain invalid until updating).
This will hopefully reduce the number of failed trade offer listings that result in a nonfunctional tradebot (and the subsequent PENDING status shown in the UI).
This is needed to allow redeem/refund of P2SH without having an actively synced and initialized wallet. It also ultimately avoids us having to retain the wallet entropy in the trade bot states. Various safety checks have been introduced to make sure that a disposable wallet is never used for anything other than P2SH redeem/refund.
This has been modified to a) use full public keys instead of PKH, and b) hand off all transaction building, signing, and broadcasting to the (heavily customized) Pirate light wallet library.
Currently, new transactions take a very long time to be included in each block (or reach the intended recipient), because each node has to obtain a repository lock and import the transaction before it notifies its peers. This can take a long time due to the lock being held by the block minter or synchronizer, and this compounds with every peer that the transaction is routed through.
Validating signatures doesn't require a lock, and so can take place very soon after receipt of a new transaction. This change causes each node to broadcast a new transaction to its peers as soon as its signature is validated, rather than waiting until after the import.
When a notified peer then makes a request for the transaction data itself, this can now be loaded from the sig-valid import queue as an alternative to the repository (since the transaction won't be in the repository until after the import, which likely won't have happened yet).
One small downside to this approach is that each unconfirmed transaction is now notified twice - once after the signature is deemed valid, and again in Controller.onNewTransaction() - but this should be an acceptable trade-off given the speed improvements it should achieve. Another downside is that it could cause invalid transactions (with valid signatures) to propagate, but these would quickly be added to each peer's invalidUnconfirmedTransactions list after the import failure, and therefore be ignored.
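The resulting flow, roughly (helper and field names here are placeholders, not the real method names):

void onIncomingTransaction(TransactionData transactionData) {
    // 1. Signature validation needs no blockchain lock, so do it immediately
    if (!isSignatureValid(transactionData))
        return;

    // 2. Notify peers now, rather than after the (slow, lock-bound) import
    broadcastTransactionSignature(transactionData.getSignature());

    // 3. Queue for the single-threaded importer, which acquires the blockchain lock.
    //    Peers requesting the data before the import completes are served from this queue.
    sigValidTransactionQueue.add(transactionData);
}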
Importing has to be single-threaded since it requires the database lock, but there's nothing to stop us from validating signatures on multiple threads, as no lock is required. So it makes sense to separate these two functions, allowing for possible multi-threaded signature validation in the future to speed up the process.
Everything remains single-threaded in this commit. It should be functionally the same as before, to reduce risk.
Note: this relies on (a modified version of) liblitewallet-jni which is not included, but will ultimately be compiled for each supported architecture and hosted on QDN.
LiteWalletJni code is based on https://github.com/PirateNetwork/cordova-plugin-litewallet - thanks to @CryptoForge for the help in getting this up and running.
Note: it's important that this timestamp is set on a 1-hour boundary (such as 16:00:00) to ensure a clean switchover.
# Conflicts:
# src/main/java/org/qortal/block/BlockChain.java
Also removed CrossChainDigibyteACCTv1Resource, since this is unused, and it seems excessive to maintain support of this for every coin (and potentially every ACCT version).
Direct connections for arbitrary data are currently unlikely to succeed, because those allowing incoming connections generally have their slots maxed out and have reached maxPeers. The idea here is that some connections remain reserved for dedicated arbitrary data transfers, therefore temporarily circumventing the limit (up to a defined maximum number of reserved connections).
Arbitrary data connections will auto disconnect after 2 minutes (we might be able to reduce this at a later date), and it also probably makes sense for the requesting node to disconnect as soon as it has all the chunks that it needs (this part isn't implemented yet).
One downside of this feature is that the listen socket is now going to be accepting connections most of the time, since it is unlikely that we will regularly have 4 data peers connected. This could be improved by modifying the OP_ACCEPT behaviour based on whether we are expecting any data peers to connect. In most cases, this would allow it to remain closed. But for the sake of simplicity I will leave that optimization for a future commit.
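The acceptance rule is roughly the following (a sketch; field names such as maxDataPeers are assumptions):

boolean canAcceptIncomingConnection(boolean isDataPeer) {
    if (connectedPeers.size() < maxPeers)
        return true; // normal slots still available

    // At capacity: only allow a limited number of reserved data-transfer connections,
    // which auto disconnect after 2 minutes
    return isDataPeer && connectedDataPeers.size() < maxDataPeers;
}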
This is used to force a quick disconnect for peers that are only connecting for the purposes of requesting data for a specific arbitrary transaction signature.
BlockMessage was broken because the repository 'connection' associated with the message's Block object was closed between message queuing and message sending.
The fix was to serialize Message subclasses on construction, thus removing the reliance on objects passed into the constructor.
The serialized byte[] is held by the message between queuing and sending.
This forces messages into one of two 'modes': outgoing or incoming.
Outgoing messages contain serialized byte[] whereas incoming messages unpack a ByteBuffer into Message subclass fields.
As a result, all network message types have been refactored in this way.
More details in Message's class comment.
A knock-on effect is that incoming messages cannot then be sent out - a new message needs to be constructed.
Some changes needed to Arbitrary controller package classes in this respect.
Bonus: Network no longer needs broadcast threads because 'broadcasting' is now simply the act of queuing a message for many peers.
Instead of synchronizing/blocking in Peer.sendMessage(),
we queue messages in a concurrent blocking TransferQueue, with timeout.
In EPC, ChannelWriteTasks consume from TransferQueue, unblocking callers to Peer.sendMessage().
If a new message is to be sent, or socket output buffer is full,
then OP_WRITE is used to wait for socket to become writable again.
Only one ChannelWriteTask per peer can be active/pending at a time.
Each ChannelWriteTask tries to send as much as it can in one go.
Other minor tidy-ups.
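A simplified sketch of the queueing side (the real code will differ; the timeout value is illustrative):

private final TransferQueue<Message> sendQueue = new LinkedTransferQueue<>();

public boolean sendMessage(Message message) {
    try {
        // Blocks only until a ChannelWriteTask consumes the message, or the timeout expires
        return this.sendQueue.tryTransfer(message, 30, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
    }
}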
As per work done by szisti in PR#45:
Extracted network 'Tasks' to their own classes.
Network.NetworkProcessor reduced to only producing Tasks.
Improved usage of SocketChannel interest-ops.
Eventually this might lead to reducing task-producing synchronization lock into more granular locks.
Work still needed to convert sending messages to a queue and to make use of OP_WRITE instead of sleeping to wait for socket buffer to empty.
Disabled the PeerConnectTask producer's check against connected peers via DNS, as it's too slow.
Swapped Peer's replyQueues from SynchronizedMap(wrapped HashMap) to ConcurrentHashMap.
Other minor changes within networking.
As per work done by szisti in PR#45:
Extracted MessageException from inside Message into its own class.
Extracted MessageType from inside Message into its own class.
Converted reflection-based Message.fromByteBuffer method call to non-reflection, functional interface, method-reference.
This should give a minor performance improvement, along with stronger method signature and type enforcement, as well as better IDE integration.
Message.fromByteBuffer method 'contract' tightened up to:
1. throw BufferUnderflowException if there are not enough bytes to deserialize message
2. throw MessageException if bytes contain invalid data
3. should not return null
Message.toData method 'contract' tightened up to:
1. return null if the message has no payload to serialize
2. throw IOException directly - no need to try-catch in each subclass
Several Message-subclass fields now marked 'final' as per IDE suggestion.
Several Message-subclass fromByteBuffer() method signatures have changed 'throws' list.
Several bytes.remaining() != some-value changed to bytes.remaining() < some-value as per new contract.
Some bytes.remaining() checks removed for fixed-length messages because we can rely on ByteBuffer throwing BufferUnderflowException.
Some bytes.remaining() checks retained for variable-length messages, or messages that read a large amount of data, to prevent wasted memory allocations.
Other minor tidying up
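As an illustration of the tightened contract, a hypothetical fixed-length message might deserialize like this (ExampleMessage is a stand-in, not a real message type):

public static ExampleMessage fromByteBuffer(int id, ByteBuffer bytes) throws MessageException {
    // Rule 1: no explicit remaining() check needed - getLong()/getInt()
    // throw BufferUnderflowException if there aren't enough bytes
    long timestamp = bytes.getLong();
    int value = bytes.getInt();

    // Rule 2: invalid data -> MessageException
    if (timestamp <= 0)
        throw new MessageException("invalid timestamp");

    // Rule 3: never return null
    return new ExampleMessage(id, timestamp, value);
}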
Temporarily increase sleep from 1ms to 100ms when waiting for outgoing socket buffer to empty.
Real fix is to rewrite using an outgoing message queue and OP_WRITE interest op.
De-register a peer's socket channel OP_READ interest op when producing a ChannelTask for that peer.
This should prevent duplicate ChannelTasks for the same peer.
Re-register OP_READ once node has read from peer's channel.
When node has reached max connections, Network will ignore pending incoming connections by:
1. not calling accept()
2. de-registering OP_ACCEPT 'interest op' on the listen socket's channel
When a peer disconnects, Network might re-register OP_ACCEPT interest op on listen socket.
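In NIO terms, the listen socket handling is along these lines (a sketch; surrounding field names assumed):

SelectionKey listenKey = serverSocketChannel.keyFor(selector);

if (getConnectedPeerCount() >= maxPeers) {
    // Full: stop being woken up for pending incoming connections (and don't call accept())
    listenKey.interestOps(listenKey.interestOps() & ~SelectionKey.OP_ACCEPT);
} else {
    // A peer disconnected and freed a slot: listen for incoming connections again
    listenKey.interestOps(listenKey.interestOps() | SelectionKey.OP_ACCEPT);
}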
Slight reworking of EPC to simplify when producer can block
and generally make some of the conditional code more readable.
Improved logging with task class names and logging level editable during runtime!
Use /peer/enginestats?newLoggingLevel=DEBUG (or TRACE or back to INFO) to change.
Note that it is currently not easy to distinguish between MESSAGE-type and PAYMENT-type AT transactions, so PAYMENT-type is currently the only one supported (and used). A hard fork will likely be needed in order to specify the type within each message.
This is a more standardized alternative to using GET /transactions/search?address=xyz. This avoids the need to build full transaction search ability into the lite node protocols right away.
This should bring in enough data for very basic chat and wallet functionality (using addresses rather than registered names).
Data currently comes from a single random peer, however this can be expanded to request from multiple peers to gain confidence in the accuracy of the data. If bad data is returned from a peer, it's not the end of the world since the transaction would just be considered invalid by full nodes and would be thrown out. But this should be mostly avoidable by taking data from multiple sources to improve confidence in its accuracy.
Lite nodes can't sync or mint blocks, and they also have a very limited ability to verify unconfirmed transactions due to a lack of contextual information (i.e. the blockchain). For now, most validation is skipped and they simply act as relays to help get transactions around the network. Full and topOnly nodes will disregard any invalid transactions upon receipt as usual, and since the lite nodes aren't signing any blocks, there is little risk to the reduced validation, other than the experience of the lite node itself. This can be tightened up considerably as the lite nodes become more powerful, but the current approach works as a PoC.
This is currently for name registration transactions only, but can be adapted (or duplicated) for other transaction types when needed.
Note: this switches from a greater-than (>) to a greater-than-or-equal (>=) timestamp comparison, as it makes more sense this way. It shouldn't affect the previous transition since there were no REGISTER_NAME transactions at that exact timestamp.
Adapted from code originally written by catbref from before genesis, and essentially prevents syncing backwards. This needs significant testing on testnet.
It is quite likely that existing resources with both metadata and an empty chunks array will need to be republished, because this bug may have led to incorrect file deletions.
The command used was:
./protoc --plugin=protoc-gen-grpc-java=/Users/user/Downloads/protoc-gen-grpc-java-1.45.1-osx-x86_64.exe -I=src/main/resources/proto/zcash/ --java_out=src/main/java/ --grpc-java_out=src/main/java/ src/main/resources/proto/zcash/service.proto
Then repeat, replacing service.proto with compact_formats.proto and darkside.proto
Darkside isn't needed for mainnet functionality, but included for completeness, and might be useful for testing.
This feature is disabled by default so can be tidied up later. For now, the unhandled scenario is logged and the checking continues on.
One name's transactions (MangoSalsa) are too complex for the current integrity check code to verify, but they have been verified manually. All other names pass the automated test.
If an account is renamed and then at some point renamed back to one of its original names, it confuses the names rebuilding code. The current solution is to track the linked names that have already been rebuilt, and then break out of the loop once a name is encountered a second time.
This is the likely cause of inconsistent name entries across different nodes, as we can't guarantee that every environment will return the same transaction order from the SQL queries.
Some users are seeing 500 errors deriving from this code. This should hopefully allow more info to be obtained, as well as causing it to omit the status for resources that encounter problems.
Peers without a recent block are removed at the start of the sync process; however, due to the time lag involved in fetching block summaries and comparing the list of peers, some of these could subsequently drop back to a non-recent block and still be chosen as the next peer to sync with. The end result is that nodes could unnecessarily orphan as many as 20 blocks due to syncing with a peer that doesn't have a recent block (but has a couple of high-weight blocks after the common block).
This commit adds some additional filtering to avoid this situation.
1) Peers without a recent block are removed as candidates in comparePeers(), allowing for alternate peers to be chosen.
2) After comparePeers() completes, the list is filtered a second time to make sure that all are still recent.
3) Finally, the peer's state is checked one last time in syncToPeerChain(), just before any orphaning takes place.
Whilst just one of the above would probably have been sufficient, the consequences of this bug are so severe that it makes sense to be very thorough.
The only exception to the above is when the node is in "recovery mode", in which case peers without recent blocks are allowed to be included. Items 1 and 3 above do not apply in recovery mode. Item 2 does apply, since the entire comparePeers() functionality is already skipped in a recovery situation due to our chain being out of date.
Fix UPDATE_NAME not processing empty 'newName' transactions correctly.
Fix some emoji code-points not being processed correctly.
Updated tests.
Now includes ICU4J v70.1 - WARNING: this could add around 10MB to the JAR size!
Bumped homoglyph to v1.2.1.
This should hopefully reduce confusion due to APIs reporting 99% synced even though up to date. The systray should never show this since it already treats blocks in the last 30 mins as synced.
This could very slightly reduce load due to skipping the internal filtering inside log4j. Given that this method is causing major problems with CPU at times, I'm trying to make it as optimized as possible.
This can ultimately be used to notify the serving peer to expect a direct connection from the requesting peer (to allow it to temporarily bypass maxConnections for long enough for the files to be retrieved). Or it could even possibly be used to trigger a reverse connection (from the serving peer to the requesting peer).
- Slow down loops that query the db
- Check for new metadata every 5 minutes instead of constantly
- Check for new data every 1 minute instead of constantly
This could be further improved in the future by having block.process() notify the ArbitraryDataManager that there is new data to process. This would avoid the need for the frequent checks/loops, and only a single complete sweep would be needed on node startup (as long as failures are then retried). But I will avoid this additional complexity for now.
Load sorted list of reward share public keys into memory, so that the indexes can be obtained. This is around 100x faster than querying each index separately (and the savings will increase as more keys are added).
For 4150 reward share keys, it was taking around 5000ms to query individually, vs 50ms using this approach.
The main trade off is that these 4150 keys require around 130kB of additional memory when minting (and this will increase proportionally with more minters). However, this one query was often accounting for 50% of the entire core's CPU usage, so the additional memory usage seems insignificant by comparison.
To gain confidence, I ran both old and new approaches side by side, and confirmed that the indexes matched exactly.
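The lookup approach is roughly this (a sketch; the helper name and surrounding types are assumptions):

// One query per minting round: load the sorted reward-share public keys
List<byte[]> sortedPublicKeys = loadSortedRewardSharePublicKeys(repository);

// Index them in memory (ByteBuffer gives content-based equals/hashCode)
Map<ByteBuffer, Integer> indexByPublicKey = new HashMap<>(sortedPublicKeys.size());
for (int i = 0; i < sortedPublicKeys.size(); i++)
    indexByPublicKey.put(ByteBuffer.wrap(sortedPublicKeys.get(i)), i);

// Per online account: O(1) in-memory lookup instead of one SQL query each
Integer rewardShareIndex = indexByPublicKey.get(ByteBuffer.wrap(onlineAccountPublicKey));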
There are no logic changes here other than moving performOnlineAccountsTasks() onto its own thread, so that it's not subject to anything that might be slowing down the main controller thread.
- Removed synchronization from connectedPeers, and replaced it with an unmodifiableList.
- Added additional immutable caches: handshakedPeers and outboundHandshakedPeers
This should greatly reduce the amount of time spent waiting around for access to the connectedPeers array, since it is now immediately accessible without needing to obtain a lock. It also removes calls to stream() which were consuming large amounts of CPU to constantly filter the connected peers down to a list of handshaked peers.
Thanks to @catbref for these great suggestions.
- Signature validation is now able to run concurrently with synchronization, to reduce the chances of the queue building up, and to speed up the propagation of new transactions. There's no need to break out of the loop - or avoid looping in the first place - since signatures can be validated without holding the blockchain lock.
- A blockchain lock isn't even attempted if a sync request is pending.
Main changes are:
* Check transaction signature validity in initial round, without blockchain lock
* Convert the List of incoming transactions to a Map so we can record whether we have validated a transaction's signature before, to save rechecking effort
* Add invalid signature transactions to invalidUnconfirmedTransactions map with INVALID_TRANSACTION_RECHECK_INTERVAL expiry (~60min)
* Other minor changes related to List->Map change and Java object synchronization
This is very inefficient and will soon be replaced with dedicated ArbitraryResources / ArbitraryMetadata tables. But this is acceptable in the short term, especially if limit and offset are used.
- Rate limiter is disabled when using the API
- fetchArbitraryMetadata() returns the actual metadata content rather than a boolean
- Exceptions are thrown on certain errors, rather than returning null
This involved a slight rewrite to remove the "includeMetadataOnly" boolean. Metadata is now always excluded, otherwise it complicates the caching too much.
# Conflicts:
# src/main/java/org/qortal/api/resource/ArbitraryResource.java
# src/main/java/org/qortal/controller/arbitrary/ArbitraryDataStorageManager.java
Peers that were thought to be missing output address data may actually have just been using a different key - "address" instead of "addresses". Now reading the addresses from both keys, which may remove the need for the previously added checks.
For future reference, the command used was:
mvn install:install-file -Dfile=/Users/user/Downloads/waifupnp-1.1/WaifUPnP.jar -DgroupId=com.dosse -DartifactId=WaifUPnP -Dversion=1.1 -Dpackaging=jar -DlocalRepositoryPath=lib
This is the equivalent of increasing the max address gap from 15 to 21. The electrum standalone wallet uses 20, so this should be the most we will ever need.
/render APIs use priority 10, whereas /arbitrary use priority 0, to prevent thumbnail downloads from holding up website loading. The priorities can be adjusted later, with maybe some service types being given higher priority than others.
This should fix an issue where network threads could be blocked when new transactions arrived, due to waiting for the incomingTransactions lock to free up.
An alternate option would be to avoid force disconnecting while relays are in progress, but some nodes could have active relays 100% of the time and therefore would never recycle their peers. So it is simpler to just increase the average peer connection time for everyone.
Previously, only one peer's response for a hash would be remembered, even if multiple others reported back too. This would cause useful mapping to be lost.
This is likely a short term solution (to allow existing code to be repurposed) until replaced with a task-based approach, as this will allow for a much greater number of threads.
async = fail immediately with 404 if missing, and request in the background
attempts = the number of times to request the data (synchronous mode only for now)
This allows TIMESTAMP_TOO_OLD transactions to be tracked for a shorter time (10 minutes) than the other invalid transactions (60 minutes). This should reduce network traffic and db load around the time that transactions are expiring, as there is a lag before they are noticed and removed from each node. Due to the variance, other peers could request them again after they have been deleted, so they are now ignored for 10 minutes to avoid request spam.
An incoming invalid unconfirmed transaction will be added to this map if its timestamp is more than 30 minutes old. This should allow enough time and opportunities for it to be imported and included in a block (allowing for re-orgs which could switch its status from invalid to valid).
Once added, it will be removed after an hour to allow for another chance to be requested from any peers that still have it. If invalid again, it's added back to the map for another hour.
This fixes a 24 hour long loop, where invalid transactions are requested over and over from peers that have already imported them. It could be improved further by periodically removing invalid unconfirmed transactions from the database, but this will be a higher risk.
The results of this feature should be less network traffic, and less blockchain locks (which should ultimately increase the responsiveness of the synchronizer).
This ensures that only a single round of requests (per coin) is used for the wallettransactions and balance APIs. It also speeds up loading on subsequent requests. The 2 minute cache isn't much longer than the foreign block times, so shouldn't cause values to be too out of date.
Hopeful fix for incorrect balances in wallets with large numbers of transactions. At the very least, this gives us control of the code that calculates the balance.
Previously we would only try the first response and then discard the others due to being duplicates. They are now added to a queue and retried by the dedicated thread (up to the 60 second timeout).
This should fix conflicts caused by the synchronizer and controller now being on separate threads. It may also reduce the chances of the database corrupting on shutdown, but this remains to be seen.
This solves a problem where incoming transactions could rarely obtain a blockchain lock (due to multiple transactions arriving at once) and therefore most messages were thrown away. It was also causing constant blockchain locks to be acquired, which would often prevent the synchronizer from running.
Additional params:
- timestamp: to allow for hard forks. Default: the current time
- level: the account level, to allow for the future possibility of different fees per level. Not currently used.
This is to hopefully improve network stability whilst a more advanced solution is being worked on. It also allows us to collect some data on how well the network behaves when there are fewer block candidates. It should have no effect on minting rewards (other than any side effects as a result of improved network stability).
This allows the GET /arbitrary/{service}/{name} and GET /{service}/{name}/{identifier} endpoints to operate without any authentication. Useful for those who are running public QDN nodes and need to serve data over http(s).
Increased GetOnlineAccountsMessage.MAX_ACCOUNT_COUNT from 1000 to 5000.
The V2 versions are more efficiently encoded and also cache the payload bytes
which reduces CPU when sending to multiple peers.
Serialization / deserialization unit tests included.
Tentative V2 message activation set at core version 3.1.2
see Controller.ONLINE_ACCOUNTS_V2_PEER_VERSION
Note that this is not always accurate - it relates to the largest transaction size for this name, not necessarily the latest or the combined size of multiple transactions. This can be made accurate as soon as we have a "Resources" table to store this info. Trying to do it before then will be too inefficient in terms of queries.
A longer term solution to this hypothetical problem is to store relays in RAM or a temp folder only. Or maybe add an indicator file to instruct the cleanup manager to delete it. But this will require more development. 10 deletion attempts (each 1 second apart) should be enough for now.
This encourages shorter relays, since longer ones will take more time to respond, and also prevents a peer from intentionally taking a long time to respond so that it overwrites an existing entry.
Longer term we could consider keeping track of all respondents for each hash, if there are still issues with data retrieval. I suspect this won't be needed though, as the requesting peer has 16+ different peers connected, and therefore potentially 16 different mappings already.
This brings the behaviour closer to the old version so should hopefully reduce the amount of newly introduced issues. If an API key is unavailable, it will fall back to using `kill -15 $pid` (i.e. a SIGTERM).
Should fix issue with v4 transactions where these aren't used. Matches with the NOT NULL DEFAULT 0 which automatically transitions existing v4 ARBITRARY transactions to use the same defaults.
This allows node operators to return their authentication to the legacy rules (local requests allowed), without introducing javascript vulnerabilities. The websites, apps, etc are just prevented from loading, to avoid the risk of any API calls from javascript.
The modifications made to these methods were causing issues with other transaction types that were expecting blank strings instead of null. To keep risk to a minimum, I have split into two different sets of functions until there is more time to unify them.
1) Each relay request expires after 5 seconds, after which nodes will stop relaying it, preventing any kind of infinite loop. So it has to reach the destination peer within 5 seconds. This should be fine, because the original peer's request would timeout anyway, so there's nothing to be gained by continuing to relay it.
2) Each relay request stops being forwarded after 3 "hops" - i.e. once it has been relayed through 3 different peers, it will no longer be transmitted any further. If we assume that each node has 16 connections, that allows it to reach a theoretical maximum of 4096 peers in 3 hops. In practice it will be less, and may not reach everyone due to peer "islands". But it will automatically retry a few times on a timer, so should hopefully find what it needs eventually. Plus, it still has the ability to make a direct connection to anyone hosting the data, as long as they are port forwarded.
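The two rules combine into a simple relay check (a sketch; constants are taken from the text above, names are assumed):

private static final long RELAY_REQUEST_MAX_AGE = 5 * 1000L; // 5 seconds
private static final int RELAY_REQUEST_MAX_HOPS = 3;

boolean shouldRelayRequest(long requestTimestamp, int hopCount) {
    Long now = NTP.getTime();
    if (now == null || now - requestTimestamp > RELAY_REQUEST_MAX_AGE)
        return false; // expired - the original request would have timed out anyway

    return hopCount < RELAY_REQUEST_MAX_HOPS; // stop forwarding after 3 hops
}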
This is likely longer than needed, but it's best to allow extra for now and then optimize the timeouts once we've had some experience with real world data.
This involves modifying the log4j2.properties file on node startup to fix an incompatibility with ${dirname:-}. Thanks to AlphaX Projects for tracking down this incompatibility.
This allows an entire registered name to be preauthorized, therefore allowing for instance a website to automatically request other resources from the same author, such as videos.
This delegates the task to the browser rather than doing it in java. It should also catch a few remaining types of links that we had missed - e.g. ones that originate from within js files.
This avoids duplicate entries from the same host/ip with differing ports. This can occur due to some requests using ephemeral port numbers. Ideally we would filter these out altogether, but this at least acts as a safety net to prevent a very cluttered db and associated "broadcast storm". The main tradeoff here is that multiple nodes on the same IP address will be recorded as a single entry. This doesn't seem like it will be a major limitation, because one of them will remain available.
Since some files won't have any mirrors, this prevents the cleanup manager from deleting the only copy in existence when freeing up space. This feature can be disabled by setting "originalCopyIndicatorFileEnabled": false in settings.json (or by deleting the ".original" files). The trade off is that the only copy in existence could be deleted if space gets low.
This will also allow for better reporting of own vs third party files in the local UI (not yet implemented).
This allows for consistent messaging about each status to be shown in different parts of the system. Previously these strings were hardcoded in the loading screen html so were inaccessible elsewhere.
Logs can be reinstated by adding these lines to log4j2.properties:
logger.arbitrary.name = org.qortal.arbitrary
logger.arbitrary.level = debug
logger.arbitrarycontroller.name = org.qortal.controller.arbitrary
logger.arbitrarycontroller.level = debug
This is the start of a refactor to use ArbitraryDataResource objects rather than passing around separate resourceId, service and identifier, or other duplicate objects. Most of this will need to be done after the initial release due to time constraints.
The simplest solution was to only include a newline at the end of the patch file if the source file ended with a newline. This is used to inform the merge code as to whether to add the newline to the end of the resulting file. Without this, the checksums do not match (and therefore previously the complete file would have been included as a result).
Several parts of the code request resources to be loaded/built, and these separate threads were tripping over each other and causing build failures. This has been avoided by making sure the resource isn't already building before requesting it.
Requesting a resource with the identifier "default" now maps to a blank string. This allows the /arbitrary/{service}/{name}/{identifier} endpoints to be used for default resources too, as they previously didn't support a blank string as the third parameter.
The main differences are:
- Compute nonce instead of specifying a transaction fee
- Add blank/empty values for all the additional fields, as they are unused for auto updates
Originally I had hoped to do this by using an intermediate iframe, to keep the message handlers separate from the user content, but this created CORS issues of its own. This approach is far simpler.
Now that we require API key authentication - and therefore security is greatly improved - many users will want to bypass the whitelist in order for the UI to communicate with their remote node. This gives an easy way to do this, without having to override the default whitelist. This boolean can now optionally be added to the default settings.json that is published with new releases, without removing the code's ability to update default whitelist values.
This should fix issue where it would take up to 30 seconds to return for a recent block, and would consume masses of CPU due to having to base58 encode the online accounts signatures. Base58 is very slow and made this API endpoint almost unusable for recent blocks, due to them having untrimmed online accounts signatures.
The procedure outlined in commit f4b06fb is now incorrect. Updated procedure:
- A node can opt into relay mode via the "relayModeEnabled":true setting.
- From this time onwards, they will ask their peers if they ever receive a file list request that they cannot serve by themselves.
- Whenever a peer responds with a file list, it is forwarded on to the originally requesting peer, complete with the peer address of the node that responded. Currently, only the first response is forwarded, but we may later decide to forward all responses.
- As well as forwarding, the relay peer keeps track of the peers that report to be holding hashes (these mappings are held for 30 seconds).
- The originally requesting peer can then make a request to the relay peer for the data file(s).
- The relay peer uses the mapping to forward the request on to another peer, and then forwards the response (i.e. the data file) back to the peer that originally requested the file.
It's best that the source peer's address isn't exposed to the requesting peer. The relay peer can keep track of this mapping itself.
The only real issue with this approach is that we can't use data from ArbitraryDataFileListMessage to update our ArbitraryPeers data, because we can't distinguish between relay peers and hosting peers. But this isn't something we currently do anyway, as we have the ARBITRARY_SIGNATURES message type to take care of updating ArbitraryPeers mappings.
Setting this to false prevents new connections being made to peers that report to have the data that is needed. This is likely only useful for testing, as disabling it in production would reduce the success rate of data retrieval.
This reuses most of the code already in place in the core related to forwarding.
- A node can opt into relay mode via the "relayModeEnabled": true setting
- From this time onwards, they will ask their peers if they ever receive a file list request that they cannot serve by themselves
- Whenever a peer responds with a file list, it is forwarded on to the originally requesting peer, complete with the peer address of the node that responded
- The original peer can then make a request for the data file(s) themselves using a similar approach, specifying the IP address of the ultimate peer so that the relay node knows who to ask. This part is not implemented yet.
This makes them extremely generic, improves filenames, and makes it easier to create custom lists. It doesn't have backwards support, but the lists feature isn't working properly in core 2.1+ anyway.
It turns out that when you call SLEEP_UNTIL_MESSAGE, the AT resumes from that very same line on the next execution. The original code incorrectly assumed that it would execute from the restart position (SET_PCS).
So sleeping can be thought of as pausing one execution half way through, rather than ending it.
This caused a bug, because once the AT receives a transaction it wakes up and resumes from the SLEEP_UNTIL_MESSAGE line, which is after the refund check. Even when it loops back around again it lands on labelRedeemTxnLoop = codeByteBuffer.position(); which is again after the refund check.
For now, the simplest fix is to only sleep when listed. We could have alternatively moved the SLEEP_UNTIL_MESSAGE above GET_BLOCK_TIMESTAMP, but this would still require users to send a random transaction to the AT to trigger the refund. Given that the ATs are only "alive" for 30 minutes once the trade begins, it's simpler to just execute every block and therefore allow the refunds to happen automatically.
Also modified the directory structure of single file resources to make them consistent with multi file resources.
For multi file resources, the original folder is renamed to "data", resulting in a layout such as:
data/file1.txt
data/file2.txt
data/dir1/file3.txt
For single file resources, the file is now moved into a "data" folder, like so:
data/file.txt
This is slightly unconventional, but is appropriate within the context of QDN to keep everything consistent.
This adds support for "unconfirmable" data uploads, which will be useful for Q-Chat. It also handles cases where a transaction is orphaned and then subsequently becomes invalid.
A website must contain one of the following files in its root directory to be considered valid:
index.html
index.htm
default.html
default.htm
home.html
home.htm
This is the first page that is loaded when loading a Qortal-hosted website.
This would happen if a name fills their limit, and then additional names are followed. Alternatively it could happen if the total storage capacity reduces due to disk space being used by other apps. Chunks are deleted at random to reduce the chance of the same chunk being deleted everywhere. Data loss is possible here for transactions that don't have many peers. We'll have to see in practice how much of a problem this is, but it's better than the scenario where one content creator consumes all space on their followers' nodes, leaving no space for other names that are subsequently followed.
This is calculated by the total capacity divided by the number of names the node follows. The idea here is that a single content creator can't upload terabytes of data and consume all the space on their followers' nodes. They can only use a proportion, with equal space given to each followed name. And since the limit is dynamic, following more names reduces the allocation to existing names.
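The allocation itself is simply total capacity divided by followed-name count, so for example 100 GB shared across 20 followed names gives 5 GB per name (a sketch; field names assumed):

// Dynamic per-name limit: following more names shrinks each name's allocation
long storageCapacityPerName = totalStorageCapacity / followedNames.size();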
This discourages an incorrect file size being included with a transaction, as the system will reject it and won't even serve it to other peers.
FUTURE: we could introduce some kind of blacklist to track invalid files like this, and avoid repeated attempts to retrieve them. It is okay for now as the system will backoff after a few attempts.
This API call could get quite heavy when large amounts of files are hosted, but it's preferable to maintaining a list in the database. Ideally we need to keep the database generic so that it can be bootstrapped without interfering with the state. We can always add caching and rate limiting if needed.
Chunk hashes are now stored off chain in a metadata file. The metadata file's hash is then included in the transaction.
The main benefits of this approach are:
1. We no longer need to limit the total file size, because adding more chunks doesn't increase the transaction size.
2. This increases the chain capacity by a huge amount - a 512MB file would have previously increased the transaction size by 16kB, whereas it now requires only an additional 32 bytes.
3. We no longer need to use variable difficulty; every transaction is the same size and so the difficulty can be constant no matter how large the files are.
4. Additional metadata (such as title, description, and tags) can ultimately be stored in the metadata file, as opposed to using a separate transaction & resource.
5. There is also scope for adding hashes of individual files into the metadata file, if we ever wanted to allow single files to be requested without having to download and build the entire resource. Although this is unlikely to be available in the short term.
The only real negative is that we now have to fetch the metadata file before we know anything about the chunks for a transaction. This seems to be quite a small trade-off by comparison.
Since we're not live yet, there is no backwards support for on-chain hashes, so a new data testchain will be required. This hasn't been tested outside of unit tests yet, so there will likely be several fixes needed before it is stable.
It's not good to be moving files around in a method that should really be read only. This also adds an intentional checkAndRelocateMiscFiles() call rather than relying on a call to isDataLocal() which may be removed at any time.
This would have been caught by the max differences check anyway, but it's a good check to have in place in case we recalibrate or remove the differences check in the future.
Could be improved in the future to return different codes depending on its status (e.g. doesn't exist = 404, 102 for loading, 500 for error, etc), but 404 makes the most sense until that has been developed
- If an identifier parameter is missing or empty, it will return an unfiltered list of all possible identifiers.
- If an identifier is specified, only resources with a matching identifier will be returned.
- If default is set to true, only resources without identifiers will be returned.
Files are now keyed by signature, in the format:
data/si/gn/signature/hash
For times when there is no signature available (i.e. at the time of initial upload), files are keyed by hash, in the format:
data/_misc/ha/sh/hash
Files in the _misc folder are subsequently relocated to a path that is keyed by the resulting signature.
The end result is that chunks are now grouped on the filesystem by signature. This allows more transparency as to what is being hosted, and will also help simplify the reporting and management of local files.
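The layout can be illustrated like this (assuming base58-encoded signatures and hashes; the helper and variable names are assumptions):

String sig58 = Base58.encode(signature);
String hash58 = Base58.encode(hash);

Path filePath = Paths.get(dataPath,
        sig58.substring(0, 2),  // "si"
        sig58.substring(2, 4),  // "gn"
        sig58,                  // full signature
        hash58);                // individual file/chunk hash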
This should allow for a relatively even distribution of chunks, but there is a (currently unavoidable) risk of files with very few mirrors being deleted altogether.
Longer term, this could be improved by checking that at least one of our peers has a file before it's deleted locally.
Nodes will stop proactively storing new data when they reach 90% capacity.
A new "maxStorageCapacity" setting has been added to allow the user to optionally limit the allocated space for this node. Limits are approximate only, not exact.
publicDataEnabled - whether to store decryptable data (default true)
privateDataEnabled - whether to store data without a decryption key (default false)
This doesn't affect minting rewards; it is simply a means of reducing block candidates. There should be no noticeable difference other than hopefully less re-orgs. We can ultimately do a hard fork and increase Blockchain.minAccountLevelToMint but this allows us to test the approach in a lower risk way.
The missing data check was triggering decryptions, extractions, etc. Replaced with some code which checks for the presence of chunks on the local machine, without getting involved with any build process overhead.
- "NOT_STARTED" is now "DOWNLOADED"
- "DOWNLOADING" is now "MISSING_DATA"
- Removed "DOWNLOAD_FAILED"
Some of these could be reintroduced once the system is able to support them.
These can be used to check the current status of a resource. The different statuses are:
NOT_STARTED
DOWNLOADING
DOWNLOADED
BUILDING
READY
DOWNLOAD_FAILED
BUILD_FAILED
UNSUPPORTED
Not all statuses are returned yet. The build process needs more functionality to be able to support DOWNLOADED and DOWNLOAD_FAILED. Also, BUILDING and BUILD_FAILED are currently unable to distinguish between different resources with the same registered name, so need some attention.
Each service supports basic validation params, plus has the option for an entirely custom validation function.
Initial validation settings:
- IMAGE must be less than 10MiB
- THUMBNAIL must be less than 500KiB
- METADATA must be less than 10KiB and must contain JSON keys "title", "description", and "tags"
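A sketch of the basic size limits (the METADATA JSON-key check is omitted; the enum and method names are assumptions, not the real validation code):

enum Service {
    IMAGE(10 * 1024 * 1024L),   // 10 MiB
    THUMBNAIL(500 * 1024L),     // 500 KiB
    METADATA(10 * 1024L);       // 10 KiB (must also contain "title", "description", "tags")

    private final long maxSize;
    Service(long maxSize) { this.maxSize = maxSize; }

    boolean isValidSize(Path path) throws IOException {
        return Files.size(path) <= this.maxSize;
    }
}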
This is needed to avoid triggering a CORS preflight (which occurs when using an X-API-KEY header). The core isn't currently capable of responding to a preflight and the UI therefore blocks the entire request. See: https://stackoverflow.com/a/43881141
This allows users to set only their data path, and for the temp folder to automatically follow it. The temp folder can be moved to a custom location by setting the "tempDataPath" setting.
An API key is now _required_ for sensitive API calls that would previously have allowed local loopback authentication.
Previously, a request would have been considered authenticated if it originated from the same machine, however this creates a security issue when running third party code (particularly javascript) via the data network.
The solution is to now require an API key to authenticate sensitive API calls no matter where the request originates from.
It works as follows:
- When the core is first installed, it has no API key generated and will block sensitive calls until generated.
- A new POST /admin/apikey/generate API endpoint has been added, which can be used to generate an API key for a newly installed node. The UI will ultimately call this automatically.
- This API returns the generated key so that it can be stored by the requesting app (most likely the UI).
- From then on, the generate API requires authentication via the existing API key in order to regenerate a key. It can be used as a security measure if the existing key is compromised.
- The API key must be passed to all sensitive API endpoints from then on, even when calling it from the same local machine.
- If the core already has a legacy API key specified via the 'apiKey' setting, this will be automatically copied to the new format so that a new one doesn't need to be generated.
- The API key itself is stored in a flat file in the qortal directory (the path can be customized using the `apiKeyPath` setting). Deleting this file and restarting the core will allow a new one to be regenerated.
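The check itself reduces to something like this (a sketch, not the actual security code; the X-API-KEY header is mentioned elsewhere in this log):

boolean isApiKeyValid(HttpServletRequest request, String storedApiKey) {
    if (storedApiKey == null)
        return false; // no key generated yet - sensitive calls stay blocked

    String providedKey = request.getHeader("X-API-KEY");
    return storedApiKey.equals(providedKey); // a loopback origin alone is no longer enough
}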
The process of serving resources to a browser will likely be needed for more than just websites (e.g. it will be needed for apps too) so it makes sense to abstract it to its own class.
This should help keep the peer lookup table size down, as there is no need to locate files for transactions that existed before the most recent PUT transaction.
Built resources are deleted when either:
- The resource reaches the expiry interval specified in the builtDataExpiryInterval setting (default 30 days)
- The resource is published by a name that is in the local blacklist
Resources only exist in the reader cache once they have been viewed, to remove the loading time on subsequent views. But some may prefer to reduce this expiry time (at the expense of longer load times and more CPU), as data is held unencrypted in the cache.
We still allow it to be fetched even if it's outside of the storage policy, as the cleanup manager will delete the files very soon after, and they won't be allowed to be served to other peers due to other checks already in place.
This allows other peers to find out where they can obtain these files if we were to stop hosting them later. Or even if we continue hosting copies, it still informs the network on other locations, for better decentralization.
We don't want the network being spammed when a file isn't available by any reachable peers. This feature ensures retries are spaced out over longer timeframes. Basic logic:
- Wait 5 minutes in between failed attempts
- After 5 failed attempts (i.e. 25 mins) only try once per day from then on
- A core restart resets the counters
The stats gathered here can also be used to inform the core of when it should attempt a direct connection with a peer to obtain the data. That part isn't implemented yet.
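A minimal sketch of the retry spacing described above (the names are hypothetical; keeping the counters in memory is what makes a core restart reset them):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FileFetchBackoffSketch {

    private static final long RETRY_INTERVAL = 5 * 60 * 1000L;        // 5 minutes
    private static final long DAILY_INTERVAL = 24 * 60 * 60 * 1000L;  // 1 day
    private static final int MAX_SHORT_RETRIES = 5;

    // In-memory only, so a core restart naturally resets the counters
    private final Map<String, Integer> failureCounts = new ConcurrentHashMap<>();
    private final Map<String, Long> lastAttempts = new ConcurrentHashMap<>();

    public boolean shouldRetry(String fileHash, long now) {
        int failures = failureCounts.getOrDefault(fileHash, 0);
        long lastAttempt = lastAttempts.getOrDefault(fileHash, 0L);
        long requiredGap = (failures < MAX_SHORT_RETRIES) ? RETRY_INTERVAL : DAILY_INTERVAL;
        return now - lastAttempt >= requiredGap;
    }

    public void recordFailure(String fileHash, long now) {
        failureCounts.merge(fileHash, 1, Integer::sum);
        lastAttempts.put(fileHash, now);
    }
}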
This allows for custom list creation without the need for creating API endpoints to go along with it. This should save time now that we are using lists more.
- "APP" will allow for user-created apps and the Qortal app store
- "METADATA" will be used to supply info about apps/websites/resources, such as title, description, tags, etc
When using POST /arbitrary/{service}/{name}... it will now automatically decide which method to use (PUT/PATCH) based on a few factors:
- If there are already 10 or more layers, use PUT to reset back to a single layer
- If the next layer's patch is more than 20% of the total resource file size, use PUT
- If the next layer modifies more than 50% of the total file count, use PUT
- Otherwise, use PATCH
The PUT method causes a new base layer to be created and all previous update history for that resource becomes obsolete. The PATCH method adds a small delta layer on top of the existing layer(s).
The idea is to wipe the slate clean with a new base layer once the patches start to get demanding for the network to apply. Nodes which view the content will ultimately have build timeouts to prevent someone from deploying a resource with hundreds of complex layers for example, so this approach is there to maximize the chances of the resource being buildable.
The constants above (10 layers, 20% total size, 50% file count) will most likely need tweaking once we have some real world data.
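For reference, the decision logic above could be sketched roughly as follows (thresholds as per this commit; the method signature is illustrative only):

public class MethodDecisionSketch {

    private static final int MAX_LAYERS = 10;
    private static final double MAX_PATCH_SIZE_RATIO = 0.20;
    private static final double MAX_MODIFIED_FILES_RATIO = 0.50;

    // Returns "PUT" to reset back to a single layer, or "PATCH" to add a delta layer
    public static String chooseMethod(int existingLayerCount, long patchSize, long totalResourceSize,
                                      int modifiedFileCount, int totalFileCount) {
        if (existingLayerCount >= MAX_LAYERS)
            return "PUT"; // too many layers - wipe the slate clean
        if (totalResourceSize > 0 && (double) patchSize / totalResourceSize > MAX_PATCH_SIZE_RATIO)
            return "PUT"; // patch is too large relative to the whole resource
        if (totalFileCount > 0 && (double) modifiedFileCount / totalFileCount > MAX_MODIFIED_FILES_RATIO)
            return "PUT"; // patch touches too many files
        return "PATCH";
    }
}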
We may choose to save on CPU by not compressing individual files, so this allows the network to support that. However it is still using compression by default, to reduce file sizes.
This process could potentially be simplified if we were to modify the structure of the actual zipped data (on the writer side), but this approach is more of a "catch-all" (on the reader side) to support multiple different zip structures, giving us more flexibility. We can still choose to modify the written zip structure if we choose to, which would then cause most of this new code to be skipped.
Note: the filename of a single file is not currently retained; it is renamed to "data" as part of the packaging process. Need to decide if this is okay before we go live.
Thumbnails will be used in order to show logos/screenshots in the list of websites or other resources. Playlists will allow for media apps to group videos/audio/images into collections, e.g. albums.
Until now we have been limited to one data resource per name/service combination. This meant that each name could only have a single website, git repo, image, video, etc, and adding another would overwrite the previous data. The identifier property now allows an optional string to be supplied with each resource, therefore allowing an unlimited amount of resources per name/service combination.
Some examples of what this will allow us to do:
- Create a video library app which holds multiple videos per name
- Same as above but for photos
- Store multiple images against each name, such as an avatar, website thumbnails, video thumbnails, etc. This will be necessary for many "system level" features.
- Attach multiple websites to each name. The default website (with blank/null identifier) would remain the entry point, but other websites could be hosted essentially as subdomains, and then linked from the default site. This also provides a means to go beyond the 500MB website size limit.
Not all of these features will exist initially, but having this identifier included in the protocol layer allows them to be added at any time.
This is generated whenever a data resource cannot be built because it is missing data for at least one layer. Using a custom exception type here enables a few new features:
1. A single build process is now able to request missing data from all the layers that need it. Previously it would only request from the first missing layer and would then give up. This resulted in the user/application having to issue the build command multiple times rather than just once, until all layers had been requested.
2. GET /arbitrary/{service}/{name} will now block the response and retry in the background until the data arrives. This allows it to be used synchronously. Note: we'll need to add a timeout.
3. Loading a website via GET /site/{name} will avoid adding to the failed builds queue when a MissingDataException is thrown, which allows it to be quickly retried. The interface already auto refreshes, allowing the site to load as soon as it's available.
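A simplified sketch of the blocking behaviour in point 2, using stand-in types rather than the real loader classes (and still missing the timeout mentioned above):

public class MissingDataSketch {

    // Stand-in for the custom exception type described above
    public static class MissingDataException extends Exception {
        public MissingDataException(String message) { super(message); }
    }

    public interface ResourceLoader {
        byte[] load() throws MissingDataException;
    }

    // Keep retrying until the missing chunks arrive (a timeout still needs to be added)
    public static byte[] loadBlocking(ResourceLoader loader, long retryIntervalMs) throws InterruptedException {
        while (true) {
            try {
                return loader.load();
            } catch (MissingDataException e) {
                // The missing data has been requested in the background; wait and retry
                Thread.sleep(retryIntervalMs);
            }
        }
    }
}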
We may need to temporarily hold files for the purpose of viewing, but restrictions need to be in place to avoid these being served to peers or stored for longer than they are needed.
- If storage policy includes "FOLLOWING", only process transactions relating to the followed names.
- If storage policy is "ALL", process all transactions.
- If storage policy is "NONE" or "VIEWED", don't process or prefetch any data.
This will be used to coordinate all build processes and threads. This keeps it separate from the ArbitraryDataManager class, which was getting a bit cluttered.
This causes the build to fail on the first pass due to missing chunks, however it now fails with a message indicating that it should be retried shortly. The website loader is already set up in such a way that it will be automatically retried, during which time the loading screen is shown.
Also added code to remove the resource from the "failed builds list" once the chunks arrive, so that it is able to be rebuilt sooner than the FAILURE_TIMEOUT (currently 5 minutes).
- Don't attempt to fetch data for transactions which fall outside of the storage policy
- Delete files relating to transactions that are no longer within the scope of the storage policy
Note: some additional work needs to be done to ensure that viewed files are deleted when using a storage policy that excludes "VIEWED" content.
This means that no additional structural code is required to add new lists. The only non-generic aspect are the API endpoints - it's best to keep these specific until we have a need for user-created lists.
There's no real need to maintain support for signature mapping anymore. Using this new method means that the latest version of a site is always served via the traditional domain name, whereas using transaction signatures caused older versions to be shown.
Example settings.json configuration:
"domainMapServiceEnabled": true,
"domainMapServicePort": 80,
"domainMap": [
{
"domain": "webdemo.qortal.uk",
"name": "QortalDemo"
},
{
"domain": "www.reqorder.org",
"name": "ReQorder"
}
]
This maps ARBITRARY transactions to peer addresses, but also includes additional metadata/stats to track the success rate and reachability.
Once a node receives files for a transaction, it broadcasts this info to its peers so they can update their records.
TLDR: this allows us to locate peers that are hosting a copy of the file we need.
This ensures that only the owner of a name is able to update data associated with that name.
Note that this doesn't take into account the ability for group members to update a resource, so this will need modifying when that feature is ultimately introduced (likely after v3.0)
Note that this is unlikely to be the cause of some of the zero timestamps issue seen on a subset of nodes - there is still likely to be another problem that needs fixing.
We may not need to validate this at all now that we have the ability to validate the current layer, but I'll leave it as it could be useful for debugging. It is disabled by default so not an issue.
- The "diff type" is now specified per file, allowing for different diff methods in each modified file.
- Patches will only be created when both the before and after files are less than 100KiB in size.
- Patches are validated after creation, and if invalid it will fall back to including the entire file.
This has identified a bug where patching fails for files without trailing newline characters, which still needs to be fixed. Until then, it will fall back to including the entire file in these cases.
This limits the amount of additional space needed to the size of the compressed bootstrap (currently just under 4GB for full nodes, or 200MB for top-only nodes).
This allows the topOnly setting to be disabled and the node will automatically bootstrap to the archive version. A rebuild isn't attempted if bootstrapping is disabled, in order to reduce risk.
This should fix a longstanding issue where quitting the core before the first checkpoint (1-2 hours after first launch) causes the database to become corrupt.
Pruning is still a concept in the code, but since it relates to both archived and topOnly mode, it makes it cleaner to use "topOnly" to refer to the pruned db with no archive.
It will now attempt to wait until there are no other active transactions before starting, to avoid deadlocks. A timeout for this process is specified - generally 60 seconds - so that callers can give up or retry if something is holding a transaction open for too long. Right now we will give up in all places except for bootstrap creation, where it will keep retrying until successful.
This involved adding a feature to the test suite to include the option of using a repository located on disk rather than in memory. Also moved the bootstrap compression/extraction working directories to temporary folders.
In practice, the read success rate from a correctly archived chain with 550k blocks is currently around 99.5%, but it will be lower if starting with a chain that isn't fully synced.
This is extremely slow and shouldn't be needed in normal use cases. It currently checks that each block references the one before, but can ultimately be expanded to check more information about each block and its derived data.
- Adds support for minting accounts as well as trade bot states
- Includes automatic import of both types on node startup, and automatic export on node shutdown
- Retains legacy trade bot states in a separate "TradeBotStatesArchive.json" file, whilst keeping the current active ones in "TradeBotStates.json". This prevents states being re-imported after they have been removed, but still keeps a copy of the data in case a key is ever needed.
- Uses indentation in the JSON files for easier readability.
This was causing very recent AT states to be deleted accidentally, because we weren't rebuilding the LatestATStates table before running the query. We should add unit tests to cover this process in case there are any other undiscovered problems.
This takes all trimmed blocks (which should now be all but the last 1450 or so) and moves them into flat files. Each file contains the serialized bytes of as many blocks that can fit within the file size target of 100MiB.
As a result, the HSQLDB size drops to less than 1GB, making it much faster and easier to maintain. It also significantly reduces the total size of each full node, because the data is stored in a highly optimized way.
HSQLDB then works similarly to the way it does in pruning mode - it holds all transactions, the latest state of every AT, as well as the full AT states data and hashes for the past 1450 blocks.
Each archive file contains headers and indexes in order to quickly locate blocks. When a peer requests a block that is within the archive, the serialized bytes are sent directly without the need to go via a BlockData object. Now that there are no slow queries or data serialization processes needed, it should greatly speed up the block serving.
The /block API endpoints have been modified in such a way that they will also check and retrieve blocks from the archive when needed.
A lightweight "BlockArchive" table is needed in HSQLDB to map block heights to signatures minters and timestamps. It made more sense to keep SQL support for these basic attributes of each block. These are located in a separate table from the full blocks, in order to create a clear distinction between HSQLDB blocks and archived blocks, and also to speed up query times in the Blocks table, which is the one we are using 99% of the time.
There is currently a restriction on the /admin/orphan API endpoint to prevent orphaning beyond the threshold of the block archive.
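As a rough sketch of the serving logic described above (the helper interface is an assumption, not the real repository API):

public class BlockServingSketch {

    // Hypothetical access layer - not the real repository API
    public interface BlockSource {
        Integer getOldestUnarchivedHeight();              // blocks below this height live in the archive
        byte[] readSerializedBlockFromArchive(int height);
        byte[] serializeBlockFromDatabase(int height);
    }

    // Prefer the archive: its bytes can be sent to peers directly, with no BlockData object or slow query
    public static byte[] fetchSerializedBlock(BlockSource source, int height) {
        Integer oldestUnarchived = source.getOldestUnarchivedHeight();
        if (oldestUnarchived != null && height < oldestUnarchived)
            return source.readSerializedBlockFromArchive(height);
        return source.serializeBlockFromDatabase(height);
    }
}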
Note - the rebuildLatestAtStates() must never be used by two different classes at the same time, or AT states could be incorrectly deleted. It is okay at the moment as we don't run the AT states trimmer and pruner in the same app session. However we should probably synchronize this method so that we don't accidentally call it from two places in the future.
When switching from a full node to a pruning node, we need to delete most of the database contents. If we do this entirely as a background process, it is very slow and can interfere with syncing. However, if we take the approach of transferring only the necessary rows to a new table and then deleting the original table, this makes the process much faster. It was taking several days to delete the AT states in the background, but only a couple of minutes to copy them to a new table.
The trade off is that we have to go through a form of "reshape" when starting the app for the first time after enabling pruning mode. But given that this is an opt-in mode, I don't think it will be a problem.
Once the pruning is complete, it automatically performs a CHECKPOINT DEFRAG in order to shrink the database file size down to a fraction of what it was before.
From this point, the original background process will run, but can be dialled right down so as not to interfere with syncing.
Initially just deleting old and unused AT states, to get this table under control. I have had to delete them individually as the table can't handle complex queries due to its size.
Nodes in pruning mode will be unable to serve older blocks to peers.
Whilst we would ultimately like to drop these to 24 hours only, for now we need some headroom to allow for orphaning in the event of a problem. Orphaning currently fails if there is no ATStatesData available (which is the case for trimmed blocks). This could ultimately be solved by retaining older unique states, which is essentially what the sleeping AT feature will do.
onlineAccountSignaturesMinLifetime reduced from 720 hours to 12 hours
onlineAccountSignaturesMaxLifetime reduced from 888 hours to 24 hours
These were using up too much space in the database and so it makes sense to trim them more aggressively (assuming testing goes well). We will now stop validating online account signatures after 12 hours, which should be more than enough confirmations, and we will discard them after 24 hours.
Note: this will create some complexity once some of the network is running this code. It could cause out-of-sync nodes on old versions to start treating blocks as invalid from updated peers. It's likely not worth the complexity of a hard fork though, given that almost all nodes will be synced to the chain tip and will therefore be unaffected. And even with a hard fork, we'd still face this problem on out of date nodes.
Problem:
The "Names" table (the latest state of each name) drifts out of sync with the name-related transaction history on a subset of nodes for some unknown and seemingly difficult to find reason.
Solution:
Treat the "Names" table as a cache that can be rebuilt at any time. It now works like this:
- On node startup, rebuild the entire Names table by replaying the transaction history of all registered names. Includes registrations, updates, buys and sells.
- Add a "pre-process" stage to block/transaction processing. If the block contains a name related transaction, rebuild the Names cache for any names referenced by these transactions before validating anything.
The existing "integrity check" has been modified to just check basic attributes based on the latest transaction for a name. It will log if there are any inconsistencies found, but won't correct anything. This adds confidence that the rebuild has worked correctly.
There are also multiple unit tests to ensure that the rebuilds are coping with various different scenarios.
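A much-simplified sketch of the rebuild idea, using stand-in types rather than the real transaction and repository classes (ownership only; the real rebuild also handles name data, sales and buys):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class NamesRebuildSketch {

    public enum NameTxType { REGISTER, UPDATE, SELL, BUY }

    public static class NameTransaction {
        public NameTxType type;
        public String name;
        public String account; // registrant or buyer, depending on type
    }

    // Rebuild a name -> owner map by replaying the transaction history in block order
    public static Map<String, String> rebuildNames(List<NameTransaction> historyInBlockOrder) {
        Map<String, String> latestOwners = new HashMap<>();
        for (NameTransaction tx : historyInBlockOrder) {
            switch (tx.type) {
                case REGISTER:
                case BUY:
                    latestOwners.put(tx.name, tx.account); // ownership changes
                    break;
                case UPDATE:
                case SELL:
                    // the real rebuild also updates name data and sale state here
                    break;
            }
        }
        return latestOwners;
    }
}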
This ensures that all name-related transactions have resulted in correct entries in the Names table. A bug in the code has resulted in some nodes having missing data in their Names table. If this process finds a missing name, it will log it and add the name.
Missing names are added, but ownership issues are only logged. The known bug wasn't related to ownership, so the logging is only to alert us to any issues that may arise in the future.
In hindsight, the code could be rewritten to store all three transaction types in a single list, but this current approach has had a lot of testing, so it is best to stick with it for now.
This is necessary because it's possible (in theory) for a block to be considered invalid due to an internal failure such as an SQLException. This gives them more chances to be considered valid again. 1 hour is more than enough time for the node to find an alternate valid chain if there is one available.
This prevents a valid block candidate being discarded in favour of an invalid one. We can't actually validate a block before orphaning (because it will fail due to various reasons such as already existing transactions, an existing block with the same height, etc) so we will instead just check the signature against the list of known invalid blocks.
This should only be used if all of the following conditions are true:
a) Your node is private and not shared with others
b) Port 12391 (API port) isn't forwarded
c) You have granted access to specific IP addresses using the "apiWhitelist" setting
The node will warn on startup if this setting is used without a sensible access control whitelist.
Until now, a high weight invalid block can cause other valid, lower weight alternatives to be discarded. The solution to this problem is to track invalid blocks and quickly avoid them once discovered. This gives other valid alternative blocks the opportunity to become part of a valid chain, where they would otherwise have been discarded.
As with the block minter update, this will cause a fork when the highest weight block candidate is invalid. But it is likely that the fork would be short lived, assuming that the majority of nodes pick the valid chain.
If it has been more than 10 minutes since receiving the last valid block, but we have had at least one invalid block since then, this is indicative of a stuck chain due to no valid block candidates. In this case, we want to allow the block minter to mint an alternative candidate so that the chain can continue.
This would create a fork at the point of the invalid block, in which two chains (valid and invalid) would diverge. The valid chain could never rejoin the invalid one, however it's likely that the invalid chain would be discarded in favour of the valid one shortly after, on the assumption that the majority of nodes would have picked the valid one.
- Use the "{\"age\":30}" data to make the tests more similar to some real world data.
- Added tests to ensure that registering and orphaning works as expected.
For now, we need some headroom to allow for orphaning in the event of a problem. Orphaning currently fails if there is no ATStatesData available (which is the case for trimmed blocks). This could ultimately be solved by retaining older unique states.
Whilst not ideal, this is necessary to prevent the chain from getting stuck on future blocks due to duplicate name registrations. See Block535658.java for full details on this problem - this is simply a "catch-all" implementation of that class in order to futureproof this fix.
There is still a database inconsistency to be solved, as some nodes are failing to add a registered name to their Names table the first time around, but this will take some time. Once fixed, this commit could potentially be reverted.
Also added unit tests for both scenarios (same and different creator).
TLDR: this allows all past and future invalid blocks caused by NAME_ALREADY_REGISTERED (by the same creator) to now be valid.
This was accidentally missed out of the original code. Some pre-updated nodes on the network will be missing this index, but we can use the upcoming "auto-bootstrap" feature to get those back.
A PUT creates a new base layer meaning anything before that point is no longer needed. These files are now deleted automatically by the cleanup manager. This involved relocating a lot of the cleanup manager methods into a shared utility, so that they could be used by the arbitrary data manager. Without this, they would be fetched from the network again as soon as they were deleted.
This deletes redundant copies of data, and also converts complete files to chunks where needed. The idea being that nodes only hold chunks, since they currently are much more likely to serve a chunk to another peer than they are to serve a complete file.
It doesn't yet cleanup files that are unassociated with transactions, nor does it delete anything from the _temp folder.
This improves scalability but isn't sufficient for a long term solution. TODO: It probably makes sense to add an additional query for recent transactions only, so that they are fetched quickly.
This is needed because we want to allow brand new accounts to publish data without a fee. A similar approach to CrossChainResource.buildAtMessage(). We already require PoW on all arbitrary transactions, so no additional logic beyond this should be needed.
This adds the loadAsynchronously() method to ArbitraryDataReader, in addition to the existing loadSynchronously() method.
When requesting a website in a browser, previously the building of the resource's layers would be done synchronously in the API handler. This understandably caused many issues, so the building is now done asynchronously by a dedicated thread. A loading screen is shown in its place which auto refreshes every second until the build has completed.
It's possible that this concept will struggle in the real world if operating systems, virus scanners, etc start interfering with our file structure. Right now it is using a zero tolerance approach when checking the validity of each layer. We may choose to loosen this slightly if we encounter problems, e.g. by excluding hidden files. But for now it is best to be as strict as possible.
This decides whether to build a new state or use an existing cached state when serving a data resource. It will cache a built resource until a new transaction (i.e. layer) arrives. This drastically reduces load, and still allows for almost instant propagation of new layers.
This is used to store the transaction signature and build timestamp for each built data resource. It involved a refactor of the ArbitraryDataMetadata class to introduce a subclass for each file ("patch" and "cache"). This allows more files to be easily added later.
This defends against a missing or out-of-order transaction. If this ever fails validation, we may need to rethink the way we are requesting transactions. But in theory this shouldn't happen, given that the "last reference" field of a transaction ensures that out-of-order transactions are invalid already.
This bug was introduced now that the temp directory is contained within the data directory. Without this, it would leave it in the temp folder and then fail at a later stage.
This ensures that the temporary files are being kept with the rest of the data, rather than somewhere inappropriate such as on flash storage. It also allows the user to locate them somewhere else, such as on a dedicated drive.
This adds support for the PATCH method in addition to the existing PUT method.
Currently, a patch includes only files that have been added or modified, as well as placeholder files to indicate those that have been removed.
This is not production ready, as I am hoping to create patches on a more granular level - i.e. just the modified bytes of each file. It would also make sense to track deletions using a metadata/manifest file in a hidden folder.
It also adds early support of accessing files using a name rather than a signature or hash.
Now only skipping the HTLC redemption if the AT is finished and the balance has been redeemed by the buyer. This allows HTLCs to be refunded for ATs that have been refunded or cancelled.
Previously, if an error was returned from an Electrum server (such as "server busy") it would throw a NetworkException that would be caught outside of the server loop and cause the entire request to fail.
Instead of throwing an exception, I am now logging the error and returning null, in the same way we do for IOException and NoSuchElementException further up in the same method.
This allows the caller - most likely connectedRpc() - to move on to the next server in the list and try again.
This should fix an issue seen where a "server busy" response from a single server was essentially breaking our implementation, as we would give up altogether instead of trying another server.
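A simplified sketch of the change (the method and field names are illustrative):

import java.util.Map;

public class ElectrumErrorHandlingSketch {

    // On an error response (e.g. "server busy"), log and return null instead of throwing,
    // so the caller (e.g. connectedRpc()) can move on to the next server in the list
    public static Object extractResult(Map<String, Object> response) {
        Object error = response.get("error");
        if (error != null) {
            System.err.println("Unexpected error message from ElectrumX server: " + error);
            return null;
        }
        return response.get("result");
    }
}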
This is a workaround for an UnsupportedOperationException thrown when using X2Go, due to PERPIXEL_TRANSLUCENT translucency being unsupported in splashDialog.setBackground(). We could choose to use a different version of the splash screen with an opaque background in these cases, but it is low priority.
Updated the "localeLang" files with new keys and removed old unused keys for English, German, Dutch, Italian, Finnish, Hungarian, Russian and Chinese translations
These are the same as the /lists/blacklist/address/{address} endpoints but allow a JSON array of addresses to be specified in the request body. They currently return true if
The ResourceList class creates or updates a list for the purpose of tracking resources on the Qortal network. This can be used for local blocking, or even for curating and sharing content lists. Lists are backed by JSON files (in the lists folder) to ease sharing between nodes and users.
This first implementation allows access to an address blacklist only, but has been written in such a way that other lists can be easily added. This might be needed in the future, e.g. to blacklist a group, a poll, or some hosted data. It could also be used by community members to curate lists of favourite or problematic content, which could then be shared or even subscribed to on the chain by other users.
The inputs and outputs contain a simpler version than the ones in the raw transaction, consisting of `address`, `amount`, and `addressInWallet`. The latter of the three is to know whether the address is one that is derived from the supplied xpub master public key.
The previous criteria was to stop searching for more leaf keys as soon as we found a batch of keys with no transactions, but it seems that there are occasions when subsequent batches do actually contain transactions. The solution/workaround is to require 5 consecutive empty batches before giving up. There may be ways to improve this further by copying approaches from other BIP32 implementations, but this is a quick fix that should solve the problem for now.
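A sketch of the "5 consecutive empty batches" stopping rule (the wallet-access interface is hypothetical; only the loop structure matters here):

public class KeyScanSketch {

    public interface BatchChecker {
        // Returns the number of transactions found for the given batch of leaf keys
        int countTransactionsInBatch(int batchIndex);
    }

    private static final int REQUIRED_EMPTY_BATCHES = 5;

    // Keep scanning batches of keys until 5 consecutive batches contain no transactions
    public static int findLastUsedBatch(BatchChecker checker) {
        int consecutiveEmpty = 0;
        int batchIndex = 0;
        int lastUsedBatch = -1;
        while (consecutiveEmpty < REQUIRED_EMPTY_BATCHES) {
            if (checker.countTransactionsInBatch(batchIndex) > 0) {
                lastUsedBatch = batchIndex;
                consecutiveEmpty = 0; // a later batch had transactions after all - reset the counter
            } else {
                consecutiveEmpty++;
            }
            batchIndex++;
        }
        return lastUsedBatch;
    }
}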
This involved a small refactor of the ACCT code to expose findSecretA() in a more generic way. Bitcoin is disabled for refunding and redeeming as it uses a legacy approach that we no longer support. The {blockchain} URL parameter has also been removed from the redeem and refund APIs, because it can be obtained from the ACCT via the code hash in the AT.
The "dust" threshold is around 1 DOGE - meaning orders below this size cannot be refunded or redeemed. The simplest solution is to prevent orders of this size being placed to begin with.
Recently we have stopped including the version number in the zip and exe files uploaded to github, as this allows us to use the "https://github.com/Qortal/*/releases/latest/download/*" redirect for all 3 files when linking from the qortal.org website. Previously, it could only be done for the JAR since this was the only file that didn't contain a version number. This avoids having to update the website every time we distribute a new release.
Note that this currently requires that the Qortal-x.y.z.exe file created by AdvancedInstaller is renamed to qortal.exe before running ./build-release.sh. If you forget to rename, the script will exit with a warning that the file couldn't be found.
This ensures that nodes are storing unreadable files, outside of the context of Qortal. For public data, the decryption keys themselves are on-chain, included in the "secret" field of arbitrary transactions. When we introduce the concept of private data, we can simply exclude the secret key from the transaction so that only the owner can decrypt it.
When encrypting the file, I have added the 16 byte initialization vector as a prefix to the cyphertext, and it is then automatically extracted back out when decrypting. This gives us the option to encrypt more than one file with the same key, if we ever need it. Right now, we are using a unique key per file, so it's not actually needed, but it's good to have support.
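A self-contained sketch of the IV-prefix scheme using the standard JCE API (an illustration, not the exact implementation):

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class IvPrefixSketch {

    // Encrypt and prepend the 16-byte IV to the ciphertext
    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }

    // Extract the IV back out of the first 16 bytes, then decrypt the remainder
    public static byte[] decrypt(SecretKey key, byte[] ivAndCiphertext) throws Exception {
        byte[] iv = Arrays.copyOfRange(ivAndCiphertext, 0, 16);
        byte[] ciphertext = Arrays.copyOfRange(ivAndCiphertext, 16, ivAndCiphertext.length);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        return cipher.doFinal(ciphertext);
    }
}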
Adds "name", "method", "secret", and "compression" properties. These are the foundations needed in order to handle updates, encryption, and name registration. Compression has been added so that we have the option of switching to different algorithms whilst maintaining support for existing transactions.
These combine some Qora services (SERVICE_NAME_STORAGE, SERVICE_BLOG_POST, and SERVICE_BLOG_COMMENT) with existing Qortal services (SERVICE_AUTO_UPDATE), and some new additions (SERVICE_ARBITRARY_DATA, SERVICE_WEBSITE, and SERVICE_GIT_REPOSITORY)
Previously we would ask all connected peers for the file itself, but this caused the network to be swamped when multiple peers responded with the same file.
This new approach instead asks all connected peers to send back a list of hashes for all files they have relating to a transaction signature. The requesting node then uses these lists to make separate requests for each missing file.
This is a quick solution to rebuild directory structures with missing files. This whole area of the code needs some reworking, as serving the site from a temporary folder is not a robust long term solution.
Domain names can be mapped to arbitrary transaction signatures via the node's settings, and then served over port 80 or 443. This allows Qortal hosted sites to be accessible via a traditional domain name.
Example configuration to map two domains:
"domainMapServiceEnabled": true,
"domainMapServicePort": 80,
"domainMap": [
{
"domain": "example.com",
"signature": "tEsw4kUn4ZJfPha7CotUL6BHkFPs79BwKXdY6yrf28YTpDn4KSY6ZKX3nwZCkqDF9RyXbgaVnB1rTEExY3h9CQA"
},
{
"domain": "demo.qortal.org",
"signature": "ZdBWWPMhR7AZwSu5xZm89mQEacekqkNfAimSCqFP6rQGKaGnXR9G4SWYpY5awFGfhmNBWzvRnXkWZKCsj6EMgc8"
}
]
Each domain needs to be pointed to the Qortal data node via an A record or CNAME. You can add redundant nodes by adding multiple A records for the same domain (this is known as DNS Failover).
Note that running a webserver on port 80 (or anything less than 1024) requires running the data node as root. There are workarounds to this, such as disabling privileged ports, or using a reverse proxy. I will investigate this more as time goes on, but this is okay for a proof of concept.
It's now capable of syncing chunks as well as complete files. This isn't production ready as it currently requests/receives the same file from multiple peers at once, which slows down the sync and wastes lots of bandwidth. Ideally we would find an appropriate peer first and then sync the file from them.
This introduces the hash58 property, which stores the base58 hash of the file passed in at initialization. It leaves digest() and digest58() for when we need to compute a new hash from the file itself.
Until now it wasn't possible to set up a chain with zero transaction fees due to a hardcoded zero check in Payment.isValid(), and a divide by zero error in Transaction.hasMinimumFeePerByte()
- Adds support for files up to 500MiB per transaction (at 2MiB chunk sizes). Previously, the max data size was 4000 bytes.
- Adds a nonce, giving us the option to remove the transaction fees altogether on the data chain.
These features become enabled in version 5 of arbitrary transactions.
This is probably our number one reliability issue at the moment, and has been a problem for a very long time.
The existing CHECKPOINT_LOCK would prevent new connections being created when we are checkpointing or about to checkpoint. However, in many cases we obtain the db connection early on and then don't perform any queries until later. An example would be in synchronization, where the connection is obtained at the start of the process and then retained throughout the sync round. My suspicion is that we were encountering this series of events:
1. Open connection to database
2. Call maybeCheckpoint() and confirm there are no active transactions
3. An existing connection starts a new transaction
4. Checkpointing is performed, but deadlocks due to the in-progress transaction
This potential fix includes preparedStatement.execute() in the CHECKPOINT_LOCK, to block any new transactions being started when we are locked for checkpointing. It is fairly high risk so we need to build some confidence in this before releasing it.
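A minimal sketch of the locking idea, using simplified, hypothetical names (statement execution takes the read side of the lock; checkpointing takes the write side):

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CheckpointLockSketch {

    private static final ReentrantReadWriteLock CHECKPOINT_LOCK = new ReentrantReadWriteLock(true);

    // Statement execution holds the read side, so it cannot overlap with a checkpoint
    public static boolean execute(PreparedStatement preparedStatement) throws SQLException {
        CHECKPOINT_LOCK.readLock().lock();
        try {
            return preparedStatement.execute();
        } finally {
            CHECKPOINT_LOCK.readLock().unlock();
        }
    }

    // Checkpointing holds the write side, so no new transactions can start while it runs
    public static void maybeCheckpoint(Runnable checkpointAction) {
        CHECKPOINT_LOCK.writeLock().lock();
        try {
            checkpointAction.run();
        } finally {
            CHECKPOINT_LOCK.writeLock().unlock();
        }
    }
}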
This is probably the most efficient way to process the data on the fly, but it's still not very scalable. A better approach would be to pre-process the HTML when building the file structure, and then serve them completely statically (i.e. using a standard webserver rather than via application memory). But it makes sense to keep it this way for development and maybe early beta testing.
Renamed to zh_SC to better distinguish between zh_SC (Simplified Chinese) and zh_TC (Traditional Chinese).
Rephrased some of the wording for better understanding.
This can be used to preview a site before signing a transaction and announcing it to the network. The response will need reworking to return JSON (along with most of the other new APIs)
This fixes an NPE when trying to send a file that doesn't exist. It also removes the caching, which we can add again later if it turns out to be needed.
Now that we aren't disconnecting mid sync, we can get away with more frequent disconnections. This brings the average connection length to around 9 mins.
Connection limits are defined in settings (denominated in seconds):
"minPeerConnectionTime": 120,
"maxPeerConnectionTime": 3600
Peers will disconnect after a randomly chosen amount of time between the min and the max. The default range is 2 minutes to 1 hour, as above.
This encourages nodes to connect to a wider range of peers across the course of each day, rather than staying connected to an "island" of peers for an extended period of time. Hopefully this will reduce the amount of parallel chains that can form due to permanently connected clusters of peers.
We may find that we need to reduce the defaults to get optimal results, however it is best to do this incrementally, with the option for reducing further via each node's settings. Being too aggressive here could cause some of the earlier problems (e.g. 20% missing blocks minted) to reappear. We can re-evaluate this in the next version. Note that if defaults are reduced significantly, we may need to add code to prevent this from happening mid-sync. With higher defaults, this is less of an issue.
Thanks to @szisti for supplying some base code for this commit, and also to @CWDSYSTEMS for diagnosing the original problem.
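As a rough illustration of the randomized connection lifetime described above (the helper class is hypothetical; the settings are denominated in seconds as noted):

import java.util.concurrent.ThreadLocalRandom;

public class PeerConnectionTimeSketch {

    // Pick a per-connection lifetime, in milliseconds, between the min and max settings
    public static long chooseConnectionLifetime(int minPeerConnectionTime, int maxPeerConnectionTime) {
        long minMs = minPeerConnectionTime * 1000L;
        long maxMs = maxPeerConnectionTime * 1000L;
        return ThreadLocalRandom.current().nextLong(minMs, maxMs + 1);
    }
}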
This indicates the size of the re-org/rollback that was required in order to perform this sync operation. It is only included if it's greater than 0 blocks.
This deletes a file referenced by a user supplied SHA256 digest string (which we will use as the file's "ID" in the Qortal data system). In the future this could be extended to delete all associated chunks, but first we need to build out the data chain so we have a way to look up chunks associated with a file hash.
We must be careful not to add files to the resources folder accidentally, given that a bundled log4j2.properties file is used in preference to the user's copy. By keeping this out of gitignore, it becomes more obvious if a file is added, and it can then be caught and removed before a release.
These changes were necessary for the scripts to function in my build environment (Mac OSX). This may give errors when running in other environments, but we can deal with that in future, when others need to use these scripts.
Including an older JAR in the source code only leads to confusion, because a zip of the source code is automatically included with each github release. From what I can see, there is no need for it to be here. Added to .gitignore so we have the option of keeping a local copy.
When sending or requesting more than 1000 online accounts, peers would be disconnected with an EOF or connection reset error due to an intentional null response. This response has been removed and it will instead now only send the first 1000 accounts, which prevents the disconnections from occurring.
In theory, these accounts should be in a different order on each node, so the 1000 limit should still result in a fairly even propagation of accounts. However, we may want to consider increasing this limit, to maximise the propagation speed.
Thanks to szisti for tracking this one down.
This loops through all sell orders and attempts to redeem the LTC from each one. It will return true if at least one was redeemed, or false if none are available to be redeemed. Details are logged to the log.txt file rather than returned in the API response.
The previous query was taking almost half a second to run each time, whereas the new version runs 10-100x faster. This was the main bottleneck with block serialization and should therefore allow for much faster syncing once rolled out to the network. Tested several thousand blocks to ensure that the results returned by the new query match those returned by the old one.
A couple of classes were using the bitcoinj alternative, which is twice as slow. This mostly affected the API on port 12392, as byte arrays were automatically encoded as base58 strings via the Base58TypeAdapter / JAXB package-info.
This is probably more validation than is actually needed, but given that we use the same field for LTC and QORT receiving addresses in the database, it is best to be extra careful.
This returns serialized, base58 encoded data for the entire block. It is the same format as the data sent between nodes when synchronizing, with base58 encoding added so that it can be outputted cleanly in the API response.
This is the equivalent of the refund API but can be used by the seller to redeem LTC from a stuck transaction, by supplying the associated AT address. There are no lockTime requirements; it is redeemable as soon as the buyer has redeemed the QORT and sent the secret to the seller.
This is designed to be called by the buyer, and will force refund their P2SH transaction associated with the supplied AT. The tradebot responsible for this trade must be present in the user's db for this API to access the necessary data. It must be called after lockTime has passed, which for LTC is currently 60 minutes from the time that the P2SH was funded. Trying to refund before this time will result in a FOREIGN_BLOCKCHAIN_TOO_SOON error.
This can currently be used by either the buyer or the seller, but it requires the seller's trade private key & receiving address to be specified, along with the buyer's secret. Currently hardcoded to LITECOIN but I will aim to make this generic as we start adding more coins.
This makes them more compatible with the output of the /crosschain/tradebot and /crosschain/trade/{ataddress} APIs which is likely where most people will be retrieving data from, rather than the database itself.
This is similar to the BTC equivalent, but removes secretB as an input parameter. It also signs and broadcasts the transaction, because the wallet isn't needed for this. These transactions have to be signed using the tradePrivateKey from the tradebot data rather than any of the wallet's keys.
There are two other LitecoinACCTv1 APIs still to implement, but I will leave these until they are needed.
This tightens up the decision making by adding two requirements:
1. The peer must return the same number of summaries to the ones requested.
2. The peer must return a summary that matches its latest reported signature.
This ensures we are always making sync decisions based on accurate data, and removes peers that are currently mid re-org. This is probably more validation than is actually necessary, but it's best to be really thorough here so it is as optimized as possible.
We have gone backwards and forwards on this one a lot recently, but now that stability has returned, it is best to tighten this up. Previously it was loosened to help reduce network load, but that is no longer a problem. With this stricter approach, it should prevent a node ending up in an incomplete state after syncing, which is the main cause of the shorter re-orgs we are seeing.
The existing HSQL export/import (PERFORM EXPORT SCRIPT and PERFORM IMPORT SCRIPT) have been replaced with a custom JSON import and export. Whilst this is less generic, it has some significant advantages:
- When exporting data, it is now able to combine the exported data with any data that already exists in the backup file. This prevents a backup after a bootstrap from overwriting data from before the bootstrap, and removes the need for all of the "archive" files that we currently create.
- Adds support for partial imports, and updates. Previously an import would fail if any of the data being imported already existed in the db. It will now add new rows and update existing ones.
- The format and contents of the exported trade bot data now matches the output of the /crosschain/tradebot API.
- Data is retrieved without the need for a database lock, and therefore the export process is much faster and less invasive. This should prevent the lockups and other problems seen when using the trade portal.
For now, there are a couple of trade-offs to using this new approach:
- The minting key import/export has been temporarily removed until there is more time to transition it to this new format.
- Existing .script backups can no longer be imported using versions higher than 1.5.1.
Both of these can be solved by temporarily running version 1.5.1, performing the necessary imports/exports, then returning to the latest version. Longer term the minting keys export/import will be reimplemented using the JSON format.
This controls whether to allow connections with peers below minPeerVersion.
If true, we won't sync with them but they can still sync with us, and will show in the peers list. This is the default, which allows older nodes to continue functioning, but prevents them from interfering with the sync behaviour of updated nodes.
If false, sync will be blocked both ways, and they will not appear in the peers list at all.
The script will fetch a set of blocks and then backtest the specified blockTimings settings (target, deviation, and power) against those real life blocks. This allows configurations to be fine tuned to tighten up block times, and to adjust the timestamp variance between levels.
Usage:
block-timings.sh <startheight> <count> [target] [deviation] [power]
startheight: a block height, preferably within the untrimmed range, to avoid data gaps
count: the number of blocks to request and analyse after the start height. Default: 100
target: the target block time in milliseconds. Originates from blockchain.json. Default: 60000
deviation: the allowed block time deviation in milliseconds. Originates from blockchain.json. Default: 30000
power: used when transforming key distance to a time offset. Originates from blockchain.json. Default: 0.2
Initially set to 10 when used by the /crosschain/price/{blockchain} API, so that the price is based on the last 10 trades rather than every trade that has ever taken place.
Block.calcKeyDistance() cannot be called on some trimmed blocks, because the minter level is unable to be inferred in some cases. This generally hasn't been an issue, but the new Block.logDebugInfo() method is invoking it for all blocks. For now I am adding defensiveness to the debug method, but longer term we might want to add defensiveness to Block.calcKeyDistance() itself, if we ever encounter this issue again. I will leave it alone for now, to reduce risk.
# Conflicts:
# pom.xml
# src/main/java/org/qortal/controller/Synchronizer.java
Removed all fast sync code from Controller.syncToPeerChain(), so it is now the same as `master`.
This includes updating AdoptOpenJDK to version 11.0.11.9, because 11.0.6.10 is no longer recommended or available in their archive. It also looks like I am using a newer version of AdvancedInstaller itself.
Again, this wouldn't have affected anything in 1.5.0 or before, but it will become more significant if we switch to same-length chain weight comparisons.
This gives an insight into the contents of each chain when doing a re-org. To enable this logging, add the following to log4j2.properties:
logger.block.name = org.qortal.block.Block
logger.block.level = debug
This solves a common problem that is mostly seen when starting a node that has been switched off for some time, or when starting from a bootstrap. In these cases, it can be difficult to get synced to the latest block if you are starting from a small fork. This is because it required that the node was brought up to date via a single peer, and there wasn't much room for error if it failed to retrieve a block a couple of times. This generally caused the blocks to be thrown away and it would try the same process over and over.
The solution is to apply new blocks if the most recently received block is newer than our current latest block. This gets the node back on to the main fork where it can then sync using the regular applyNewBlocks() method.
If a peer fails to reply with all requested blocks, we will now only apply the blocks we have received so far if at least one of them is recent. This should prevent or greatly reduce the scenario where our chain is taken from a recent to an outdated state due to only partially syncing with a peer. It is best to keep our chain "recent" if possible, as this ensures that the peer selection code always runs, and therefore avoids unnecessarily syncing to a random peer on an inferior chain.
Now that we are spending a lot of time to carefully select a peer to sync with, it makes sense to retry a couple more times before giving up and starting the peer selection process all over again.
In these comparisons it's easy to incorrectly identify a bad chain, as we aren't comparing the same number of blocks. It's quite common for one peer to fail to return all blocks and be marked as an inferior chain, yet we have other "good" peers on that exact same chain. In those cases we would have stopped talking to the good peers again until they received another block.
Instead of complicating the logic and keeping track of the various good chain tip signatures, it is simpler to just remove the inferior peers from this round of syncing, and re-test them in the next round, in case they are in fact superior or equal.
The iterator was removing the peer from the "peersSharingCommonBlock" array, when it should have been removing it from the "peers" array. The result was that the bad peer would end up in the final list of good peers, and we could then sync with it when we shouldn't have.
The existing system was unable to resume without manual intervention if it stalled for more than 7.5 minutes. After this time, no peers would have "recent" blocks, which are prerequisites for synchronization and minting.
This new code monitors for such a situation, and enters "recovery mode" if there are no peers with recent blocks for at least 10 minutes. It also requires that there is at least one connected peer, to reduce false positives due to bad network connectivity.
Once in recovery mode, peers with no recent blocks are added back into the pool of available peers to sync with, and restrictions on minting are lifted. This should allow for peers to collaborate to bring the chain back to a "recent" block height. Once we have a peer with a recent block, the node will exit recovery mode and sync as normal.
Previously, lifting minting restrictions could have increased the risk of extra forks, however it is much less risky now that nodes no longer mint multiple blocks in a row.
In all cases, minBlockchainPeers is used, so a minimum number of connected peers is required for syncing and minting in recovery mode, too.
This could drastically reduce the number of forks being created. Currently, if a node is having problems syncing, it will continue adding to its own fork, which adds confusion to the network. With this new idea, the node would be prevented from adding to its own chain and is instead forced to wait until it has retrieved the next block from the network.
We will need to test this on the testnet very carefully. My worry is that, because all minters submit blocks, it could create a situation where the first block is submitted by everyone, and the second block is submitted by no-one, until a different candidate for the first block has been obtained from a peer. This may not be a problem at all, and could actually improve stability in a huge way, but at the same time it has the potential to introduce serious network problems if we are not careful.
It now has a new parameter - keepArchivedCopy - which when set to true will cause it to rename an existing TradeBotStates.script to TradeBotStates-archive-<timestamp>.script before creating a new backup. This should avoid keys being lost if a new backup is taken after replacing the db.
In a future version we can improve this in such a way that it combines existing and new backups into a single file. This is just a "quick fix" to increase the chances of keys being recoverable after accidentally bootstrapping without a backup.
In version 1.4.6, we would still sync with a peer even if we only received a partial number of the requested blocks/summaries. This could create a new problem, because the BlockMinter would often try and make up the difference by minting a new fork of up to 5 blocks in quick succession. This could have added to network confusion.
Longer term we may want to adjust the BlockMinter code to prevent this from taking place altogether, but in the short term I will revert this change from 1.4.6 until we have a better way.
Added a new step, which attempts to filter out peers that are on inferior chains, by comparing them against each other and our chain. The basic logic is as follows:
1. Take the list of peers that we'd previously have chosen from randomly.
2. Figure out our common block with each of those peers (if it's within 240 blocks), using cached data if possible.
3. Remove peers with no common block.
4. Find the earliest common block, and compare all peers with that common block against each other (and against our chain) using the chain weight method. This involves fetching (up to 200) summaries from each peer after the common block, and (up to 200) summaries from our own chain after the common block.
5. If our chain was superior, remove all peers with this common block, then move up to the next common block (in ascending order), and repeat from step 4.
6. If our chain was inferior, remove any peers with lower weights, then remove all peers with higher common blocks.
7. We end up with a reduced list of peers that should in theory be on superior or equal chains to us. Pick one of those at random and sync to it.
This is a high risk feature - we don't yet know the impact on network load. Nor do we know whether it will cause issues due to prioritising longer chains, since the chain weight algorithm currently prefers them.
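A rough sketch of steps 2-7 above, assuming hypothetical helpers findCommonBlockHeight(), fetchOurSummaries(), fetchPeerSummaries() and calculateChainWeight(); these are stand-ins, not the actual Synchronizer API:
    // Sketch only - helper names are illustrative
    Map<Peer, Integer> commonBlocks = new HashMap<>();
    for (Peer peer : peers) {
        Integer commonHeight = findCommonBlockHeight(peer, 240);   // steps 2 & 3, cached where possible
        if (commonHeight != null)
            commonBlocks.put(peer, commonHeight);
    }

    while (!commonBlocks.isEmpty()) {
        final int earliest = Collections.min(commonBlocks.values());                          // step 4
        final BigInteger ourWeight = calculateChainWeight(fetchOurSummaries(earliest, 200));

        Map<Peer, BigInteger> peerWeights = new HashMap<>();
        for (Map.Entry<Peer, Integer> entry : commonBlocks.entrySet())
            if (entry.getValue() == earliest)
                peerWeights.put(entry.getKey(), calculateChainWeight(fetchPeerSummaries(entry.getKey(), earliest, 200)));

        boolean ourChainInferior = peerWeights.values().stream().anyMatch(weight -> weight.compareTo(ourWeight) > 0);

        if (!ourChainInferior) {
            // Step 5: our chain wins at this common block - drop these peers, try the next common block
            commonBlocks.keySet().removeAll(peerWeights.keySet());
            continue;
        }

        // Step 6: drop lower-weighted peers at this common block, plus all peers with higher common blocks
        commonBlocks.keySet().removeIf(peer ->
                commonBlocks.get(peer) > earliest
                || (peerWeights.containsKey(peer) && peerWeights.get(peer).compareTo(ourWeight) < 0));
        break;
    }
    // Step 7: commonBlocks.keySet() is the reduced peer list - pick one at random and sync to it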
The script will fetch a set of blocks and then backtest the specified blockTimings settings (target, deviation, and power) against those real life blocks. This allows configurations to be fine tuned to tighten up block times, and to adjust the timestamp variance between levels.
Usage:
block-timings.sh <startheight> <count> [target] [deviation] [power]
startheight: a block height, preferably within the untrimmed range, to avoid data gaps
count: the number of blocks to request and analyse after the start height. Default: 100
target: the target block time in milliseconds. Originates from blockchain.json. Default: 60000
deviation: the allowed block time deviation in milliseconds. Originates from blockchain.json. Default: 30000
power: used when transforming key distance to a time offset. Originates from blockchain.json. Default: 0.2
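For example, a purely illustrative run backtesting the mainnet defaults over 1000 blocks (the start height here is hypothetical, not a recommendation):
block-timings.sh 860000 1000 60000 30000 0.2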
Main differences / improvements:
- Only request a single batch of signatures upfront, instead of the entire peer's chain. There is no point in requesting them all, as the later ones may not be valid by the time we have finished requesting all the blocks before them.
- If we fail to fetch a block, clear any queued signatures that are in memory and re-fetch signatures after the last block received. This allows us to cope with peers that re-org whilst we are syncing with them.
- If we can't find any more block signatures, or the peer fails to respond to a block, apply our progress anyway. This should reduce wasted work and network congestion, and helps cope with larger peer re-orgs.
- The retry mechanism remains in place, but instead of fetching the same incorrect block over and over, it will attempt to locate a new block signature each time, as described above. To help reduce code complexity, block signature requests are no longer retried.
Currently set to 1, as serialization of the BlocksMessage data on mainnet is too slow to use this for any significant number of blocks right now. Hopefully we can find a way to optimise this process, which will allow us to use this for multiple block syncing.
Until then, sticking with single blocks should still be enough to help solve the network congestion and re-orgs we are seeing, because it gives us the ability to request the next block based on the previous block's signature, which was unavailable using GET_BLOCK. This removes the requirement to fetch all block signatures upfront, and therefore it shouldn't matter if the peer does a partial re-org whilst a node is syncing to it.
If fast syncing is enabled in the settings (by default it's disabled) AND the peer is running at least v1.5.0, then it will route through to a new method which fetches multiple blocks at a time, in a very similar way to Synchronizer.syncToPeerChain().
If fast syncing is disabled in the settings, or we are communicating with a peer on an older version, it will continue to sync blocks individually.
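Schematically, the routing decision looks something like the sketch below; the settings accessor, version check and method names are assumptions, not the actual Synchronizer code:
    // Sketch only - names are illustrative
    private SynchronizationResult syncToPeer(Peer peer, BlockData commonBlockData) throws DataException {
        boolean fastSyncEnabled = Settings.getInstance().isFastSyncEnabled();   // disabled by default
        boolean peerSupportsBatches = peer.isAtLeastVersion("1.5.0");           // schematic version check

        if (fastSyncEnabled && peerSupportsBatches)
            return applyNewBlocksInBatches(peer, commonBlockData);   // multiple blocks per request
        return applyNewBlocksOneByOne(peer, commonBlockData);        // legacy single-block syncing
    }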
When communicating with a peer that is running at least version 1.5.0, it will now sync multiple blocks at once in Synchronizer.syncToPeerChain(). This allows us to bypass the single block syncing (and retry mechanism), which has proven to be unviable when there are multiple active forks with several blocks in each chain.
For peers below v1.5.0, the logic should remain unaffected and it will continue to sync blocks individually.
Until now, we required a perfect success rate when syncing with a peer via Synchronizer.syncToPeerChain(). Blocks were requested individually, but the node would give up and lose all progress if a single request failed. In practice, this happened very regularly, and it was difficult to succeed when there were a large number of blocks (e.g. 20+) that needed to be requested.
This commit adds two retry mechanisms, causing each of the two request types (block sigs and blocks) to retry 3 times before giving up, potentially avoiding a lot of wasted work. The number of retries is configurable in the MAXIMUM_RETRIES constant, which we could move to settings at some point if this feature proves useful.
The original issue seemed to result in a few side effects:
1. Nodes would spend a large amount of time requesting blocks from peers, only to throw it all away afterwards. This potentially added to network congestion, as nodes were using unnecessary network time to unproductively serve peers.
2. A large number of sync attempts were failing, particularly when a fork emerged with a significant number of divergent blocks (20+). This issue reduced the ability of nodes to sync to the correct chain while they still had time to do so. With every block that passed, it became more and more difficult to switch to the correct chain. Eventually, the correct chain would become TOO_DIVERGENT, at which point there is no way to automatically switch without manual intervention. I hope that this retry mechanism will increase the chances of nodes automatically moving onto the right chain quickly, avoiding the need for a user to intervene.
3. The POST /admin/forcesync API was unlikely to succeed when the peer's chain had started to diverge from the user's chain. This should increase the success rate.
Also included in this commit is a MAXIMUM_BLOCK_SIGNATURES_PER_REQUEST constant. This limits the number of block sigs requested in each batch (default 200). Without this, we are unable to increase MAXIMUM_COMMON_DELTA because it can try and request thousands of block sigs at once, which unsurprisingly doesn't succeed.
This bug often prevented the correct amount of block signatures (and blocks) from being requested from a peer, when trying to sync to it.
It could result in quite serious consequences, as it would trigger orphaning back to the common block without first requesting all of the necessary blocks from the peer's chain. Rather than applying a complete copy of the peer's chain, it could orphan back to the common block and then only apply a few blocks beyond that, leaving the node in an unexpected state, potentially hundreds of blocks behind the peer's current height, which it then has to try and obtain from other peers.
When there are forks present, this could result in it hopping from chain to chain, each time being unable to fully synchronise with the peer. Given that we currently discard our chain if it is deemed that our latest block isn't "recent", it is very important that nodes are brought up to the latest block when synchronising with a peer, to avoid constantly triggering discards.
The severity of this bug increased when there was a large disparity between the peer's latest block and the common block height, and prevented us from being able to increase MAXIMUM_COMMON_DELTA.
These are simpler than the level 1+2 tests; they only test that the rewards are correct for each level post-shareBinFix. I don't think we need multiple instances of the pre-shareBinFix or block orphaning tests. There are a few subtle differences between each test, such as the online status of Bob, in order to make the tests slightly more comprehensive.
1. Assign 3 minters (one founder, one level 1, one level 2)
2. Mint a block after the shareBinFix, ensuring that level 1 and 2 are being rewarded evenly from the same share bin.
3. Orphan the block and ensure the rewards are reversed.
4. Orphan two more blocks, each time checking that the balances are being reduced in accordance with the pre-shareBinFix mapping.
As importing a transaction requires blockchain lock, all the network threads
can be used up blocking for that lock, especially if Synchronizer is active.
So we simply discard incoming TRANSACTION messages if we can't immediately
obtain the blockchain lock. Some other peer will probably attempt to
send the transaction again soon anyway.
Plus we swap transaction lists after connection handshake.
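A minimal sketch of the non-blocking import, assuming a Controller-held ReentrantLock; the accessor name and surrounding plumbing are assumptions:
    // Sketch only - lock accessor is illustrative
    private void onNetworkTransactionMessage(Peer peer, Message message) {
        ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();

        // Don't tie up a networking thread waiting for the lock; some other peer
        // will almost certainly rebroadcast this transaction soon anyway.
        if (!blockchainLock.tryLock())
            return;

        try {
            // validate and import the transaction as unconfirmed here
        } finally {
            blockchainLock.unlock();
        }
    }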
Post trigger, account levels will map correctly to share bins, subtracting 1 to account for the 0th element of the shareBinsByLevel array.
Pre-trigger, the legacy mapping will remain in effect.
Post trigger, this change will use all 128 bytes of previous block's signature when
calculating/validating next block's "minter" signature (itself the first 64 bytes of a block signature).
Prior to trigger, current behaviour is to only use first 64 bytes of previous block's
signature, which doesn't encompass the transactions signature.
New block sig code should help reduce forking and help improve transactional
security.
Added "newBlockSigHeight" to blockchain.json but initially set to block 999999
pending decision on when to merge, auto-update, go-live, etc.
Symptoms of a CHECKPOINT-related DB deadlock:
On Controller thread:
"Controller" #20 prio=5 os_prio=31 cpu=1577665.56ms elapsed=17666.97s allocated=475G defined_classes=412 tid=0x00007fe99f97b000 nid=0x1644b waiting on condition [0x0000700009a21000]
java.lang.Thread.State: WAITING (parking)
at jdk.internal.misc.Unsafe.park(java.base@14.0.2/Native Method)
- parking to wait for <0x0000000602f2a6f8> (a org.hsqldb.lib.CountUpDownLatch$Sync)
[...some more lines...]
[this next line is the best indicator: ]
at org.qortal.repository.hsqldb.HSQLDBRepository.checkpoint(HSQLDBRepository.java:385)
at org.qortal.repository.RepositoryManager.checkpoint(RepositoryManager.java:51)
at org.qortal.controller.Controller.run(Controller.java:544)
Other threads stuck at:
- parking to wait for <0x00000007ff09f0b0> (a org.hsqldb.lib.CountUpDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(java.base@14.0.2/LockSupport.java:211)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.base@14.0.2/AbstractQueuedSynchronizer.java:714)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(java.base@14.0.2/AbstractQueuedSynchronizer.java:1046)
at org.hsqldb.lib.CountUpDownLatch.await(Unknown Source)
at org.hsqldb.Session.executeCompiledStatement(Unknown Source)
Could have affected:
Controller.deleteExpiredTransactions()
Network.getConnectablePeer()
Network.opportunisticMergePeers()
Network.prunePeers()
Symptoms:
2021-02-12 16:46:06 WARN NetworkProcessor:152 - [1556] exception while trying to produce task
java.lang.NullPointerException: null
at org.qortal.repository.hsqldb.HSQLDBRepository.<init>(HSQLDBRepository.java:92) ~[qortal.jar:1.4.1]
at org.qortal.repository.hsqldb.HSQLDBRepositoryFactory.tryRepository(HSQLDBRepositoryFactory.java:97) ~[qortal.jar:1.4.1]
at org.qortal.repository.RepositoryManager.tryRepository(RepositoryManager.java:33) ~[qortal.jar:1.4.1]
at org.qortal.network.Network.getConnectablePeer(Network.java:525) ~[qortal.jar:1.4.1]
This is a serious change as it affects many callers.
Controller.onNetworkTransactionMessage()
-- should be OK to block as it's only one EPC thread
TransactionsResource.processTransaction()
-- should be OK to block as it's only one Jetty thread
AND has its own 30s timeout wrapper anyway
Implementations of AcctTradeBot.progress()
e.g. BitcoinACCTv1TradeBot.progress() & LitecoinACCTv1TradeBot.progress()
TradeBot.updatePresence()
-- these are called via BlockMinter/Synchronizer
when blockchain lock is already held, so they are unaffected
AcctTradeBot.createTrade() and AcctTradeBot.startResponse()
-- these are called via API
and previously would only perform non-blocking blockchainLock.try()
but now perform blocking blockchainLock.lock()
thus preventing NO_BLOCKCHAIN_LOCK when creating/responding to trades
especially after a (wasted) MESSAGE PoW compute
but with potential downside that API response might be delayed
e.g. by a very slow sync round
Future work could look into removing the need for blockchain lock
when calling Transaction.importAsUnconfirmed().
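A schematic before/after of that locking change in the API-driven trade-bot paths; error handling and names are illustrative only:
    // Previously: opportunistic attempt, could fail straight away with NO_BLOCKCHAIN_LOCK
    if (!blockchainLock.tryLock())
        throw new ApiException(ApiError.NO_BLOCKCHAIN_LOCK);   // illustrative error handling

    // Now: block until the lock is free, accepting that the API response may be delayed
    blockchainLock.lock();
    try {
        // build and import the trade MESSAGE / DEPLOY_AT transaction here
    } finally {
        blockchainLock.unlock();
    }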
This table was previously defined using the TEMPORARY keyword as the rows were used as a cache/index to speed up AT trimming SQL statements.
However, the table definition was lacking the "ON COMMIT PRESERVE ROWS" clause, and possibly the "GLOBAL" keyword, which caused the table contents to be emptied immediately after being filled, when repository.saveChanges() was called (i.e. "COMMIT").
This caused latest AT states to be trimmed in error.
AtRepositoryTests.testGetLatestATStatePostTrimming also fixed so that it fails with the previous, broken code.
According to Bitcoin source, CheckFinalTx() in validation.cpp ~line 223,
we need to make sure median blocktime has passed P2SH refund transaction's
nLockTime.
Previously we were erroneously checking that median blocktime was in the past.
This should fix issues where refunding P2SH results in a "non-final" error
from the ElectrumX server network.
PRESENCE transactions were previously validated using Bob's trade key (in address form).
But as PRESENCE transactions are already emitted by Alice, her trade key is also used
(if present in trade data by virtue of AT being locked to Alice).
Similarly, Alice's trade-bot won't even try to build PRESENCE transactions if her
trade key isn't publicly visible to other peers, i.e. after AT is locked to Alice.
This aids matching PRESENCE to corresponding trade offers for use in UI.
Also tighten up visibility of some fields in CrossChainOfferSummary and
PresenceInfo to private.
PresenceInfo.address should map to CrossChainOfferSummary.qortalCreatorTradeAddress
which is "AT creator's ephemeral trading key-pair represented as Qortal address"
Add caching of transactions fetched via ElectrumX to reduce network load and speed up API response.
Fix handling ElectrumX servers that don't want to supply verbose transaction JSON.
Hide lots of data in BitcoinyTransaction that isn't needed by current API users.
Previous fixes for "transaction rollback: serialization failure" when updating trim heights
in commits 16397852 and 58ed7205 had the right idea but were broken due to being synchronized
on different objects.
this.repository.trimHeightsLock would be a new Object() for each repository connection/session
and so not actually synchronize concurrent updates.
Implicit saveChanges()/COMMIT is still needed.
Fix is to use a repository-wide object for synchronization - in this case the repositoryFactory
object as held by RepositoryManager.
Added test to cover.
Also reduced DB trim height read to one call at start of thread for both trimming threads.
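A minimal sketch of the fix, with hypothetical helper names; the key point is synchronizing on one object shared by every repository session:
    // Sketch only - accessor and helper names are illustrative
    Object trimHeightsLock = RepositoryManager.getRepositoryFactory();   // repository-wide, unlike a per-session field

    synchronized (trimHeightsLock) {
        int currentTrimHeight = readAtTrimHeight();          // stand-in helper
        if (newTrimHeight > currentTrimHeight) {
            writeAtTrimHeight(newTrimHeight);                // stand-in helper
            repository.saveChanges();                        // implicit COMMIT still needed
        }
    }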
Under certain conditions, e.g. non-existent database files, the repository would be created
and then immediately be re-created.
Not only was this unnecessary, but HSQLDBDatabaseUpdates would attempt to export the node-local
data twice, which would cause an error due to existing .script files.
The fix is three-pronged:
1. Don't immediately rebuild the repository if it's only just been built
2. Don't export the empty node-local data if repository has only just been built
3. Don't export the node-local data if it's empty
This involves a database reshape, but before this happens the node-local
data is exported to local files, giving the user the option to use a
bootstrap file instead of waiting.
Add support for API key security, where X-API-KEY header must match apiKey from settings
apiKey in settings is null by default at this point, for backwards compatibility.
In the future, the Windows installer could generate a UUID for apiKey.
apiKey in settings needs to be at least 8 characters.
API calls in the documentation engine are now marked with an open/closed padlock
to show where API key might be required.
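A minimal sketch of the header check; the settings accessor is an assumption and the real filter wiring may differ:
    // Sketch only
    private boolean isApiKeyValid(HttpServletRequest request) {
        String configuredKey = Settings.getInstance().getApiKey();   // null by default for now

        // No key configured: keep the previous open behaviour for backwards compatibility
        if (configuredKey == null)
            return true;

        String suppliedKey = request.getHeader("X-API-KEY");
        return configuredKey.equals(suppliedKey);
    }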
Checkpointing interval is 1 hour by default, changeable in settings via
"repositoryCheckpointInterval"
plus corresponding "showCheckpointNotifications" SysTray flags (off by default).
Added entries to SysTray_en i18n properties, and converted SysTray_ru to ISO-8859-1.
Instead of searching from block 0, we now keep a record of
base trim height in the DB itself.
Also, we no longer trim the latest AT state for non-finished ATs
in case they are in a deep sleep and we need their state for when
they awaken.
Symptoms are:
* db/blockchain.log is pretty much exactly 50MB - the checkpoint-triggering size.
* Loads of threads are stuck waiting for HSQLDB's CountUpDownLatch$Sync.await()
* Synchronizer, or some other thread, possibly orphaning blocks.
The cause seems to be method A, which has a repository session,
calls EventBus.INSTANCE.notify() and one of the event listeners
then obtains their own repository session to do repository 'work'.
In the meantime, the HSQLDB log has reached 50MB, triggering auto-checkpoint.
HSQLDB attempts to CHECKPOINT, but waits for existing transactions
to complete, and also blocks starting new transactions.
Thus, one of the event listeners is blocked when they try to obtain
a new repository session, but HSQLDB never performs CHECKPOINT
because the event notifier (method A) still has an unfinished
transaction - hence deadlock.
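The pattern, roughly, as an illustration (event and accessor names are stand-ins, not actual code):
    // Illustration only
    try (Repository repository = RepositoryManager.getRepository()) {   // method A's session
        // ...repository work leaves an open, unfinished transaction...

        EventBus.INSTANCE.notify(new SomeEvent());
        // A listener now asks for its OWN repository session. If HSQLDB has meanwhile
        // decided to CHECKPOINT (log reached 50MB), it blocks new transactions until
        // existing ones finish - but method A's transaction can't finish until
        // notify() returns. Hence the deadlock.
    }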
Drop created_when column from ATStates as it never changes
and can be fetched from ATs table.
This takes about 50s on a fast machine.
Correspondingly rebuild height-based index on ATStates.
This takes about 3 minutes on a fast machine.
Modify AT-related repository methods and callers.
Aggressively remove 'old' (> 2 weeks) actual AT
state binary data, leaving only the hash in DB
(for syncing purposes). Seems to keep up with
syncing from another node on localhost.
Additional benefit is to speed up syncing if node has trade-bot entries,
as a batch of blocks processed within 30s will have the same HTLC responses
as neither Bitcoin nor Litecoin blocks are that fast. Once synced, the next
Qortal block (~60s) should pick up any new changes. Generally users will be
synced when using trade-bot anyway.
Also moved bitcoiny.getAddressTransactions(p2shAddress)
into BitcoinyHTLC.findHtlcSecret() so only called if secret,
or lack thereof, is not cached.
Added tests to cover caching.
Several cross-chain API calls moved into separate classes,
although most of the URLs remain roughly the same to provide
backwards compatibility.
API /crosschain/at/build moved into /crosschain/BitcoinACCTv1
Converted DELETE /crosschain/tradeoffer to be ACCT-agnostic.
Changes to ACCT interface, etc. to support above.
Changes applied to other crosschain API calls to make them
independent of Bitcoin/Litecoin.
Corrections to fee calculations and usage in BitcoinACCTv1.
Added new LitecoinACCTv1 trade-bot, using LitecoinACCTv1.
Some minor typo corrections, rename of secretHash to hashOfSecret.
Some more Bitcoin-specific fields deprecated, but values duplicated
from newly-named fields for now.
Lower default fee (10sats/byte) for Litecoin spending transactions.
(Not P2SH fees which are 1000sats).
Changed ApiError INSUFFICIENT_BALANCE HTTP status from 422 to 402
as 422 isn't supported by Jetty?
CrossChainTradeSummary.btcAmount deprecated, use: foreignAmount
Modified pom.xml to generate package-info.java files for classes
inside org.qortal.api.model.** subdirectories.
API support for Litecoin wallet balance and sending LTC.
TradeBotCreateRequest rejigged to use blockchain-agnostic
field names, e.g. bitcoinAmount now foreignAmount,
and added foreignBlockchain field.
The massive API CrossChainResource class has been split into:
CrossChainAtResource: for building TRADE/REDEEM/CANCEL messages
(OFFER missing?)
CrossChainBitcoinResource: for Bitcoin wallet balance/spend
CrossChainLitecoinResource: ditto for Litecoin
CrossChainHtlcResource: for Bitcoiny-HTLC actions like:
deriving P2SH address
checking HTLC status
eventually: building refund/redeem transactions
CrossChainResource: for creating/cancelling/listing trade offers.
CrossChainTradeBotResource: for creating/cancelling trade-bot
entries, including responding to trade offers.
---
Other general trading changes:
TradeBot states are now specific to each individual trade-bot,
e.g. BitcoinACCTv1TradeBot or LitecoinACCTv1TradeBot, etc.
TradeBot states now a combination of int & String, instead of
enums due to above.
Extra columns added to DB TradeBotStates to store
blockchain, which ACCT in use, etc.
---
UNTESTED at this point!
Extracted AcctMode from BitcoinACCTv1.Mode as the values are
common to both Bitcoin/Litecoin ACCTs.
Added test apps for deploy, cancel, trade and redeem of LitecoinACCTv1.
Added altcoinj library as Maven dependency.
Added new Litecoin subclass of Bitcoiny,
with mainnet and testnet ElectrumX server lists.
Added litecoinNet settings variable and getter.
Added LitecoinTests.
Most tests work but testFindHtlcSecret()
needs a redeemed HTLC on chain (not yet done).
Added litecoinNet to some test settings files
in resources.
Added Litecoin BuildHTLC, Refund test apps.
Added SendLTC app as Electrum-LTC seems a bit flaky?
So far managed to build HTLC P2SH, fund it and then
refund it!
---
As Bitcoin and Litecoin are both subclasses of Bitcoiny,
could unify some test apps with added Bitcoin/Litecoin
switch as first arg?
Bitcoin/Litecoin common aspects extracted in a "Bitcoiny" common class.
So:
Bitcoin (was BTC) extends Bitcoiny
Litecoin (future code) will also extend Bitcoiny
ElectrumX is now a BitcoinyBlockchainProvider
to allow easier future replacement and also tidier integration.
BTCP2SH is now BitcoinyHTLC as they are generic hash time-locked contracts,
probably Bitcoin/Litecoin agnostic.
BTCACCT is now BitcoinACCTv1, allowing for v2+ and also LitecoinACCTv1, etc.
BitcoinTransaction is now BitcoinyTransaction
as they are pretty much the same in Litecoin.
BitcoinException is now a more generic ForeignBlockchainException.
---
Bitcoiny subclasses instantiate a new BitcoinyBlockchainProvider
when creating their singleton instance. They pass relevant network
details to the BBP, like server lists, genesis block hash, etc.
Bitcoiny.WalletAwareUTXOProvider now only has the one key search mode
that is equivalent to the old REQUEST_MORE_IF_ANY_SPENT.
Tests tidied up.
---
Still to do:
Modifying TradeBot to handle multiple types of ACCTs,
like BitcoinACCTv2, LitecoinACCTv1...
Modifying API to support multiple types of ACCTs.
Actually add Litecoin support.
Build new ACCT without needing P2SH-B if possible.
There's still an existing issue where log entries like this appear:
Unable to trim old online accounts signatures in repository
which is actually caused by:
integrity constraint violation: unique constraint or index violation; SYS_PK_10092 table: BLOCKS
which seems to be a bug in the version of HSQLDB we use.
(Tested using synced-from-scratch DB).
It's not clear what the actual problem is at this point.
It might be possible to switch to v2.5.1 if our recent HSQLDB-related
commits have fixed/worked-around the OOM issues.
Move the inner method from BlockChain to Controller.
Remove blockchain lock as it's not needed because it's not an
HSQLDB "serialization failure" but constraint violation.
Trimming old online accounts signatures limited to batches of 1440
rows to reduce CPU and memory load.
Added separate method to determine status of P2SH transactions,
returning UNFUNDED, FUNDING_IN_PROGRESS, REDEEMED, etc.
Added code to trade-bot to increase robustness. Lots more
changes including unified state change/logging, checking
for existing MESSAGEs, etc.
Added missing websocket methods to silence log noise.
Trade-bot now called per block during synchronization,
instead of per batch, to pick up edge cases where some
potential trade-bot transitions were missed, resulting
in failed trades.
Corresponding changes in Controller, such as notifying
event bus of new block in same thread (thus blocking)
instead of using executor.
Added slightly more robust common block determination
to Synchronizer.
Refactored code in BTC class to use new BitcoinException
rather than simply returning null, with added sub-classes
allowing differentiation between network issues or fund
issues.
Changed BTC.buildSpend to try harder to find UTXOs to
address false "insufficient funds" issues.
Repository change to add index on MessageTransactions
for quicker look-up of trade-related messages.
Reduced reliance on bitcoinj library in BTCP2SH.
Reworked ElectrumX to better detect errors rather than
continuously try more servers to no avail.
Also added genesis block check in case of servers on
different Bitcoin networks.
Now tries to extract upstream bitcoind error codes
and pass those up to caller via exceptions.
Updated list of testnet servers.
MemoryPoW now detects thread interrupt and exits fast.
Moved some non-generic transaction-related repository
methods to their own subclass. For example:
moved TransactionRepository.getMessagesByRecipient
to MessageRepository.getMessagesByParticipants
Updated and added more tests.
Was "run.sh" but renamed to "start.sh" to better complement "stop.sh".
"run.sh" is now a symbolic link to "start.sh"
Reworked Java version check to remove dependency on "bc" tool which
seems not to be installed on some Ubuntu distributions?
Removed -XX:NativeMemoryTracking flag from JVM args.
Fixed incorrect comment regarding java.net.preferIpV4Stack.
Fixed typo in comment.
Requires node shutdown, lots of time (10s of minutes), spare storage space.
Called via: java -cp qortal.jar org.qortal.RepositoryMaintenance
Not (yet) for general consumption.
Added Qortal-side HSQLDB PreparedStatement cache, hashed
by SQL query string, to reduce re-preparing statements.
(HSQLDB actually does the work in avoiding re-preparing
by comparing its own query-to-statement cache map, but we
need to keep an 'open' statement on our side for this to
happen).
Support added for batched INSERT/UPDATE SQL statements to
update many rows in one call.
Several specific repository calls, e.g. modifyMintedBlockCount
or modifyAssetBalance, now have batch versions that allow
many rows to be updated in one call.
In Block, when distributing block rewards, although we still
build a map of balance changes to apply after all calculations,
this map is now handed off wholesale to the repository to
apply in one (or two) queries, instead of a repository call
per account. The balanceChanges map is now keyed by account
address, as opposed to actual Account.
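A minimal sketch of how such a wholesale hand-off can be applied with a JDBC batch; the table and column names, and the assetId value, are illustrative assumptions:
    // Sketch only - schema names are illustrative
    String sql = "UPDATE AccountBalances SET balance = balance + ? WHERE account = ? AND asset_id = ?";

    try (PreparedStatement preparedStatement = connection.prepareStatement(sql)) {
        for (Map.Entry<String, Long> change : balanceChanges.entrySet()) {
            preparedStatement.setLong(1, change.getValue());     // balance delta
            preparedStatement.setString(2, change.getKey());     // account address
            preparedStatement.setLong(3, 0L);                    // assumed QORT assetId
            preparedStatement.addBatch();
        }
        preparedStatement.executeBatch();                        // one round-trip instead of one per account
    }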
Also in Block, we try to cache the fetched online reward-shares
(typically in Block.isValid et al) to avoid re-fetching them
later when calculating block rewards.
In addition, actually fetching online reward-shares is no longer
done index-by-index, but the whole array of indexes is passed
wholesale to the repository which then returns the corresponding
reward-shares as a list.
In Block.increaseAccountLevels, blocks minted counts are also
updated in one single repository call, rather than one
repository call per account.
When distributing Block rewards to legacy QORA holders,
all necessary info is fetched from the repository in one hit
instead of two-phases of: 1. fetching eligible QORA holders,
and 2. fetching extra data for that QORA holder as needed.
In addition, updated QORT_FROM_QORA asset balances are done
via one batch repository call, rather than per update.
Symptoms include this in logs:
Unexpected zero effective minter level for reward-share %s - using 1 instead!
This occurs when Synchronizer compares two sub-chains from a common block,
and one of the blocks is signed by a reward-share key that has
subsequently been cancelled.
Although this is catered for, excessive log-spam is emitted.
So in addition to demoting the log level from WARN to DEBUG,
more code has been added to try harder to find the actual data needed,
thus preventing the logging in the first place.
New repository transaction search method added to support above,
along with corresponding tests.
ApplyUpdate is the 2nd-stage of the auto-update system, called
after core has downloaded the update.
As old versions of the Windows launcher EXE select a 'client'
JVM mode, heap memory could be limited to only 256MB.
Until users upgrade via Windows installer, which replaces the EXE
with 'server' JVM mode baked-in, then a work-around is to
pass -XX:MaxRAMFraction=4 to the new JVM in order to emulate
heap size in 'server' JVM mode.
Added "minimumTimestamp" param to same API call to allow fetching results for scenarios like:
* completed trades since midnight
* completed trades within last 24 hours
Added corresponding tests for above API call, including checking call response times.
Also improved BTC.WalletAwareUTXOProvider to derive more keys itself
instead of throwing and relying on caller to do the work.
Added benefit of cleaning up caller code and being more efficient.
Needed because not all receiving/change addresses were being picked up.
Also: renamed trade bot field/column "receiving_public_key_hash"
to "receiving_account_info" as Alice's trade bot uses it to
store Alice's Qortal address, not PKH.
Added some extra simplistic repository calls to support above,
like BlockRepository.getTimestampFromHeight,
ATRepository.getCreatorPublicKey(atAddress)
Bitcoin receive address no longer stored in AT but dealt with by trade-bot.
This allows 'Bob' to have his BTC sent anywhere he likes when redeeming P2SH-A
thus saving a step, typically incurred by UI. DB shape change due to this.
Similarly, AT code has been updated to expect a Qortal receiving address when
Alice sends MESSAGE to redeem AT.
This means both trade-bot entries (Alice/Bob) can be safely wiped once trade completes.
Some terms were confusing like "trade recipient" which actually referred to
Alice, and so have been unified as "trade partner" so as not to be confused with
(say) "recipient address"
The MESSAGEs sent from Alice to Bob, from Bob to AT and from Alice to AT have been
given more useful names: 'offer', 'trade' and 'redeem'. There is also a cancel
MESSAGE sent from Bob to AT to cancel AT before trading occurs.
Some API calls have been renamed in light of above.
AT's 'mode' has been expanded from simply OFFER/TRADE to:
OFFERING, TRADING, REFUNDED, REDEEMED, CANCELLED
Tests updated, but MORE TESTING REQUIRED BEFORE RELEASE
Various issues in Jetty v9.4.22 (and some later versions too)
cause websockets to use up all available threads.
Bumped Jetty to v9.4.29 to resolve some of these issues.
Changed some Qortal-side websocket code to minimize
locking on websocket notifiers. Websocket messages now
sent async, although the returned Futures are discarded,
as it's up to the remote end to consume fast enough.
Changed Controller to only request a SysTray update before
synchronization if there's a chance node might change height.
Similarly, Controller only requests SysTray update after
synchronization if chain tip has actually changed.
Both of the above together should reduce the number of
messages sent out via the admin status websockets.
Previous version fetched all the blocks from previous 'timestamp'
to current height, checking each transaction. (very slow)
New implementation leverages repository to do the heavy lifting.
Could potentially benefit from some DB indexes in the future?
Added unit test to cover.
For this commit, the included .aip file, and qortal.jar, match
what was used to produce the installer for release v1.2.2.
In a future commit, maybe remove qortal.jar as it is only included
here to illustrate current location in build tree.
Updates to .aip file could be, and maybe even should be, committed.
This build toolchain uses AdvancedInstaller v16 or better but
may require an (expensive) enterprise licence. It is possible
to obtain an 'open source'-use free licence from AdvancedInstaller
by contacting them directly. However this may result in restricted
functionality with AdvancedInstaller and some installer features,
e.g. multi-language support, may have to be removed.
Bitcoin main-net ElectrumX server list added to ElectrumX class,
albeit commented out at this point until it is decided that trade-bot
is ready for production use. (Simply remove the leading //s)
More comments and documentation has been added to TradeBot class
to further describe the actions taken.
It is important to note that:
Bitcoin wallet access is required by trade-bot
and so:
A Bitcoin WALLET PRIVATE KEY is stored in the database by trade-bot
and hence, if you use trade-bot:
DO NOT DISTRIBUTE YOUR DB FILES TO ANYONE ELSE!
Furthermore it should be obvious that this functionality is provided on
a 'best effort", not guaranteed, basis, therefore:
YOUR FUNDS ARE AT RISK!
If you are unsure about any aspect, cannot afford to lose your funds,
or cannot accept that unexpected outcomes may occur, then DO NOT USE.
To use trade-bot on Bitcoin TESTNET, add this to your settings JSON file:
"bitcoinNet": "TEST3",
See Settings.java line 100, and BTC class for more info.
bitcoinj now uses ElectrumX as an UTXO provider in order to keep track
of coins in BIP32 deterministic wallet.
Trade responder (Alice) needs to pass a BIP32 extended private key to API
so trade-bot can create unattended spends.
Both Alice and Bob can find their final funds in accounts using the
ephemeral 'tradePrivateKey' from trade-bot state data.
Most cross-chain API calls are now only allowed from localhost.
Most Bitcoin fees pegged at 0.00001000 BTC.
More work needed to handle refunds in case of trade failures.
(See XXX comment tags in TradeBot.java)
Qortal AT now includes suggested tradeTimeout again as a constant so trade partner/recipient can use that to calculate a suitable lockTimeA. CODE_HASH changed!
Renamed some secret_hash to hash_of_secret.
Changed TradeBotStates.trade_state back to TINYINT and adjusted values in TradeBotData.State enum to suit.
Added lockTimeA to TradeBotData & repository.
Added JAXB-only extra representations of Bitcoin PKHs as addresses.
Fixed incorrect expected length in BTCACCT.extractOfferMessageData().
CrossChainTradeData.refundTimeout now only present in TRADE mode.
Added BTC.pkhToAddress().
Added initial TradeBot.handleAliceWaitingForP2shA().
Enforce only one TradeBot thread running using 'activeFlag' atomic boolean.
Replace incorrect SHA256 with HASH160 for hashOfSecretA in TradeBot.startResponse().
Controller now calls TradeBot.onChainTipChange() inside thread
started by Controller.onNewBlock(), instead of blocking
Controller.setChainTip().
DB TradeBotStates has trade_foreign_public_key changed to VARBINARY(33)
as Bitcoin pubkeys aren't uniformly 32 bytes!
Also, trade_state changed from TINYINT to SMALLINT to cover enum value range.
TradeBot.createTrade() incorrectly used Crypto.digest() to create hash-of-secret
instead of Crypto.hash160(). Also corrected tradeState to
BOB_WAITING_FOR_AT_CONFIRM. Also added missing fee calculation.
Added missing repository.saveChanges() to TradeBot methods.
Added balance check to API POST /crosschain/tradebot before passing
request to TradeBot.createTrade(), which also ensures there's a
usable account last-reference too.
BTC.getBalance() now returns Long instead of Coin.
BTC.FORMAT.format(Coin) changed to BTC.format(Coin or long).
Added BTC.deriveP2shAddress(byte[] redeemScriptBytes).
Added GET_MESSAGE_LENGTH_FROM_TX_IN_A
and PUT_PARTIAL_MESSAGE_FROM_TX_IN_A_INTO_B.
Replaced AT-1.3.4 with version including bug-fix for off-by-one
data address bounds checking.
Moved long-from-bytes method to BitTwiddling class.
Renamed some methods to make it more obvious they work with
little/big endian data.
Also added Named/DaemonThreadFactory classes.
Network EPC now uses NamedThreadFactory for easier debugging.
Added settings field "networkPoWComputePoolSize", default 2, which
seems to work with both low-power ARM boards and high-power desktops.
If a node accepts a connection from an inbound peer
then remote peer will send RESPONSE first
and local node would previously change handshaking state
to COMPLETED while computing their own RESPONSE.
This meant that the local node would sometimes also start
sending post-handshake messages to the remote peer,
e.g. TRANSACTION_SIGNATURES.
Remote peer is only expecting a RESPONSE message, so would
close connection.
So we introduce an extra handshaking state "RESPONDING" for use
by local node while they compute RESPONSE in a separate thread.
Once the RESPONSE has been sent, local node moves to COMPLETED
state and calls onHandshakeCompleted() as per usual.
Note that the code path when connecting outbound to a remote peer
is not changed, and the RESPONDING state is not used.
Also in this commit:
Network.onPeerReady now bypasses call to onMessage and instead
calls onHandshakingMessage() directly to avoid race condition
where peer's handshake status could change between
onPeerReady's caller and onMessage() calling peer.getHandshakeStatus()
NodeStatus constructor now fills in fields, which themselves are now 'final'.
NodeStatus also includes numberOfConnections and height as per systray.
AdminResource.status() unified with websocket version.
Incorrect column names when saving a group ban.
Missing column in LeaveGroupTransactions.
More stringent validity checks in group-kick, group-ban and remove-group-admin.
Added loads more tests to cover group actions.
Requires entries 'sslKeystorePathname' and 'sslKeystorePassword'
in settings.json.
With SSL enabled, API will auto-detect HTTP or HTTPs on the same port.
Included tools/build-keystore.sh to help build keystore from
Let's Encrypt certificates.
Renamed GET /blocks/minters to /blocks/signers
Renamed GET /blocks/minter/{address} to /blocks/signer/{address}
Changed corresponding repository methods and data classes.
Controller.onBlockMinted() now .onNewBlock(BlockData)
which saves having to fetch from repository.
Controller.onNewBlock also takes care of updating Controller's
cached chain tip, requesting SysTray refresh, broadcasting
new tip info to peers and notifying websockets.
BlockMinter and Controller.actuallySynchronize updated
to use unified .onNewBlock.
BlocksWebsocket also returns blocks on demand, given either
integer block height or base58 block signature.
Added support to return ApiError via websockets.
Unified Transaction.importAsUnconfirmed() and Controller.onNetworkTransactionMessage()
to both call Controller.onNewTransaction().
Modified Controller.onNewTransaction() to only send transaction signature to
other peers, instead of full transaction. Peers can request full transaction if they
don't have it.
Controller.onNewTransaction() also calls ChatNotifier, which in turn
notifies websocket handlers about new CHAT transactions.
Added jetty websocket dependency to pom.xml
Any reward leftover from distributing to legacy QORA holders is reallocated to either:
founders if any online
or
account-level-based reward candidates, if no founders online
We should get pretty close to 100% block reward distribution, barring rounding artifacts.
More documentation and tests.
Removed BlockChain's founderShare as it is calculated in Block on a per-block basis instead.
Now we sum generic block reward + transaction fees before performing
distribution only once.
Added Map to collate account-balance changes during block reward
distribution so the final changes can be applied in one batch,
reducing DB load.
Some other optimizations like a faster ExpandedAccount.getShareBin().
Passes test EXCEPT RewardTests.testLegacyQoraReward(), pending decision
on how to reallocate 'unspent' block reward.
No more bitcoinj peer-group stalls, or slow startups,
or downloading tons of block headers, or checkpoint files.
Now we use ElectrumX protocol to query info from random servers.
Also:
BTC.hash160 callers now use Crypto.hash160 instead.
Added BitTwiddling.fromLEBytes() returns int.
Unit tests seem OK, but needs complete testnet ACCT walkthrough.
Old Qora v1 message types removed.
Message type values changed.
Network handshaking reworked to fix multiple-connections issue.
Instead of using some random peerID, we now use proper keypairs and a challenge-response handshake to prevent doppelgangers/ID-theft.
This results in simpler handshaking code as we don't have to perform some arcane doppelganger resolution.
Handshaking still uses proof-of-work for challenge-response, but switched to newer MemoryPoW.
API call GET /peers no longer has 'buildTimestamp' field, but does now have 'nodeId' field.
Network no longer has a whole raft of getXXXpeers() due to simplified handshaking.
Quite a few method calls changed to simply Network.getHandshakedPeers(), which is also faster.
Previously GET /chats/active/{address} would only return an active group chat
entry where 'address' was a member AND there was an existing CHAT
transaction with the same tx_group_id (and no recipient).
Now the response contains entries for ALL groups where 'address' is a member,
regardless of any existing CHAT transactions, omitting the 'timestamp' entry
if there are none.
CREATE_GROUP, ISSUE_ASSET, REGISTER_NAME and UPDATE_NAME transactions affected.
The code to actually generate 'reduced' name was called inside isValid() and
relied on setting the corresponding transaction data object field so that it would
be saved by isValid()'s caller. Although this worked, it wasn't a very clean
solution.
Now the 'reduced' name is generated by transaction data object's constructors so
it is always present.
Also removed name/group/asset reduceName(String) methods as they were all the
same single-line call to Unicode.sanitize().
Group owner now derived from CREATE_GROUP transaction creator's public key.
Added 'reduced' group name to GroupData, with corresponding change to DB.
Renamed GroupData.getIsOpen() to simply isOpen().
Tidied up CreateGroupTransactionData, adding 'reduced' group name.
Renamed getIsOpen() to simply isOpen().
Added code to generate the reduced group name when building genesis block.
Added Group.MIN_NAME_SIZE of 3.
DB tables changed to add reduced_group_name where appropriate,
removing owner where necessary.
Added GroupRepository.reducedGroupNameExists(String).
Fixed up test blockchain configs in src/test/resources/test-chain-v2*.json.
This allows on-chain messages to a group, including NO_GROUP / groupID zero.
No-recipient messages cannot have an amount - where would it go?
Changed MESSAGE serialization layout to add boolean indicating
whether recipient is present.
Changed MESSAGE serialization layout so assetID is after amount,
and only present if amount is non-zero.
Changed DB table structures to cover above.
Added unit tests to cover above.
Owner now derived from issuer's public key.
Maximum asset name length reduced to 40 characters.
Repository table changes.
"owner" removed from test blockchain configs and "issuerPublicKey" used instead
where applicable.
Some getters in the form of "getIs___()" renamed to simply "is____()".
Changes include:
* Allowing renaming
* Tracking last-updated timestamps
* More stringent Unicode processing
* Way more unit tests
* Max name length reduction to 40 chars
Note: HSQLDB repository table changes
Controller no longer starts up BTC support during main startup.
This does mean that BTC startup is deferred until first BTC-related
action, and that the first BTC-related action will take much longer
to complete.
Added tests to cover startup/shutdown.
This also fixes splash logo stuck on-screen and broken Controller
shutdown when using REGTEST bitcoin network AND there is no
local regtest bitcoin server running.
REGISTER_NAME has an "owner" field which can be different from the actual
registrant (transaction creator's public key, used for signing transaction).
This allowed people to register names to be owned by someone else, thus breaking
the whole "one name per account" aspect.
So now "owner" is removed from REGISTER_NAME, and the actual owner address is
derived from transaction creator's public key, as you would expect.
Similarly, UPDATE_NAME has a corresponding "newOwner" field which has been removed.
In addition, UPDATE_NAME now allows users to change their registered name using a new
"newName" field.
Various changes made to DB, Name class, etc. to accommodate above, along with some minor
bug-fixes and comment improvements/corrections.
Needs new unit tests to cover both new functionality and old!
Always add group 0 info to output of API call GET /chats/active/{address}.
No groupName entry as it's "no group" or "group-less" or "not group related".
Timestamp also might be omitted if no message found.
Fix output of POST /chats/compute so it doesn't include zeroed 64-byte signature.
Renamed GET /chats/search to /chats/messages.
Added GET /chats/active/{address} to return lists of group chats
and direct chats involving {address}, where a chat message exists.
Change CHAT API call GET /chat/search to better support the two
main scenarios of:
group-based chatting: supply txGroupId only
private chatting: supply 2 'involving' addresses only
Added some DB indexes to cater for above.
GET /chat/search now returns specialized ChatMessage objects
instead of ChatTransactions. This is to reduce unnecessary fetching
of data from repository, and onward sending to API client.
Previously Controller would loop through the transaction signatures,
discard those already known, and then request the full transaction
via peer.getResponse(). This would tie up a networking thread for some
time and also potentially cause repository deadlocks, although the latter
could have been fixed another way.
However, the code after peer.getResponse() was identical to the code
processing an incoming TRANSACTION message. Now instead of requesting
and waiting for then processing each transaction, Controller simply
sends the peer a GET_TRANSACTION for each unknown transaction signature.
As the peer responds with corresponding TRANSACTION messages, these can
be processed individually with shorter period of locking.
When using fixed NTP offset, e.g. via "testNtpOffset" in settings.json,
Controller calls NTP.shutdownNow() which throws an NPE because
NTP.instanceExecutor is null.
Collated all development changes to DB so now we build
initial DB structure directly with final layout.
i.e. no ALTER TABLE, etc.
Reordered HSQLDB 'CREATE TYPE' statements into alphabetical order
for easier maintainability.
Replaced TIMESTAMP WITH TIME ZONE with simple BIGINT ("EpochMillis").
Timezone conversion is now a presentation task, rather than having
pretty values in database.
Removed associated conversion methods, like toOffsetDateTime(),
fromOffsetDateTime() and getZonedTimestampMilli().
Renamed some DB columns to make them more obviously timestamps, like:
Names.registered is now Names.registered_when.
Removed IFNULL(balance, 0) from HSQLDBAccountRepository as balances
are never null, or actually never 0 either.
Added more tests to increase API call, and hence repository, coverage.
Removed unused "milestone block" from Transactions.
In some cases, a freshly cancelled reward-share could still have
an associated signed timestamp. Block.mint() failed to spot this
and used an incorrect "online account" index when building the
to-be-minted block.
Block.mint() now checks that AccountRepository.getRewardShareIndex()
doesn't return null, i.e. indicating that the associated reward-share
for that "online account" no longer exists.
In turn, AccountRepository.getRewardShareIndex() didn't fulfill its
contract of returning null when the passed public key wasn't present
in the repository. So this method has been corrected also.
AccountRepository.rewardShareExists(byte[] publicKey) : boolean added.
BlockMinter had another bug where it didn't check the return from
Block.remint() for null properly. This has been fixed.
BlockMinter now has additional logging, with cool-off to prevent log
spam, for situations where minting could not happen.
Unit test (DisagreementTests) added to cover cancelled reward-share
case above. BlockMinter testing support slightly modified to help.
Brought more into line with isValidUnconfirmed().
No need to update creator's lastReference under new last-ref scheme.
Correspondingly, no need to acquire blockchain lock or repository
shenanigans in getUnconfirmedTransactions() and getInvalidTransactions()
for the same reason.
getInvalidTransactions() seems to be unused and may well be cleaned up
in a future commit.
Change code of the form (assetId aspect not shown):
account.setConfirmedBalance( account.getConfirmedBalance() + amount )
to:
account.modifyAssetBalance( amount )
Also tidied "0 - value" to use unary negate: "- value"
Moved Asset.MULTIPLIER, etc. to Amounts class.
Had to reintroduce BigInteger for asset trading code.
Various helper methods added to Amounts class.
Payment.process/orphan no longer needs unused transaction
signature or reference.
Added post block process/orphan tidying, which currently deletes zero account balances to satisfy post-orphan checks in unit tests.
Fix for possible bug when orphaning TRANSFER_PRIVS.
Added RewardSharePercentTypeAdapter like AmountTypeAdapter.
Replaced a whole load of JAXB-special getters with type-adapters.
Tests looking good!
Now possible thanks to removing Qora v1 support.
Maximum asset quantities now unified to 10_000_000_000,
to 8 decimal places, removing prior 10 billion billion
indivisible maximum.
All values can now fit into a 64bit long.
(Except maybe when processing asset trades).
Added a general-use JAXB AmountTypeAdapter for converting
amounts to/from String/long.
Asset trading engine split into more methods for easier
readability.
Switched to using FIXED founder block reward distribution code,
ready for launch.
In HSQLDBDatabaseUpdates,
QortalAmount changed from DECIMAL(27, 0) to BIGINT
RewardSharePercent added to replace DECIMAL(5,2) with INT
Ripped out unused Transaction.isInvolved and Transaction.getAmount
in all subclasses.
Changed
Transaction.getRecipientAccounts() : List<Account>
to
Transaction.getRecipientAddresses() : List<String>
as only addresses are ever used.
Corrected returned values for above getRecipientAddresses() for
some transaction subclasses.
Added some account caching to some transactions to reduce repeated
loads during validation and then processing.
Transaction transformers:
Changed serialization of asset amounts from using 12 bytes to
now standard 8 byte long.
Updated transaction 'layouts' to reflect new sizes.
RewardShareTransactionTransformer still uses 8byte long to represent
reward share percent.
Updated some unit tests - more work needed!
CHAT transactions don't ever get included into a block.
They use a memory-intensive proof-of-work instead of a fee.
Reference field isn't checked but must be present.
Recipient is optional.
isText/isEncrypted as per MESSAGE, basically indicative flags only.
Some API support.
Memory PoW takes roughly 800ms on Ryzen 3600, maybe 2400ms on QORTector?
As this changes how lastReferences are checked and updated,
this is not suitable for rolling into current chain without a
"feature trigger", or chain restart!
Added unit tests.
NullAccount has 'empty' public key (32 bytes of zeros) compared
with GenesisAccount's vaguer public key (sometimes 8 bytes, sometimes
32 bytes).
NullAccount has static public key and address, plus overridden
methods to speed up pointless calls like verify().
Genesis Block also tidied up, dropping old Qora v1 compatibility
and using proper block signature and public key to generate
minter's block signature.
Genesis Block transaction processing also simplified, with no need
to access repository to handle fake references, due to new
last-reference code (which will need to be merged).
Dropped support for old, broken RMD160 code.
Qortal is never going to continue off the old Qora blockchain,
so removed all code regarding compatibility.
Removals include:
* various blockchain "feature triggers"
* special Qora-only broken code for various transaction signatures
* "old" asset pricing / trading
* pre-group txGroupId field in transactions
* compatibility unit tests
Possibly safe for roll-out on pre-genesis blockchain?
Tidied up duplicated cross-chain API code that
fetched Qortal AT info.
Added Bitcoin-related cross-chain API calls
for building, checking, refunding and redeeming
P2SH.
Added new Bitcoin-related API error codes.
Controller now starts up, and shuts down, bitcoinj.
Speed-up in BTC class so bitcoinj doesn't have
to throw away all peers and rediscover & reconnect
to them with every chain-related call.
Added API calls to aid Qortal-side of cross-chain trading.
POST /crosschain/build - for building Qortal AT
POST /crosschain/tradeoffer/recipient - for sending trade partner/recipient to AT
POST /crosschain/tradeoffer/secret - for sending secret to AT
DELETE /crosschain/tradeoffer - for cancelling AT
More fixes regarding Blocks processing/orphaning ATs.
More fixes regarding sending/receiving blocks containing AT data.
AT-related fix to genesis block.
Improved cross-chain trading AT code, removing offer-mode timeout
and replacing that with allowing AT creator to cancel offer/end AT
by sending AT the creator's own address as trade partner/recipient.
After all, they're not going to trade with themselves.
Added assertion to check BTCACCT.CODE_BYTES_HASH matches compiled code hash.
Added cross-chain AT's 'mode' for easier diagnosis, either OFFER or TRADE.
We can't use AT's signature to generate AT address because address is needed
before DEPLOY_AT transaction is signed. So we use a hash of signature-less
transaction bytes.
Corresponding changes to tests.
Reworked the cross-chain trading AT so it is now 2-stage:
stage 1: 'offer' mode
waiting for message from creator containing trade partner's address
stage 2: 'trade' mode
waiting for message from trade partner containing secret
Adjusted unit tests to cover above.
Changed QortalATAPI.putCreatorAddressIntoB from storing
creator's public key to actually storing creator's address.
Refactored BTCACCT.AtConstants to CrossChainTradeData.
Now we also store hash of AT's code bytes in DB so we can look up
ATs by what they do. Affects ATData class, ATRepository, etc.
Added "Automated Transactions" and "Cross-Chain" API sections.
New API call GET /at/byfunction/{codehash} for looking up ATs
by what they do, based on hash of their code bytes.
New API call GET /at/{ataddress} for fetching info for specific AT.
New API call GET /at/{ataddress}/data for fetching an AT's data segment.
Mostly for diagnosis of AT's current state.
New API call POST /at for building a raw, unsigned DEPLOY_AT transaction.
New API call GET /crosschain/tradeoffers for finding open BTC-QORT trading ATs.
We require AT v1.3.4 now!
Updated AT-related logging.
Added "isInitial" flag to AT state data so that state data created at
deployment is not added to serialized block data.
Updated BTC-QORT AT code and tests to cover various scenarios.
Added missing 'testNtpOffset' to various test versions of 'settings.json'.
Added missing 'ciyamAtSettings' to various test blockchain configs.
Loads of AT-related additions/fixes/etc. to core code, e.g. Block
Requires fix in CIYAM AT v1.3.2
New version of Qortal cross-trade AT code.
Change how Qortal addresses are managed in QortalATAPI from using
base58 strings (that are too long) to using hex form (25 bytes)
as they need to fit into a 32-byte A/B register.
Generate AT addresses using DeployAtTransaction's signature instead
of convoluted hash of AT data like name, description, etc.
Add startTime as arg to GetTransaction test app.
Add missing fields (name, description, ATType, tags) to DeployAT test app.
Bump CIYAM AT requirement to v1.3
Remove multi-blockchain AT aspect for now (BlockchainAPI).
For PUT_PREVIOUS_BLOCK_HASH_INTO_A we no longer use SHA256 to condense 64-byte block signature into 32 bytes.
Now we put block height into A1 and SHA192 of signature into A2 through A4.
This allows possible future lookup of block data using "block hash", with verification that it is the same block.
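Roughly, the A-register packing looks like this (a sketch, not the actual QortalATAPI code; exactly which 24 bytes of the SHA-256 digest form the "SHA192" value is an assumption here):

    import java.nio.ByteBuffer;
    import java.security.MessageDigest;

    // A1 = block height; A2..A4 = 24-byte truncation of SHA-256(block signature).
    public static long[] packBlockReferenceIntoA(int blockHeight, byte[] blockSignature) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(blockSignature);
        ByteBuffer buf = ByteBuffer.wrap(digest, 0, 24); // three 64-bit words
        return new long[] { blockHeight, buf.getLong(), buf.getLong(), buf.getLong() };
    }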
Some AT functions use "address in B" but sometimes we populate B with account's public key instead.
So the method "getAccountFromB" is smart and checks for an actual, textual address in B starting with 'Q', otherwise assumes B contains public key.
The Settings field "useBitcoinTestNet" (boolean) has been replaced with "bitcoinNet" (String), with possible values MAIN (default), TEST3 and REGTEST.
This allows for more varied development/testing scenarios.
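As a sketch, the new setting might map onto bitcoinj network parameters like this (the bitcoinj classes are real; the surrounding method is illustrative):

    import org.bitcoinj.core.NetworkParameters;
    import org.bitcoinj.params.MainNetParams;
    import org.bitcoinj.params.RegTestParams;
    import org.bitcoinj.params.TestNet3Params;

    // Map the "bitcoinNet" settings string to bitcoinj network parameters.
    public static NetworkParameters paramsFor(String bitcoinNet) {
        switch (bitcoinNet) {
            case "TEST3":   return TestNet3Params.get();
            case "REGTEST": return RegTestParams.get();
            case "MAIN":
            default:        return MainNetParams.get();
        }
    }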
Use correct Bitcoin nSequence value 0xFFFFFFFE for P2SH, i.e. enable locktime, disable RBF.
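In bitcoinj terms that corresponds to something like (a sketch; the rest of the transaction-building code is omitted):

    import org.bitcoinj.core.Transaction;
    import org.bitcoinj.core.TransactionInput;

    // nSequence 0xFFFFFFFE keeps nLockTime active while opting out of RBF.
    static void applyLockTime(Transaction tx, long lockTimeSeconds) {
        tx.setLockTime(lockTimeSeconds); // only honoured if at least one input's sequence < 0xFFFFFFFF
        for (TransactionInput input : tx.getInputs())
            input.setSequenceNumber(0xFFFFFFFEL);
    }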
Roll REGTEST checkpoints file generator into main BTC class.
Yet another rewrite of Bitcoin P2SH scripts for BTC-QORT cross-chain trading.
Added associated test classes BuildP2SH, CheckP2SH, DeployAT (unfinished).
Streamlined BTC class and switched to memory block store.
Split BTCACCTTests into BTCACCT utility class and (so far)
three stand-alone apps: Initiate1, Refund2 and Respond2
Moved some Qortal-specific CIYAM AT constants into blockchain config.
Removed redundant BTCTests
Bump bitcoinj to 0.15.5 for fixes.
lockTime is int (seconds since epoch), not long (ms since epoch).
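i.e. (a sketch; the 24-hour value is just an example):

    // lockTime is a Unix timestamp in *seconds*, held in an int:
    int lockTime = (int) (System.currentTimeMillis() / 1000L) + 24 * 60 * 60;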
Improve output of Initiate1.
Added (most of) Respond2.
If the timestamp-pubkey-sig is still 'current' then it'll be in
Controller's list of current online accounts, so we can quickly scan
that list before falling back to the more expensive Ed25519 verify.
Added equals() and hashCode() to OnlineAccountData to support above.
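The fast path is conceptually (a sketch; the accessor and verify calls are illustrative, not the real Controller code):

    import java.util.List;

    // An equal entry in the current list was already verified when it was added,
    // so a cheap equals()/hashCode() lookup avoids the expensive Ed25519 verify.
    static boolean isOnlineAccountValid(OnlineAccountData data, List<OnlineAccountData> currentOnlineAccounts) {
        if (currentOnlineAccounts.contains(data))
            return true;
        return Crypto.verify(data.getPublicKey(), data.getSignature(), data.getMessageBytes()); // placeholder calls
    }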
When synchronizing is forced via API call, the SysTray doesn't update
to reflect this.
We fix this by moving the SysTray updating code from
Controller.potentiallySynchronize() to the inner method
Controller.actuallySynchronize(), which is also the method called
directly by the API.
Previously BlockMinter & Synchronizer would both try opportunistic
locking, with no wait/timeout or fairness.
This could lead to a situation where a majority of nodes are
synchronizing, albeit only the top 1 or 2 blocks, but no node
manages to mint within the 'recent' period, so the chain stalls.
However, if a node is at/near the top of the chain then synchronization
shouldn't take very long, so we let BlockMinter wait up to 30s
(approx. half a typical block time) to obtain the lock.
This makes minting blocks more likely in a BlockMinter/Sync fight
which helps keep the chain going.
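The locking pattern is roughly (a sketch; the shared lock object lives elsewhere in the codebase):

    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    // Fair lock so BlockMinter and Synchronizer take turns instead of racing.
    static final ReentrantLock blockchainLock = new ReentrantLock(true);

    static void tryToMint() throws InterruptedException {
        // Wait up to ~half a typical block time rather than giving up immediately.
        if (!blockchainLock.tryLock(30, TimeUnit.SECONDS))
            return;
        try {
            // ... attempt to mint a block ...
        } finally {
            blockchainLock.unlock();
        }
    }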
Detecting chain stalls, and allowing minting if we have plenty of peers,
also produces blockchain 'islands', so it isn't a simple fix at this point.
Bumped TCP timeouts for fetching auto-update from 5s (connect) and
3s (read) to 30s (connect) and 10s (read) to allow for nodes with
slower internet connections.
Increased interval between checking for auto-updates from 5 minutes
to 20 minutes to reduce load on update sources and also to reduce
the number of nodes that restart at any one time.
Obviously this new checking interval will only apply after the NEXT
auto-update...
Used when checking that the node has shut down and when replacing the old JAR with the new update.
ApplyUpdate previously waited 5 seconds between checks/retries, for up to 5 times: 25 seconds.
Now waits 10 seconds, for up to 12 times: 120 seconds.
Hopefully this will give slower nodes enough time to shut down and prevent errors like these on Windows installs:
2020-03-24 12:05:50 INFO ApplyUpdate:114 - Unable to replace JAR: qortal.jar: The process cannot access the file because it is being used by another process.
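The retry loop is conceptually (a sketch; the real ApplyUpdate code differs in detail):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    static final int MAX_ATTEMPTS = 12;     // was 5
    static final long RETRY_MS = 10_000L;   // was 5_000L

    // Up to 12 attempts, 10s apart (~120s total), so a slow node can release qortal.jar first.
    static boolean replaceJar(Path newJar, Path jarPath) throws InterruptedException {
        for (int attempt = 0; attempt < MAX_ATTEMPTS; ++attempt) {
            try {
                Files.move(newJar, jarPath, StandardCopyOption.REPLACE_EXISTING);
                return true;
            } catch (IOException e) {
                Thread.sleep(RETRY_MS); // old process may still hold the file, common on Windows
            }
        }
        return false;
    }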
Added a setting "showBackupNotification", which is false by default,
that shows a tray notification when a repository backup occurs.
The above notification, and the auto-update notification, now use
the SysTray i18n translation lookup resources.
Typical users don't need quite so many connections, so minOutboundPeers and
maxPeers reduced accordingly.
maxNetworkThreadPoolSize increased from 10 to 20.
A 5s timeout is way too long, and even 2s might still be considered excessive.
However, reducing the timeout might also reduce the number of
network engine "spawn failures" due to too many threads tied up
waiting for ping responses from overloaded peers.
Does not affect peer handshaking: that has a separate timeout.
Added Ed25519 private key to public key function accessible from SQL.
Added Ed25519 public key to Qortal address function accessible from SQL.
Used above functions to store minting account public key in SQL to
reduce the number of unnecessarily repeated Ed25519 conversions.
Used above functions to store reward-share minting accounts' addresses
to reduce the number of unnecessarily repeated public-key-to-address conversions.
Reduced the usage of PublicKeyAccount to simply Account where possible,
to reduce the number of Ed25519 conversions.
Account.canMint(), Account.canRewardShare() and Account.getEffectiveMintingLevel()
now only perform 1 repository fetch instead of potentially 2 or more.
Cleaned up NTP main thread to reduce CPU load.
A fixed offset can be applied to NTP.getTime() responses, for both
scenarios when NTP is running or not. Useful for testing or simulating
distant remote peers.
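A sketch of the idea (field names are illustrative; Qortal's real NTP class is more involved):

    // Fixed offset applied in both cases: real NTP offset available, or plain system clock.
    static Long ntpOffset;       // measured offset, null if NTP isn't running
    static long testNtpOffset;   // fixed offset for testing / simulating distant peers

    static long getTime() {
        long base = System.currentTimeMillis() + (ntpOffset != null ? ntpOffset : 0L);
        return base + testNtpOffset;
    }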
Controller.onNetworkMessage() and Network.onMessage() have both been
simplified by extracting per-case code into separate methods.
Network's EPC engine's thread pool size no longer hard-coded, but comes
from Settings.maxNetworkThreadPoolSize, which is still 10 by default,
but can be increased for high-availability nodes.
Network's EPC task-producing code streamlined to reduce CPU load.
Generally reduced calls to System.currentTimeMillis(), especially
where the value would only be used in verbose logging situations,
and especially in high-call-volume methods, like within repository.
Keep track of when EPC engine can't spawn a new thread as this
might indicate thread-pool exhaustion and cause some network
messages to be lost.
If logging level is NOT 'trace' (or 'all') then don't call
System.currentTimeMillis() as we'll never use the value.
Similarly, don't set thread names if not logging at 'trace' either.
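i.e. a pattern along these lines (a sketch; the class and task-description names are placeholders):

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    static final Logger LOGGER = LogManager.getLogger(NetworkEpcExample.class); // illustrative class name

    static void onTaskProduced(String taskDescription) {
        // Only pay for currentTimeMillis() and thread renaming when trace logging is enabled.
        if (LOGGER.isTraceEnabled()) {
            long now = System.currentTimeMillis();
            Thread.currentThread().setName("Network EPC: " + taskDescription);
            LOGGER.trace("Producing task '{}' at {}", taskDescription, now);
        }
    }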
Update EPC tests, particularly unified per-second/end-of-test stats
reporting.
Added API call GET /peers/enginestats to allow external monitoring.
Extract all engine stats in one synchronized block, instead of
separate calls, for better consistency.
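A sketch of the 'one synchronized block' approach (the stats class and field names are illustrative):

    // Copy every counter under a single lock so the snapshot returned by
    // GET /peers/enginestats is internally consistent.
    public synchronized EngineStats getStatsSnapshot() {
        EngineStats stats = new EngineStats();
        stats.activeThreadCount = this.activeThreadCount;
        stats.greatestActiveThreadCount = this.greatestActiveThreadCount;
        stats.tasksProduced = this.tasksProduced;
        stats.tasksConsumed = this.tasksConsumed;
        stats.spawnFailures = this.spawnFailures; // see thread-pool exhaustion note above
        return stats;
    }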
Synchronizer now bails out early when trying to find common block with
a peer. There's no need to keep searching if the common block is so far
behind that a TOO_DIVERGENT result would be returned.
fetchSummariesFromCommonBlock() reworked to return a useful
SynchronizationResult directly, instead of the caller trying to infer
what happened from a null/empty returned list!
Added API call GET /admin/status which reports whether minting
is possible (have minting keys & up-to-date) and whether node is
currently attempting to sync.
Corresponding change to system tray mouseover text.
Corresponding text added to SysTray translation resources.
Previously BlockMinter would attempt to mint if there were at least
'minBlockchainPeers' connected peers and none of them had an
up-to-date block and we did. This was maybe useful for minting block 2,
but it possibly causes minting chain islands where a badly connected
node mints by itself, even though it is only connected to peers that are not up-to-date.
Now BlockMinter requires 'minBlockchainPeers' up-to-date peers, not
simply just connected. This should let synchronization bring the
node up-to-date but does require the node to have better peers.
Currently, the default for minBlockchainPeers is 10. So a node
requires 10 up-to-date peers before it will consider minting. It
might be possible to reduce this in the future to lessen network load.
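In outline (a sketch; the peer height accessor is a placeholder, not the real BlockMinter code):

    import java.util.List;

    // Require 'minBlockchainPeers' peers that are at least at our height,
    // not merely that many connected peers.
    static boolean canConsiderMinting(List<Peer> peers, int ourHeight, int minBlockchainPeers) {
        long upToDatePeers = peers.stream()
                .filter(peer -> peer.getChainTipHeight() >= ourHeight) // placeholder accessor
                .count();
        return upToDatePeers >= minBlockchainPeers;
    }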
Previous images had a small hole in the icon (probably a result of background removal); I just filled it back in with white, as it's supposed to be. Also, the previous square icons were stretched into a square aspect ratio; these are unstretched.
Although HSQLDB is happy being given unix-style path separator '/'
and converting as necessary on other platforms (e.g. Windows),
manipulation of repository pathnames in Java, outside of HSQLDB,
needs to use platform-specific path separators.
Thus, changes made to replace '/' with File.separator where
necessary.
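For example (a sketch; the path value shown is illustrative):

    import java.io.File;

    // Convert a unix-style repository path to the platform's separator outside HSQLDB.
    static String toPlatformPath(String unixStylePath) {
        return unixStylePath.replace('/', File.separatorChar); // e.g. "db/blockchain" -> "db\blockchain" on Windows
    }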
This should fix repository rebuild errors, which then lead to odd
start-up errors like:
2020-03-11 13:55:19 INFO Controller:270 - Starting repository
2020-03-11 13:55:20 INFO Controller:287 - Validating blockchain
2020-03-11 13:55:20 INFO HSQLDBRepository:227 - Rebuilding repository from scratch
2020-03-11 13:55:20 INFO GenesisBlock:296 - Using genesis block timestamp of 1583870000000
2020-03-11 13:55:21 WARN HSQLDBRepository:720 - Uncommitted changes (882) after connection close, session [3]
java.lang.NullPointerException
at org.qortal.transform.block.BlockTransformer.decodeOnlineAccounts(BlockTransformer.java:422)
at org.qortal.block.Block.getExpandedAccounts(Block.java:546)
at org.qortal.block.Block.increaseAccountLevels(Block.java:1245)
at org.qortal.block.Block.increaseAccountLevels(Block.java:1239)
at org.qortal.block.Block.process(Block.java:1206)
at org.qortal.block.GenesisBlock.process(GenesisBlock.java:345)
at org.qortal.block.BlockChain.rebuildBlockchain(BlockChain.java:526)
at org.qortal.block.BlockChain.validate(BlockChain.java:481)
at org.qortal.controller.Controller.main(Controller.java:289)
The above happens because the old blockchain still exists when trying to process
the genesis block.
AdvancedInstaller's Java launcher EXE seems to use JNI to launch
the JAR, instead of using the command-line 'java' binary directly.
When AI's launcher does this, it adds options like "abort" and "exit",
along with corresponding hook addresses.
These options are returned by the call to
ManagementFactory.getRuntimeMXBean().getInputArguments() which is
done in AutoUpdate while building the command line for launching
ApplyUpdate.
Because command-line 'java' binary doesn't support these options,
they are now stripped out.
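The filtering is along these lines (a sketch; exactly how the injected "abort"/"exit" options are spelled on the command line is an assumption):

    import java.lang.management.ManagementFactory;
    import java.util.List;
    import java.util.stream.Collectors;

    // Drop JNI-only launcher options that the command-line 'java' binary would reject.
    static List<String> usableJvmArgs() {
        return ManagementFactory.getRuntimeMXBean().getInputArguments().stream()
                .filter(arg -> !arg.startsWith("abort") && !arg.startsWith("exit"))
                .collect(Collectors.toList());
    }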
No more "node UI". UI provided by 3rd party.
"Open UI" tray icon menu item now attempts to open UI at various
local servers (see Settings.uilocalServers) or some random
remote server (Settings.uiRemoteServers).
Default UI port now 12388 (Settings.uiPort).
<STRING lang="en" value="This is the folder where the blockchain, and other data, will be stored."/>
<STRING lang="ru" value="Это папка, в которой будет храниться блокчейн и другие данные."/>
<STRING lang="zh" value="这里是区块链及其它数据存放的文件夹"/>
<STRING lang="zh_TW" value="这里是区块链及其它数据存放的文件夹"/>
</ENTRY>
<ENTRY id="Control.Text.DataFolderDlg#Text">
<STRING lang="en" value="To store data in this folder, click "[Text_Next]". To store data in a different folder, enter it below or click "Browse"."/>
<STRING lang="ru" value="Чтобы сохранить данные в этой папке, нажмите "[Text_Next]". Чтобы сохранить данные в другой папке, введите ее ниже или нажмите "Обзор"."/>