Problem:
The "Names" table (the latest state of each name) drifts out of sync with the name-related transaction history on a subset of nodes for some unknown and seemingly difficult to find reason.
Solution:
Treat the "Names" table as a cache that can be rebuilt at any time. It now works like this:
- On node startup, rebuild the entire Names table by replaying the transaction history of all registered names. This includes registrations, updates, buys, and sells (see the sketch after this list).
- Add a "pre-process" stage to block/transaction processing. If a block contains a name-related transaction, rebuild the Names cache for any names referenced by those transactions before validating anything.
The existing "integrity check" has been modified to just check basic attributes based on the latest transaction for a name. It will log if there are any inconsistencies found, but won't correct anything. This adds confidence that the rebuild has worked correctly.
There are also multiple unit tests to ensure that the rebuilds are coping with various different scenarios.
This ensures that all name-related transactions have resulted in correct entries in the Names table. A bug in the code has resulted in some nodes having missing data in their Names table. If this process finds a missing name, it will log it and add the name.
Missing names are added, but ownership issues are only logged. The known bug wasn't related to ownership, so the logging is only to alert us to any issues that may arise in the future.
In hindsight, the code could be rewritten to store all three transaction types in a single list, but this current approach has had a lot of testing, so it is best to stick with it for now.
This is necessary because it's possible (in theory) for a block to be considered invalid due to an internal failure such as an SQLException. Expiring the invalid-block entries gives such blocks more chances to be considered valid again. One hour is more than enough time for the node to find an alternate valid chain if there is one available.
This prevents a valid block candidate from being discarded in favour of an invalid one. We can't actually validate a block before orphaning (it would fail for various reasons, such as already-existing transactions or an existing block at the same height), so we instead just check the signature against the list of known invalid blocks.
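A rough sketch of the tracking and expiry described above; the class, field, and method names here are assumptions rather than the actual implementation:

```java
import java.nio.ByteBuffer;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical tracker for recently-invalid block signatures
public class InvalidBlockTracker {
    private static final long INVALID_BLOCK_EXPIRY_MS = 60 * 60 * 1000L; // 1 hour

    private final Map<ByteBuffer, Long> invalidSignatures = new ConcurrentHashMap<>();

    public void markInvalid(byte[] signature) {
        invalidSignatures.put(ByteBuffer.wrap(signature), System.currentTimeMillis());
    }

    public boolean isKnownInvalid(byte[] signature) {
        Long seenAt = invalidSignatures.get(ByteBuffer.wrap(signature));
        if (seenAt == null)
            return false;

        // Expire after an hour, so a block rejected due to a transient
        // internal failure (e.g. an SQLException) can be reconsidered
        if (System.currentTimeMillis() - seenAt > INVALID_BLOCK_EXPIRY_MS) {
            invalidSignatures.remove(ByteBuffer.wrap(signature));
            return false;
        }

        return true;
    }
}
```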
This should only be used if all of the following conditions are true:
a) Your node is private and not shared with others
b) Port 12391 (API port) isn't forwarded
c) You have granted access to specific IP addresses using the "apiWhitelist" setting
The node will warn on startup if this setting is used without a sensible access control whitelist.
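A sketch of what the startup warning could look like; the method and parameter names are assumptions based on the conditions above, not the actual Settings API:

```java
import java.util.List;
import java.util.logging.Logger;

// Hypothetical startup check for the setting described above
public class ApiSecurityCheck {
    private static final Logger LOGGER = Logger.getLogger(ApiSecurityCheck.class.getName());

    public static void warnIfUnprotected(boolean settingEnabled, List<String> apiWhitelist) {
        boolean hasWhitelist = apiWhitelist != null && !apiWhitelist.isEmpty();

        if (settingEnabled && !hasWhitelist)
            LOGGER.warning("Setting enabled without a sensible apiWhitelist; only use this "
                    + "on a private node where API port 12391 is not forwarded");
    }
}
```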
Until now, a high-weight invalid block could cause other valid, lower-weight alternatives to be discarded. The solution is to track invalid blocks and quickly avoid them once discovered. This gives other valid alternative blocks the opportunity to become part of a valid chain, where they would otherwise have been discarded.
As with the block minter update, this will cause a fork when the highest-weight block candidate is invalid. But the fork would likely be short-lived, assuming that the majority of nodes pick the valid chain.
If it has been more than 10 minutes since receiving the last valid block, but we have had at least one invalid block since then, this indicates a chain stuck due to a lack of valid block candidates. In this case, we want to allow the block minter to mint an alternative candidate so that the chain can continue.
This would create a fork at the point of the invalid block, in which two chains (valid and invalid) would diverge. The valid chain could never rejoin the invalid one; however, it's likely that the invalid chain would be discarded in favour of the valid one shortly after, on the assumption that the majority of nodes would have picked the valid one.
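A sketch of the stuck-chain check described above, with illustrative names; the two timestamps are assumed to be maintained elsewhere as blocks are accepted or rejected:

```java
// Hypothetical check used by the block minter before minting an alternative candidate
public class StuckChainCheck {
    private static final long STUCK_CHAIN_THRESHOLD_MS = 10 * 60 * 1000L; // 10 minutes

    public static boolean shouldMintAlternativeCandidate(long lastValidBlockTimestamp,
            long lastInvalidBlockTimestamp) {
        long now = System.currentTimeMillis();

        // No valid block received for over 10 minutes...
        boolean noRecentValidBlock = now - lastValidBlockTimestamp > STUCK_CHAIN_THRESHOLD_MS;
        // ...but at least one invalid block received since the last valid one
        boolean invalidSinceLastValid = lastInvalidBlockTimestamp > lastValidBlockTimestamp;

        return noRecentValidBlock && invalidSinceLastValid;
    }
}
```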
- Use the "{\"age\":30}" data to make the tests more similar to real-world data.
- Added tests to ensure that registering and orphaning works as expected.
Whilst not ideal, this is necessary to prevent the chain from getting stuck on future blocks due to duplicate name registrations. See Block535658.java for full details on this problem; this is simply a "catch-all" implementation of that class in order to future-proof this fix.
There is still a database inconsistency to be solved, as some nodes are failing to add a registered name to their Names table the first time around, but this will take some time. Once fixed, this commit could potentially be reverted.
Also added unit tests for both scenarios (same and different creator).
TLDR: this allows all past and future invalid blocks caused by NAME_ALREADY_REGISTERED (by the same creator) to now be valid.
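A hypothetical shape for the relaxed check; the repository accessors and the ValidationResult value are assumptions based on the description above:

```java
// Sketch only: duplicate registration is invalid only for a different creator
public ValidationResult isValidRegistration(Repository repository,
        RegisterNameTransactionData transactionData) throws DataException {
    NameData existingName = repository.getNameRepository().fromName(transactionData.getName());

    // Re-registration by the same creator is now accepted; only a different
    // creator attempting to register an existing name is rejected
    if (existingName != null && !existingName.getOwner().equals(transactionData.getOwner()))
        return ValidationResult.NAME_ALREADY_REGISTERED;

    return ValidationResult.OK;
}
```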
This was accidentally missed out of the original code. Some pre-updated nodes on the network will be missing this index, but we can use the upcoming "auto-bootstrap" feature to get those back.
Now only skipping the HTLC redemption if the AT is finished and the balance has been redeemed by the buyer. This allows HTLCs to be refunded for ATs that have been refunded or cancelled.
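A minimal sketch of the new skip condition; the accessor and parameter names are assumptions:

```java
// Sketch only: skip refunding the HTLC only when the trade completed normally
private boolean shouldSkipHtlcRefund(ATData atData, String buyerAddress, String finalRecipient) {
    // The AT must be finished AND its balance must have gone to the buyer.
    // ATs that were refunded or cancelled fall through and remain refundable.
    return atData.getIsFinished() && buyerAddress.equals(finalRecipient);
}
```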
Previously, if an error was returned from an Electrum server (such as "server busy") it would throw a NetworkException that would be caught outside of the server loop and cause the entire request to fail.
Instead of throwing an exception, I am now logging the error and returning null, in the same way we do for IOException and NoSuchElementException further up in the same method.
This allows the caller - most likely connectedRpc() - to move on to the next server in the list and try again.
This should fix an issue seen where a "server busy" response from a single server was essentially breaking our implementation, as we would give up altogether instead of trying another server.
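A hedged sketch of the shape of the change, with sendRequest() standing in for the real request call and the JSON handling simplified:

```java
Map<String, Object> response;
try {
    response = sendRequest(server, method, params); // hypothetical helper
} catch (IOException | NoSuchElementException e) {
    // Existing behaviour for transport-level failures: give up on this server
    return null;
}

if (response.containsKey("error")) {
    // New behaviour: log server-reported errors (e.g. "server busy") and
    // return null instead of throwing, so the caller (connectedRpc()) can
    // move on to the next server in the list
    LOGGER.debug("Unexpected error response from ElectrumX server: " + response.get("error"));
    return null;
}
```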
This is a workaround for an UnsupportedOperationException thrown when using X2Go, due to PERPIXEL_TRANSLUCENT translucency being unsupported in splashDialog.setBackground(). We could choose to use a different version of the splash screen with an opaque background in these cases, but it is low priority.
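A sketch of the workaround, assuming splashDialog is a JDialog and a fully transparent background is wanted where supported:

```java
try {
    // Per-pixel translucency, where the graphics device supports it
    splashDialog.setBackground(new java.awt.Color(0, 0, 0, 0));
} catch (UnsupportedOperationException e) {
    // PERPIXEL_TRANSLUCENT unsupported (e.g. under X2Go); keep the default
    // opaque background rather than letting the splash screen crash
    LOGGER.debug("Transparent splash screen unavailable: " + e.getMessage());
}
```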
Updated the "localeLang" files with new keys and removed old unused keys for English, German, Dutch, Italian, Finnish, Hungarian, Russian and Chinese translations
These are the same as the /lists/blacklist/address/{address} endpoints but allow a JSON array of addresses to be specified in the request body. They currently return true if
The ResourceList class creates or updates a list for the purpose of tracking resources on the Qortal network. This can be used for local blocking, or even for curating and sharing content lists. Lists are backed off to JSON files (in the lists folder) to ease sharing between nodes and users.
This first implementation allows access to an address blacklist only, but has been written in such a way that other lists can be easily added. This might be needed in the future, e.g. to blacklist a group, a poll, or some hosted data. It could also be used by community members to curate lists of favourite or problematic content, which could then be shared or even subscribed to on the chain by other users.
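A hypothetical usage sketch; the constructor and method names are illustrative rather than the actual ResourceList API:

```java
// Create (or load) the address blacklist, backed by a JSON file in "lists"
ResourceList blacklist = new ResourceList("blacklist", "address");

blacklist.add("QexampleAddressToBlock");
blacklist.save();   // persists the list to its JSON file for sharing

boolean isBlocked = blacklist.contains("QexampleAddressToBlock");
```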
The inputs and outputs contain a simpler version than the ones in the raw transaction, consisting of `address`, `amount`, and `addressInWallet`. The last of these indicates whether the address is derived from the supplied xpub master public key.
The previous criterion was to stop searching for more leaf keys as soon as we found a batch of keys with no transactions, but it seems there are occasions when subsequent batches do contain transactions. The solution/workaround is to require 5 consecutive empty batches before giving up (see the sketch below). There may be ways to improve this further by copying approaches from other BIP32 implementations, but this is a quick fix that should solve the problem for now.
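A minimal illustration of that simplified shape; the class itself is hypothetical, with field names taken from the description above:

```java
public class SimpleInputOutput {
    private String address;          // the address this input/output involves
    private long amount;             // amount, e.g. in satoshis
    private boolean addressInWallet; // true if derived from the supplied xpub

    // getters and setters omitted for brevity
}
```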
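A sketch of the revised stopping rule; deriveNextKeyBatch() and fetchTransactions() are hypothetical helpers standing in for the real key-derivation and lookup code:

```java
// Sketch only: require 5 consecutive empty batches before giving up
private static final int REQUIRED_EMPTY_BATCHES = 5;

private void scanForWalletTransactions() {
    int consecutiveEmptyBatches = 0;
    int keyIndex = 0;

    while (consecutiveEmptyBatches < REQUIRED_EMPTY_BATCHES) {
        java.util.List<byte[]> batch = deriveNextKeyBatch(keyIndex);
        keyIndex += batch.size();

        if (fetchTransactions(batch).isEmpty())
            consecutiveEmptyBatches++;   // empty batch: count towards the limit
        else
            consecutiveEmptyBatches = 0; // activity found: reset the counter
    }
}
```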
This involved a small refactor of the ACCT code to expose findSecretA() in a more generic way. Bitcoin is disabled for refunding and redeeming as it uses a legacy approach that we no longer support. The {blockchain} URL parameter has also been removed from the redeem and refund APIs, because it can be obtained from the ACCT via the code hash in the AT.
The "dust" threshold is around 1 DOGE - meaning orders below this size cannot be refunded or redeemed. The simplest solution is to prevent orders of this size being placed to begin with.
This is probably our number one reliability issue at the moment, and has been a problem for a very long time.
The existing CHECKPOINT_LOCK would prevent new connections being created when we are checkpointing or about to checkpoint. However, in many cases we obtain the db connection early on and then don't perform any queries until later. An example would be in synchronization, where the connection is obtained at the start of the process and then retained throughout the sync round. My suspicion is that we were encountering this series of events:
1. Open connection to database
2. Call maybeCheckpoint() and confirm there are no active transactions
3. An existing connection starts a new transaction
4. Checkpointing is performed, but deadlocks due to the in-progress transaction
This potential fix wraps preparedStatement.execute() in the CHECKPOINT_LOCK, to block any new transactions from being started while we are locked for checkpointing. It is fairly high risk, so we need to build some confidence in this before releasing it.
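A hypothetical illustration of the fix; CHECKPOINT_LOCK comes from the description above, but the surrounding structure is an assumption:

```java
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CheckpointSafeExecutor {
    // Shared lock, also held while checkpointing
    public static final Object CHECKPOINT_LOCK = new Object();

    // Execute statements only while holding the checkpoint lock, so no new
    // transaction can begin once checkpointing has started
    public boolean execute(PreparedStatement preparedStatement) throws SQLException {
        synchronized (CHECKPOINT_LOCK) {
            return preparedStatement.execute();
        }
    }
}
```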