This prevents a valid block candidate from being discarded in favour of an invalid one. We can't fully validate a block before orphaning (validation would fail for various reasons, such as transactions that already exist or an existing block at the same height), so instead we just check its signature against the list of known invalid blocks.
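A minimal sketch of that signature-only check, assuming a simple in-memory registry of known-invalid block signatures (class and method names here are illustrative, not Qortal's actual code):

```java
import java.nio.ByteBuffer;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical registry of block signatures already known to be invalid. */
public class InvalidBlockRegistry {

    // ByteBuffer.wrap gives content-based equals/hashCode, so byte[] signatures can live in a Set
    private static final Set<ByteBuffer> KNOWN_INVALID = ConcurrentHashMap.newKeySet();

    public static void markInvalid(byte[] blockSignature) {
        KNOWN_INVALID.add(ByteBuffer.wrap(blockSignature));
    }

    /** Cheap check used before orphaning in favour of a candidate block. */
    public static boolean isKnownInvalid(byte[] blockSignature) {
        return KNOWN_INVALID.contains(ByteBuffer.wrap(blockSignature));
    }
}
```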
This should only be used if all of the following conditions are true:
a) Your node is private and not shared with others
b) Port 12391 (API port) isn't forwarded
c) You have granted access to specific IP addresses using the "apiWhitelist" setting
The node will warn on startup if this setting is used without a sensible access control whitelist.
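The setting being gated isn't named above, so the sketch below treats it as a generic boolean flag; the whitelist handling and logging are assumptions rather than Qortal's actual startup code:

```java
import java.util.List;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

/** Illustrative startup check; setting and accessor names are placeholders. */
public class ApiSafetyCheck {

    private static final Logger LOGGER = LogManager.getLogger(ApiSafetyCheck.class);

    /** Warn when the setting is enabled without a restrictive apiWhitelist. */
    public static void warnIfUnprotected(boolean settingEnabled, List<String> apiWhitelist) {
        boolean whitelistIsOpen = apiWhitelist == null || apiWhitelist.isEmpty();

        if (settingEnabled && whitelistIsOpen)
            LOGGER.warn("This setting is enabled without a sensible apiWhitelist; "
                    + "only use it on a private node with port 12391 (API port) not forwarded.");
    }
}
```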
Until now, a high-weight invalid block could cause other valid, lower-weight alternatives to be discarded. The solution to this problem is to track invalid blocks and quickly avoid them once discovered. This gives other valid alternative blocks the opportunity to become part of a valid chain, where they would otherwise have been discarded.
As with the block minter update, this will cause a fork when the highest-weight block candidate is invalid. But the fork would likely be short-lived, assuming that the majority of nodes pick the valid chain.
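A rough sketch of how candidate selection could skip known-invalid blocks; the candidate type and weight comparison are simplified stand-ins, not the real block-weight logic:

```java
import java.math.BigInteger;
import java.nio.ByteBuffer;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.Set;

/** Hypothetical sketch: choose the best candidate while avoiding blocks already known to be invalid. */
public class CandidateSelection {

    public static class Candidate {
        public final byte[] signature;
        public final BigInteger weight;

        public Candidate(byte[] signature, BigInteger weight) {
            this.signature = signature;
            this.weight = weight;
        }
    }

    /** Highest-weight candidate whose signature is not in the known-invalid set. */
    public static Optional<Candidate> pickBest(List<Candidate> candidates, Set<ByteBuffer> knownInvalidSignatures) {
        return candidates.stream()
                .filter(candidate -> !knownInvalidSignatures.contains(ByteBuffer.wrap(candidate.signature)))
                .max(Comparator.comparing(candidate -> candidate.weight));
    }
}
```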
If it has been more than 10 minutes since the last valid block was received, but at least one invalid block has arrived since then, this indicates a chain that is stuck because it has no valid block candidates. In this case, we want to allow the block minter to mint an alternative candidate so that the chain can continue.
This would create a fork at the point of the invalid block, where the two chains (valid and invalid) diverge. The valid chain could never rejoin the invalid one; however, the invalid chain would likely be discarded in favour of the valid one shortly afterwards, on the assumption that the majority of nodes would have picked the valid one.
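A small sketch of the stuck-chain condition described above, assuming the minter records the timestamps of the last valid and last invalid blocks it has seen (field and class names are illustrative):

```java
/** Hypothetical helper deciding whether the block minter may mint an alternative candidate. */
public class StuckChainDetector {

    private static final long TEN_MINUTES_MS = 10 * 60 * 1000L;

    private volatile long lastValidBlockTimestamp = System.currentTimeMillis();
    private volatile long lastInvalidBlockTimestamp = 0L;

    public void onValidBlock(long now) {
        this.lastValidBlockTimestamp = now;
    }

    public void onInvalidBlock(long now) {
        this.lastInvalidBlockTimestamp = now;
    }

    /** True when no valid block for over 10 minutes, but at least one invalid block has arrived since. */
    public boolean shouldMintAlternative(long now) {
        boolean noRecentValidBlock = now - this.lastValidBlockTimestamp > TEN_MINUTES_MS;
        boolean invalidSinceLastValid = this.lastInvalidBlockTimestamp > this.lastValidBlockTimestamp;
        return noRecentValidBlock && invalidSinceLastValid;
    }
}
```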
- Use the "{\"age\":30}" data to make the tests more representative of real-world data.
- Added tests to ensure that registering and orphaning work as expected (see the sketch below).
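The general shape of a register-then-orphan test is sketched here; the helper names and the tiny in-memory stand-in are placeholders, not Qortal's actual test utilities:

```java
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import java.util.ArrayDeque;
import java.util.Deque;

import org.junit.Test;

/** Illustrative JUnit 4 test shape only; real tests run against the repository and block logic. */
public class RegisterOrphanTests {

    private static final String TEST_DATA = "{\"age\":30}";

    @Test
    public void testRegisterThenOrphan() {
        TestChain chain = new TestChain();

        // Registering should make the name visible
        chain.register("test-name", TEST_DATA);
        assertTrue(chain.nameExists("test-name"));

        // Orphaning the block should undo the registration
        chain.orphanLastBlock();
        assertFalse(chain.nameExists("test-name"));
    }

    /** Tiny stand-in used only to keep this sketch self-contained. */
    static class TestChain {
        private final Deque<String> registered = new ArrayDeque<>();

        void register(String name, String data) { registered.push(name); }
        boolean nameExists(String name) { return registered.contains(name); }
        void orphanLastBlock() { if (!registered.isEmpty()) registered.pop(); }
    }
}
```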
This reverts commit 6d1f7b36a7af3d0c12085f5718b2d42162ffd521, reversing
changes made to 6b74ef77e610bb693c92655fc2dc0e0837e050ea.
# Conflicts:
# src/main/java/org/qortal/block/Block536140.java
Whilst not ideal, this is necessary to prevent the chain from getting stuck on future blocks due to duplicate name registrations. See Block535658.java for full details on this problem - this is simply a "catch-all" implementation of that class, in order to future-proof this fix.
There is still a database inconsistency to be solved, as some nodes are failing to add a registered name to their Names table the first time around, but fixing that will take some time. Once fixed, this commit could potentially be reverted.
Also added unit tests for both scenarios (same and different creator).
TLDR: this allows all past and future blocks that were invalid due to NAME_ALREADY_REGISTERED (by the same creator) to now be considered valid.
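The relaxed validity rule can be sketched roughly as below; the result enum and the "existing name" record are simplified stand-ins for the actual repository lookups and validation codes:

```java
/** Hypothetical sketch of the relaxed NAME_ALREADY_REGISTERED rule. */
public final class NameValidity {

    public enum ValidationResult { OK, NAME_ALREADY_REGISTERED }

    public static class ExistingName {
        public final String name;
        public final String ownerAddress;

        public ExistingName(String name, String ownerAddress) {
            this.name = name;
            this.ownerAddress = ownerAddress;
        }
    }

    /**
     * Duplicate registrations by the SAME creator are tolerated, so historic blocks containing
     * them no longer invalidate the chain; a different creator still fails validation.
     */
    public static ValidationResult checkRegistration(ExistingName existing, String creatorAddress) {
        if (existing == null)
            return ValidationResult.OK;

        if (existing.ownerAddress.equals(creatorAddress))
            return ValidationResult.OK;

        return ValidationResult.NAME_ALREADY_REGISTERED;
    }
}
```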
This was accidentally missed out of the original code. Some nodes on the network that updated before this fix will be missing this index, but we can use the upcoming "auto-bootstrap" feature to get those back.
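For illustration only, adding a missing index over JDBC looks roughly like this; the table, column, and index names are placeholders and not the actual schema change:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

/** Sketch of a database update step that recreates an omitted index (names are hypothetical). */
public class AddMissingIndex {

    public static void apply(Connection connection) throws SQLException {
        try (Statement stmt = connection.createStatement()) {
            // Recreate the index that was accidentally omitted from the original update
            stmt.execute("CREATE INDEX ExampleNameIndex ON Names (name)");
        }
    }
}
```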
A PUT creates a new base layer, meaning anything before that point is no longer needed. These files are now deleted automatically by the cleanup manager. This involved relocating many of the cleanup manager's methods into a shared utility, so that they could also be used by the arbitrary data manager. Without this, the files would be fetched from the network again as soon as they were deleted.
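A rough sketch of the pre-PUT cleanup, assuming a simplified on-disk layout with one directory per layer named by height (the real layout and utility methods differ):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

/** Hypothetical cleanup helper: delete every layer older than the most recent PUT base layer. */
public class ArbitraryDataCleanup {

    public static void deleteLayersBefore(Path layersRoot, long basePutHeight) throws IOException {
        List<Path> layers;
        try (Stream<Path> stream = Files.list(layersRoot)) {
            layers = stream.collect(Collectors.toList());
        }

        for (Path layer : layers) {
            long layerHeight = Long.parseLong(layer.getFileName().toString());
            if (layerHeight < basePutHeight)
                deleteRecursively(layer);
        }
    }

    /** Remove a directory tree, deepest entries first. */
    private static void deleteRecursively(Path root) throws IOException {
        List<Path> entries;
        try (Stream<Path> walk = Files.walk(root)) {
            entries = walk.sorted(Comparator.reverseOrder()).collect(Collectors.toList());
        }
        for (Path entry : entries)
            Files.delete(entry);
    }
}
```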
This deletes redundant copies of data, and also converts complete files to chunks where needed. The idea is that nodes should only hold chunks, since they are currently much more likely to serve a chunk to another peer than a complete file.
It doesn't yet clean up files that are unassociated with transactions, nor does it delete anything from the _temp folder.
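The file-to-chunk conversion can be sketched as below; the chunk size, naming, and directory layout are assumptions, not the network's actual chunking scheme:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

/** Illustrative only: split a complete file into numbered chunk files, then delete the original. */
public class FileToChunks {

    private static final int CHUNK_SIZE = 1024 * 1024; // 1 MiB, purely illustrative

    public static void convertToChunks(Path completeFile, Path chunkDir) throws IOException {
        Files.createDirectories(chunkDir);

        byte[] buffer = new byte[CHUNK_SIZE];
        int chunkIndex = 0;

        try (InputStream in = Files.newInputStream(completeFile)) {
            int bytesRead;
            // readNBytes fills the buffer as far as possible, so chunks stay full-sized until the last one
            while ((bytesRead = in.readNBytes(buffer, 0, CHUNK_SIZE)) > 0) {
                Path chunk = chunkDir.resolve("chunk-" + chunkIndex++);
                try (OutputStream out = Files.newOutputStream(chunk)) {
                    out.write(buffer, 0, bytesRead);
                }
            }
        }

        // The complete file is redundant once its chunks exist
        Files.delete(completeFile);
    }
}
```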
This improves scalability but isn't sufficient as a long-term solution. TODO: it probably makes sense to add an additional query for recent transactions only, so that they are fetched quickly.
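The suggested follow-up query might look roughly like this; the table and column names are placeholders rather than the actual schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

/** Sketch of a query that fetches only recent transactions, keeping the common case fast. */
public class RecentTransactionsQuery {

    public static List<byte[]> fetchRecentSignatures(Connection connection, long cutoffTimestamp)
            throws SQLException {
        String sql = "SELECT signature FROM ArbitraryTransactions "
                + "WHERE created_when > ? ORDER BY created_when DESC";

        List<byte[]> signatures = new ArrayList<>();
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setLong(1, cutoffTimestamp);
            try (ResultSet resultSet = stmt.executeQuery()) {
                while (resultSet.next())
                    signatures.add(resultSet.getBytes(1));
            }
        }
        return signatures;
    }
}
```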
This is needed because we want to allow brand-new accounts to publish data without a fee. It takes a similar approach to CrossChainResource.buildAtMessage(). We already require PoW on all arbitrary transactions, so no additional logic beyond this should be needed.
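The fee rule described above could be sketched as follows; the result enum and parameter names are illustrative, and the actual validation lives alongside the existing PoW nonce check:

```java
/** Hypothetical sketch of allowing a zero fee on arbitrary transactions. */
public class ArbitraryFeeCheck {

    public enum ValidationResult { OK, INSUFFICIENT_FEE }

    /**
     * A zero fee is acceptable because every arbitrary transaction already carries a
     * proof-of-work nonce, which limits abuse; non-zero fees still meet the usual minimum.
     */
    public static ValidationResult checkFee(long fee, long minimumFee) {
        if (fee == 0)
            return ValidationResult.OK;

        return fee >= minimumFee ? ValidationResult.OK : ValidationResult.INSUFFICIENT_FEE;
    }
}
```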