This reverts commit 6d1f7b36a7af3d0c12085f5718b2d42162ffd521, reversing
changes made to 6b74ef77e610bb693c92655fc2dc0e0837e050ea.
# Conflicts:
# src/main/java/org/qortal/block/Block536140.java
Whilst not ideal, this is necessary to prevent the chain from getting stuck on future blocks due to duplicate name registrations. See Block535658.java for full details on this problem - this is simply a "catch-all" implementation of that class in order to future-proof this fix.
There is still a database inconsistency to be solved, as some nodes are failing to add a registered name to their Names table the first time around, but fixing this will take some time. Once fixed, this commit could potentially be reverted.
Also added unit tests for both scenarios (same and different creator).
TLDR: this allows all past and future blocks that were invalid due to NAME_ALREADY_REGISTERED (where the duplicate registration is by the same creator) to now be treated as valid.
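For illustration only, a minimal sketch of the catch-all idea; the class and type names here are hypothetical and not the actual Block535658.java code:

    import java.util.Map;

    public class NameValiditySketch {
        enum ValidationResult { OK, NAME_ALREADY_REGISTERED }

        static ValidationResult checkRegistration(String name, String creator, Map<String, String> registeredNames) {
            String existingOwner = registeredNames.get(name);
            if (existingOwner == null)
                return ValidationResult.OK; // name is free to register
            // Catch-all: a duplicate registration by the same creator no longer
            // invalidates the block containing it.
            if (existingOwner.equals(creator))
                return ValidationResult.OK;
            return ValidationResult.NAME_ALREADY_REGISTERED;
        }
    }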
This was accidentally left out of the original code. Some pre-updated nodes on the network will be missing this index, but we can use the upcoming "auto-bootstrap" feature to get those back.
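A minimal sketch of adding an index like this during a database update; the index, table and column names below are assumptions, not the actual schema:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class AddMissingIndexSketch {
        public static void apply(Connection connection) throws SQLException {
            try (Statement stmt = connection.createStatement()) {
                // Pre-updated nodes will lack this index until they re-bootstrap.
                stmt.execute("CREATE INDEX ArbitraryNameIndex ON ArbitraryTransactions (name)");
            }
        }
    }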
A PUT creates a new base layer, meaning anything before that point is no longer needed. These files are now deleted automatically by the cleanup manager. This involved relocating a lot of the cleanup manager methods into a shared utility, so that they could be used by the arbitrary data manager. Without this, the deleted files would be fetched from the network again as soon as they were removed.
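A minimal sketch of the deletion step, assuming layer files are tracked in transaction order; the names are hypothetical:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class BaseLayerCleanupSketch {
        public static void deleteSupersededFiles(List<Path> layerFiles, int baseLayerIndex) throws IOException {
            // Everything before the new base layer is unreachable and safe to delete.
            for (int i = 0; i < baseLayerIndex; i++)
                Files.deleteIfExists(layerFiles.get(i));
        }
    }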
This deletes redundant copies of data, and also converts complete files to chunks where needed. The idea is that nodes should only hold chunks, since they are currently much more likely to serve a chunk to another peer than a complete file.
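A sketch of the conversion step, assuming a fixed chunk size; the size and naming scheme are assumptions, not the actual implementation:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ChunkConversionSketch {
        private static final int CHUNK_SIZE = 512 * 1024; // assumed chunk size

        public static void convertToChunks(Path completeFile, Path outputDir) throws IOException {
            byte[] buffer = new byte[CHUNK_SIZE];
            int chunkIndex = 0;
            try (InputStream in = Files.newInputStream(completeFile)) {
                int read;
                while ((read = in.read(buffer)) > 0) {
                    Path chunk = outputDir.resolve(completeFile.getFileName() + "." + chunkIndex++);
                    try (OutputStream out = Files.newOutputStream(chunk)) {
                        out.write(buffer, 0, read);
                    }
                }
            }
            // The complete file is now redundant; delete it so only chunks remain.
            Files.delete(completeFile);
        }
    }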
It doesn't yet clean up files that are unassociated with transactions, nor does it delete anything from the _temp folder.
This improves scalability but isn't sufficient for a long-term solution. TODO: it probably makes sense to add an additional query for recent transactions only, so that they are fetched quickly.
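A sketch of what that follow-up query might look like; the table and column names are assumptions, not the actual schema:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class RecentTransactionsSketch {
        public static List<byte[]> fetchRecentSignatures(Connection conn, int minHeight, int limit) throws SQLException {
            // Fetch only recent transactions first, so new data propagates quickly.
            String sql = "SELECT signature FROM ArbitraryTransactions WHERE block_height > ? ORDER BY block_height DESC LIMIT ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setInt(1, minHeight);
                ps.setInt(2, limit);
                List<byte[]> signatures = new ArrayList<>();
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next())
                        signatures.add(rs.getBytes(1));
                }
                return signatures;
            }
        }
    }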
This is needed because we want to allow brand new accounts to publish data without a fee. It takes a similar approach to CrossChainResource.buildAtMessage(). We already require PoW on all arbitrary transactions, so no additional logic beyond this should be needed.
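A minimal sketch of the resulting validity rule; the method and parameter names are hypothetical:

    public class FeeValiditySketch {
        // Fee-paying transactions follow the normal fee rules; zero-fee
        // transactions are accepted only when the PoW nonce is valid.
        static boolean isValid(long fee, long minimumFee, boolean nonceIsValid) {
            if (fee >= minimumFee)
                return true;
            return fee == 0 && nonceIsValid;
        }
    }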
This adds the loadAsynchronously() method to ArbitraryDataReader, in addition to the existing loadSynchronously() method.
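A minimal sketch of how loadAsynchronously() can delegate to the existing synchronous path on a background thread; the executor choice is an assumption:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class AsyncReaderSketch {
        private final ExecutorService executor = Executors.newSingleThreadExecutor();

        public void loadSynchronously() {
            // ... existing blocking build/load logic ...
        }

        public Future<?> loadAsynchronously() {
            // Run the synchronous path on a background thread and return immediately.
            return executor.submit(this::loadSynchronously);
        }
    }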
When requesting a website in a browser, previously the building of the resource's layers would be done synchronously in the API handler. This understandably caused many issues, so the building is now done asynchronously by a dedicated thread. A loading screen is shown in its place, which auto-refreshes every second until the build has completed.
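A sketch of the handler's response logic, using a meta refresh for the auto-reload; the names and markup are illustrative only:

    public class LoadingScreenSketch {
        static String respond(boolean buildComplete, String builtContent) {
            if (buildComplete)
                return builtContent; // build finished: serve the real resource
            // Otherwise return a placeholder page that reloads itself every second.
            return "<html><head><meta http-equiv=\"refresh\" content=\"1\"></head>"
                 + "<body>Loading...</body></html>";
        }
    }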
It's possible that this concept will struggle in the real world if operating systems, virus scanners, etc. start interfering with our file structure. Right now it is using a zero tolerance approach when checking the validity of each layer. We may choose to loosen this slightly if we encounter problems, e.g. by excluding hidden files. But for now it is best to be as strict as possible.
This decides whether to build a new state or use an existing cached state when serving a data resource. It will cache a built resource until a new transaction (i.e. layer) arrives. This drastically reduces load, and still allows for almost instant propagation of new layers.
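A minimal sketch of that decision, assuming timestamps are compared; the names are hypothetical:

    public class CacheDecisionSketch {
        static boolean shouldRebuild(long latestTransactionTimestamp, Long cachedBuildTimestamp) {
            if (cachedBuildTimestamp == null)
                return true; // nothing cached yet, so a build is required
            // A transaction newer than the cached build means a new layer arrived,
            // so the cached state is stale and must be rebuilt.
            return latestTransactionTimestamp > cachedBuildTimestamp;
        }
    }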