This was accidentally omitted from the original code. Some nodes that updated before this fix will be missing this index, but the upcoming "auto-bootstrap" feature can be used to restore it.
A PUT creates a new base layer, meaning anything before that point is no longer needed. These files are now deleted automatically by the cleanup manager. This involved relocating many of the cleanup manager's methods into a shared utility so that they can also be used by the arbitrary data manager; without that, the files would be fetched from the network again as soon as they were deleted.
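A minimal sketch of the idea, using hypothetical helper names (`LayerInfo`, `isPut()`, `getDataPath()`) rather than the actual cleanup manager API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;

public class LayerCleanupSketch {

    /**
     * Deletes locally stored data for any transactions that precede the most
     * recent PUT, since a PUT establishes a new base layer and everything
     * before it can never be needed again.
     * The transaction list is assumed to be ordered oldest-to-newest.
     */
    public static void deleteObsoleteLayers(List<LayerInfo> orderedTransactions) throws IOException {
        int latestPutIndex = -1;
        for (int i = 0; i < orderedTransactions.size(); i++) {
            if (orderedTransactions.get(i).isPut()) {
                latestPutIndex = i;
            }
        }

        // Everything before the latest PUT is unreachable from the current state
        for (int i = 0; i < latestPutIndex; i++) {
            Path dataPath = orderedTransactions.get(i).getDataPath();
            if (dataPath != null && Files.exists(dataPath)) {
                deleteRecursively(dataPath);
            }
        }
    }

    private static void deleteRecursively(Path path) throws IOException {
        try (var walk = Files.walk(path)) {
            walk.sorted(Comparator.reverseOrder()).forEach(p -> p.toFile().delete());
        }
    }

    /** Illustrative container for the transaction data the sketch needs. */
    public interface LayerInfo {
        boolean isPut();
        Path getDataPath();
    }
}
```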
This deletes redundant copies of data, and also converts complete files to chunks where needed. The idea is that nodes should only hold chunks, since they are currently much more likely to serve a chunk to another peer than a complete file.
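As an illustration of the complete-file-to-chunks conversion, here is a rough sketch that splits a file into fixed-size chunk files; the chunk size and naming scheme are assumptions, not the project's actual values:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class ChunkingSketch {

    // Assumed chunk size, for illustration only
    private static final int CHUNK_SIZE = 512 * 1024;

    /** Splits completeFile into sequentially numbered chunk files inside outputDir. */
    public static int convertToChunks(Path completeFile, Path outputDir) throws IOException {
        Files.createDirectories(outputDir);

        byte[] buffer = new byte[CHUNK_SIZE];
        int chunkIndex = 0;

        try (InputStream in = Files.newInputStream(completeFile)) {
            int bytesRead;
            while ((bytesRead = in.readNBytes(buffer, 0, CHUNK_SIZE)) > 0) {
                Path chunkPath = outputDir.resolve("chunk-" + chunkIndex);
                Files.write(chunkPath, Arrays.copyOf(buffer, bytesRead));
                chunkIndex++;
            }
        }

        // Once all chunks exist, the complete file is redundant and can be removed
        Files.deleteIfExists(completeFile);
        return chunkIndex;
    }
}
```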
It doesn't yet clean up files that aren't associated with transactions, nor does it delete anything from the _temp folder.
This improves scalability but isn't sufficient as a long-term solution. TODO: it probably makes sense to add an additional query for recent transactions only, so that newly published data is fetched quickly.
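One possible shape for that additional query, sketched with placeholder table and column names rather than the real schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class RecentTransactionsQuerySketch {

    /**
     * Fetches signatures of arbitrary transactions created within the last
     * lookbackMs milliseconds, so newly published data is picked up quickly
     * while the full scan runs on a slower schedule.
     * "ArbitraryTransactions" and "created_when" are placeholders.
     */
    public static List<byte[]> fetchRecentSignatures(Connection connection, long lookbackMs) throws SQLException {
        long cutoff = System.currentTimeMillis() - lookbackMs;
        List<byte[]> signatures = new ArrayList<>();

        String sql = "SELECT signature FROM ArbitraryTransactions WHERE created_when > ? ORDER BY created_when DESC";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setLong(1, cutoff);
            try (ResultSet resultSet = statement.executeQuery()) {
                while (resultSet.next()) {
                    signatures.add(resultSet.getBytes("signature"));
                }
            }
        }
        return signatures;
    }
}
```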
This is needed because we want to allow brand new accounts to publish data without a fee. It takes a similar approach to CrossChainResource.buildAtMessage(). We already require PoW on all arbitrary transactions, so no additional logic beyond this should be needed.
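A sketch of the validation rule this describes, with illustrative stand-in names (ArbitraryTx, getFee(), isNonceValid()) rather than the real classes:

```java
public class ZeroFeeValidationSketch {

    /**
     * Allows a zero fee on an arbitrary transaction as long as a valid
     * proof-of-work nonce is present; otherwise the normal minimum fee applies.
     */
    public static boolean isFeeAcceptable(ArbitraryTx transaction, long minimumFee) {
        if (transaction.getFee() >= minimumFee) {
            return true; // Fee paid as normal
        }

        // Fee-free path: brand new accounts can publish as long as PoW was performed
        return transaction.getFee() == 0 && transaction.isNonceValid();
    }

    /** Illustrative interface only. */
    public interface ArbitraryTx {
        long getFee();
        boolean isNonceValid();
    }
}
```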
This adds the loadAsynchronously() method to ArbitraryDataReader, in addition to the existing loadSynchronously() method.
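A rough idea of how an asynchronous variant can wrap the existing synchronous load, assuming a simple single-threaded executor; the real implementation may differ:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicBoolean;

public class AsyncLoaderSketch {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final AtomicBoolean loading = new AtomicBoolean(false);

    /** Placeholder for the existing blocking load (build layers, verify, extract). */
    public void loadSynchronously() throws InterruptedException {
        Thread.sleep(1000); // Simulate a slow build
    }

    /** Kicks off the build on a background thread and returns immediately. */
    public void loadAsynchronously() {
        if (!loading.compareAndSet(false, true)) {
            return; // A build is already in progress
        }
        executor.submit(() -> {
            try {
                loadSynchronously();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                loading.set(false);
            }
        });
    }

    public boolean isLoading() {
        return loading.get();
    }
}
```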
When requesting a website in a browser, the building of the resource's layers was previously done synchronously in the API handler. This understandably caused many issues, so the building is now done asynchronously by a dedicated thread. A loading screen is shown in its place, which auto-refreshes every second until the build has completed.
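The handler pattern this describes is roughly the following sketch; the build-status interface and page content are placeholders, and the loading page simply refreshes itself every second until the build finishes:

```java
public class LoadingScreenSketch {

    /** Minimal stand-in for whatever tracks the background build's progress. */
    public interface BuildStatus {
        boolean isComplete();
    }

    /**
     * Returns either the built content or a self-refreshing loading page,
     * depending on whether the background build has finished.
     */
    public static String handleRequest(BuildStatus status, String builtContent) {
        if (!status.isComplete()) {
            // Meta-refresh makes the browser retry every second until the build completes
            return "<html><head><meta http-equiv=\"refresh\" content=\"1\"></head>"
                 + "<body>Building data... this page will refresh automatically.</body></html>";
        }
        return builtContent;
    }
}
```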
It's possible that this concept will struggle in the real world if operating systems, virus scanners, etc. start interfering with our file structure. Right now it uses a zero-tolerance approach when checking the validity of each layer. We may choose to loosen this slightly if we encounter problems, e.g. by excluding hidden files, but for now it is best to be as strict as possible.
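As an illustration of what "zero tolerance" means here: any file in a layer's directory that isn't part of the expected set (for example a .DS_Store or Thumbs.db dropped in by the OS) fails validation. A minimal sketch, assuming the expected file list comes from the layer's metadata:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;
import java.util.stream.Collectors;

public class StrictLayerValidationSketch {

    /**
     * Returns true only if the directory contains exactly the expected files,
     * no more and no fewer. Any extra file (hidden or otherwise) fails the check.
     */
    public static boolean isLayerValid(Path layerDirectory, Set<String> expectedRelativePaths) throws IOException {
        Set<String> actualRelativePaths;
        try (var walk = Files.walk(layerDirectory)) {
            actualRelativePaths = walk
                    .filter(Files::isRegularFile)
                    .map(path -> layerDirectory.relativize(path).toString())
                    .collect(Collectors.toSet());
        }
        return actualRelativePaths.equals(expectedRelativePaths);
    }
}
```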
This decides whether to build a new state or use an existing cached state when serving a data resource. A built resource is cached until a new transaction (i.e. a new layer) arrives. This drastically reduces load while still allowing almost instant propagation of new layers.
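A sketch of the decision, assuming the cache records the signature of the transaction it was built from (see the cache metadata note below):

```java
import java.util.Arrays;

public class CacheDecisionSketch {

    /**
     * Decides whether the cached build can be reused. A rebuild is only
     * required when the latest transaction signature differs from the one
     * recorded when the cache was written.
     */
    public static boolean shouldRebuild(byte[] latestTransactionSignature, byte[] cachedSignature) {
        if (cachedSignature == null) {
            return true; // Nothing cached yet
        }
        // A new layer has arrived since the cache was built
        return !Arrays.equals(latestTransactionSignature, cachedSignature);
    }
}
```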
This is used to store the transaction signature and build timestamp for each built data resource. It involved refactoring the ArbitraryDataMetadata class to introduce a subclass for each file ("patch" and "cache"), which makes it easy to add more metadata files later.
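For illustration, the kind of content such a cache metadata file might hold; the field names and JSON layout are assumptions, not the real on-disk format:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class CacheMetadataSketch {

    private byte[] signature;
    private long timestamp;

    /** Writes the cached build's transaction signature and build timestamp as simple JSON. */
    public void write(Path metadataFile) throws IOException {
        String json = String.format("{\"signature\":\"%s\",\"timestamp\":%d}",
                Base64.getEncoder().encodeToString(this.signature), this.timestamp);
        Files.writeString(metadataFile, json);
    }

    public void setSignature(byte[] signature) { this.signature = signature; }
    public void setTimestamp(long timestamp) { this.timestamp = timestamp; }
}
```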
This defends against a missing or out-of-order transaction. If this validation ever fails, we may need to rethink the way we are requesting transactions. In theory it shouldn't happen, given that the "last reference" field of a transaction already ensures that out-of-order transactions are invalid.
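A sketch of the defensive check, assuming each fetched transaction exposes its signature and a reference to the previous transaction; the interface below is illustrative only:

```java
import java.util.Arrays;
import java.util.List;

public class TransactionOrderCheckSketch {

    /** Minimal stand-in for the fields the check needs. */
    public interface TxInfo {
        byte[] getSignature();
        byte[] getReference(); // Points at the previous transaction ("last reference")
    }

    /**
     * Verifies that each transaction's reference matches the signature of
     * the transaction before it, i.e. none are missing or out of order.
     */
    public static boolean isChainIntact(List<TxInfo> orderedTransactions) {
        for (int i = 1; i < orderedTransactions.size(); i++) {
            byte[] previousSignature = orderedTransactions.get(i - 1).getSignature();
            byte[] reference = orderedTransactions.get(i).getReference();
            if (!Arrays.equals(previousSignature, reference)) {
                return false;
            }
        }
        return true;
    }
}
```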