This is probably the most efficient way to process the data on the fly, but it's still not very scalable. A better approach would be to pre-process the HTML when building the file structure, and then serve the files completely statically (i.e. via a standard webserver rather than from application memory). But it makes sense to keep it this way for development and perhaps early beta testing.
This can be used to preview a site before signing a transaction and announcing it to the network. The response will need reworking to return JSON, as will most of the other new APIs.
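As a rough illustration only, here is a hedged JAX-RS sketch of what a JSON-returning preview endpoint could look like; the class name, paths and fields are invented for this example and are not the actual Qortal API:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/site")
public class SitePreviewResource {

    // Simple JSON-serialisable wrapper for the preview response (illustrative only)
    public static class PreviewResponse {
        public String hash;        // digest identifying the uploaded site data
        public String previewUrl;  // where the processed HTML can be viewed
    }

    @GET
    @Path("/preview/{hash}")
    @Produces(MediaType.APPLICATION_JSON)
    public PreviewResponse preview(@PathParam("hash") String hash) {
        PreviewResponse response = new PreviewResponse();
        response.hash = hash;
        response.previewUrl = "/render/" + hash;
        return response;
    }
}
```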
This fixes an NPE when trying to send a file that doesn't exist. It also removes the caching, which we can add again later if it turns out to be needed.
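For illustration, a minimal sketch of the kind of guard this implies, using invented class and method names rather than the actual codebase:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FileSender {

    // Check that the file exists before reading it, instead of letting a
    // missing file surface as a NullPointerException further down the stack.
    public byte[] readFileIfPresent(String filePath) throws IOException {
        Path path = Paths.get(filePath);

        if (!Files.isRegularFile(path))
            return null; // caller can map this to a "not found" response

        return Files.readAllBytes(path);
    }
}
```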
This deletes a file referenced by a user-supplied SHA256 digest string (which we will use as the file's "ID" in the Qortal data system). In the future this could be extended to delete all associated chunks, but first we need to build out the data chain so we have a way to look up chunks associated with a file hash.
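A minimal sketch of the idea, assuming a hex-encoded digest and a flat data directory (both are illustrative assumptions, not the actual Qortal layout):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DataFileManager {

    // Hypothetical flat data directory; the real on-disk layout may differ.
    private static final Path DATA_DIRECTORY = Paths.get("data");

    /**
     * Delete the file whose on-disk name is the user-supplied SHA256 digest string.
     * Deleting associated chunks is deliberately left out until the data chain
     * provides a way to look up chunks for a given file hash.
     */
    public boolean deleteByDigest(String sha256Digest) throws IOException {
        // Reject anything that isn't a 64-character hex digest (illustrative validation)
        if (sha256Digest == null || !sha256Digest.matches("[0-9a-fA-F]{64}"))
            return false;

        Path filePath = DATA_DIRECTORY.resolve(sha256Digest);
        return Files.deleteIfExists(filePath);
    }
}
```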
We must be careful not to add files to the resources folder accidentally, given that a bundled log4j2.properties file is used in preference to the user's copy. By keeping the resources folder out of .gitignore, it becomes more obvious when a file is accidentally added, and it can then be caught and removed before a release.
These were necessary for the scripts to function in my build environment (macOS). This may cause errors in other environments, but we can deal with that in the future, when others need to use these scripts.
Including an older JAR in the source code only leads to confusion, because a zip of the source code is automatically included with each GitHub release. From what I can see, there is no need for it to be here. It has been added to .gitignore so we still have the option of keeping a local copy.
When sending or requesting more than 1000 online accounts, peers would be disconnected with an EOF or connection reset error due to an intentional null response. That response has been removed; the node now sends only the first 1000 accounts, which prevents the disconnections from occurring.
In theory, these accounts should be in a different order on each node, so the 1000 limit should still result in a fairly even propagation of accounts. However, we may want to consider increasing this limit, to maximise the propagation speed.
Thanks to szisti for tracking this one down.
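For reference, a minimal sketch of the capping behaviour described above; the class and method names are invented and this is not the actual Qortal message-handling code:

```java
import java.util.List;

public class OnlineAccountsLimiter {

    // Cap matching the limit described above
    private static final int MAX_ONLINE_ACCOUNTS = 1000;

    // Instead of replying with null (which caused EOF / connection reset
    // disconnections), truncate the outgoing list to the first 1000 entries.
    public static <T> List<T> capForSending(List<T> accounts) {
        if (accounts == null || accounts.size() <= MAX_ONLINE_ACCOUNTS)
            return accounts;

        return accounts.subList(0, MAX_ONLINE_ACCOUNTS);
    }
}
```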