Increased GetOnlineAccountsMessage.MAX_ACCOUNT_COUNT from 1000 to 5000.
The V2 versions are more efficiently encoded and also cache the payload bytes,
which reduces CPU usage when sending to multiple peers.
Serialization / deserialization unit tests included.
Tentative V2 message activation is set at core version 3.1.2;
see Controller.ONLINE_ACCOUNTS_V2_PEER_VERSION
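A minimal sketch of the payload-caching idea (the class and method names here are illustrative, not the actual message classes):

    // Build the serialized payload once, then reuse it for every peer,
    // instead of re-encoding the same accounts per send.
    public class CachedPayloadMessage {
        private byte[] cachedPayload;

        private byte[] buildPayload() {
            // ... V2-encode the online accounts into bytes ...
            return new byte[0];
        }

        public synchronized byte[] toBytes() {
            if (cachedPayload == null)
                cachedPayload = buildPayload();
            return cachedPayload;
        }
    }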
This avoids duplicate entries from the same host/IP with differing ports, which can occur because some requests use ephemeral port numbers. Ideally we would filter these out altogether, but this at least acts as a safety net to prevent a very cluttered db and the associated "broadcast storm". The main tradeoff here is that multiple nodes on the same IP address will be recorded as a single entry. This doesn't seem like a major limitation, because one of them will remain available.
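A rough sketch of the dedup idea, keying the known-peers table by host/IP only (names are illustrative, not the actual peer-tracking code):

    import java.util.HashMap;
    import java.util.Map;

    // Key the known-peers table by host/IP only, so repeat requests from
    // ephemeral ports replace the existing entry instead of adding duplicates.
    // "PeerData" stands in for whatever record type holds the peer details.
    class KnownPeers<PeerData> {
        private final Map<String, PeerData> byHost = new HashMap<>();

        void record(String host, PeerData peer) {
            byHost.put(host, peer); // port deliberately excluded from the key
        }
    }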
Since some files won't have any mirrors, this prevents the cleanup manager from deleting the only copy in existence when freeing up space. This feature can be disabled by setting "originalCopyIndicatorFileEnabled": false in settings.json (or by deleting the ".original" files). The tradeoff of disabling it is that the only copy in existence could be deleted if space gets low.
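A minimal sketch of the safety-net check, assuming the indicator is a file named after the data file with an ".original" suffix (the real layout may differ):

    import java.nio.file.Files;
    import java.nio.file.Path;

    // Skip deletion when an ".original" indicator marks this as our own copy.
    static boolean isDeletable(Path file, boolean indicatorEnabled) {
        if (!indicatorEnabled)
            return true; // safety net disabled via settings.json
        Path indicator = file.resolveSibling(file.getFileName() + ".original");
        return !Files.exists(indicator);
    }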
This will also allow better reporting of own vs third-party files in the local UI (not yet implemented).
The simplest solution was to only include a newline at the end of the patch file if the source file ended with a newline. This informs the merge code whether to add a trailing newline to the resulting file. Without this, the checksums do not match (and therefore the complete file would previously have been included instead).
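A minimal sketch of the newline check (an assumed helper, not the actual diff code):

    import java.nio.file.Files;
    import java.nio.file.Path;

    // True if the source file's final byte is a newline; the patch writer can
    // then mirror this so the merge knows whether to append a trailing newline.
    static boolean endsWithNewline(Path sourceFile) throws java.io.IOException {
        byte[] bytes = Files.readAllBytes(sourceFile);
        return bytes.length > 0 && bytes[bytes.length - 1] == '\n';
    }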
This should fix an issue where it would take up to 30 seconds to return for a recent block, and would consume masses of CPU due to having to base58 encode the online accounts signatures. Base58 is very slow and made this API endpoint almost unusable for recent blocks, due to them having untrimmed online accounts signatures.
This makes them extremely generic, improves filenames, and makes it easier to create custom lists. It isn't backwards compatible, but the lists feature isn't working properly in core 2.1+ anyway.
Also modified the directory structure of single-file resources to make them consistent with multi-file resources.
For multi-file resources, the original folder is renamed to "data", resulting in a layout such as:
data/file1.txt
data/file2.txt
data/dir1/file3.txt
For single-file resources, the file is now moved into a "data" folder, like so:
data/file.txt
This is slightly unconventional, but is appropriate within the context of QDN to keep everything consistent.
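A sketch of the single-file restructuring, using assumed helper names:

    import java.nio.file.Files;
    import java.nio.file.Path;

    // Move a single-file resource into a "data" subfolder so its layout
    // matches multi-file resources.
    static Path moveIntoDataFolder(Path file) throws java.io.IOException {
        Path dataDir = file.getParent().resolve("data");
        Files.createDirectories(dataDir);
        return Files.move(file, dataDir.resolve(file.getFileName()));
    }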
A website must contain one of the following files in its root directory to be considered valid:
index.html
index.htm
default.html
default.htm
home.html
home.htm
This is the first page loaded when visiting a Qortal-hosted website.
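A sketch of the validation rule (names are illustrative, not the actual validator):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    static final List<String> INDEX_FILES = List.of(
            "index.html", "index.htm", "default.html",
            "default.htm", "home.html", "home.htm");

    // A website resource is only valid if its root directory contains
    // one of the recognised index files.
    static boolean isValidWebsite(Path rootDir) {
        return INDEX_FILES.stream()
                .anyMatch(name -> Files.exists(rootDir.resolve(name)));
    }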
This would happen if a name fills its limit, and additional names are then followed. Alternatively, it could happen if the total storage capacity reduces due to disk space being used by other apps. Chunks are deleted at random, to reduce the chance of the same chunk being deleted everywhere. Data loss is possible here for transactions that don't have many peers. We'll have to see in practice how much of a problem this is, but it's better than the scenario where one content creator consumes all the space on their followers' nodes, leaving no room for other names that are subsequently followed.
This is calculated as the total capacity divided by the number of names the node follows. The idea here is that a single content creator can't upload terabytes of data and consume all the space on their followers' nodes. They can only use a proportion, with equal space given to each followed name. And since the limit is dynamic, following more names reduces the allocation to existing names.
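The allocation arithmetic, as a rough sketch (helper name is illustrative):

    // Each followed name receives an equal share of total capacity, so
    // following more names shrinks every existing allocation.
    static long perNameLimit(long totalCapacityBytes, int followedNameCount) {
        if (followedNameCount == 0)
            return totalCapacityBytes;
        return totalCapacityBytes / followedNameCount;
    }
    // e.g. 100GB capacity with 4 followed names gives 25GB per name;
    // following a 5th name reduces each allocation to 20GB.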
Chunk hashes are now stored off chain in a metadata file. The metadata file's hash is then included in the transaction.
The main benefits of this approach are:
1. We no longer need to limit the total file size, because adding more chunks doesn't increase the transaction size.
2. This increases the chain capacity by a huge amount - a 512MB file would previously have increased the transaction size by 16kB, whereas it now requires only an additional 32 bytes (see the rough arithmetic below).
3. We no longer need to use variable difficulty; every transaction is the same size and so the difficulty can be constant no matter how large the files are.
4. Additional metadata (such as title, description, and tags) can ultimately be stored in the metadata file, as opposed to using a separate transaction & resource.
5. There is also scope for adding hashes of individual files into the metadata file, if we ever wanted to allow single files to be requested without having to download and build the entire resource, although this is unlikely to be available in the short term.
The only real negative is that we now have to fetch the metadata file before we know anything about the chunks for a transaction. This seems to be quite a small tradeoff by comparison.
Since we're not live yet, there is no backwards support for on-chain hashes, so a new data testchain will be required. This hasn't been tested outside of unit tests yet, so there will likely be several fixes needed before it is stable.
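The arithmetic behind benefit 2 above, assuming 32-byte hashes and (purely for illustration) 1MB chunks:

    long fileSize = 512L * 1024 * 1024;      // 512MB file
    long chunkSize = 1024 * 1024;            // assumed 1MB chunk size
    long chunkCount = fileSize / chunkSize;  // 512 chunks
    long onChain = chunkCount * 32;          // previously: 16kB of hashes on chain
    long offChain = 32;                      // now: one 32-byte metadata hash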
Files are now keyed by signature, in the format:
data/si/gn/signature/hash
For times when there is no signature available (i.e. at the time of initial upload), files are keyed by hash, in the format:
data/_misc/ha/sh/hash
Files in the _misc folder are subsequently relocated to a path that is keyed by the resulting signature.
The end result is that chunks are now grouped on the filesystem by signature. This allows more transparency as to what is being hosted, and will also help simplify the reporting and management of local files.
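A sketch of the path scheme (an assumed helper; signatures and hashes shown as base58 strings):

    import java.nio.file.Path;

    // Group files by signature: data/si/gn/signature/hash, falling back to
    // data/_misc/ha/sh/hash when no signature exists yet.
    static Path pathFor(Path dataDir, String signature58, String hash58) {
        if (signature58 == null)
            return dataDir.resolve("_misc")
                    .resolve(hash58.substring(0, 2))
                    .resolve(hash58.substring(2, 4))
                    .resolve(hash58);
        return dataDir.resolve(signature58.substring(0, 2))
                .resolve(signature58.substring(2, 4))
                .resolve(signature58)
                .resolve(hash58);
    }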
publicDataEnabled - whether to store decryptable data (default true)
privateDataEnabled - whether to store data without a decryption key (default false)
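For example, in settings.json (values shown are the defaults described above):

    {
      "publicDataEnabled": true,
      "privateDataEnabled": false
    }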