Since some files won't have any mirrors, this prevents the cleanup manager from deleting the only copy in existence when freeing up space. This feature can be disabled by setting "originalCopyIndicatorFileEnabled": false in settings.json (or by deleting the ".original" files). The trade-off is that the only copy in existence could then be deleted if space gets low.
This will also allow for better reporting of own vs third-party files in the local UI (not yet implemented).
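As a rough illustration (the class and method names below are hypothetical, not the actual cleanup manager code), the guard amounts to checking for a sibling ".original" file before deleting anything:

    import java.nio.file.Files;
    import java.nio.file.Path;

    public class OriginalCopyGuard {

        // Hypothetical check: before deleting a hosted file to free space,
        // skip it if a sibling ".original" indicator file marks it as the
        // node's own (possibly only) copy.
        public static boolean isSafeToDelete(Path hostedFile, boolean originalCopyIndicatorFileEnabled) {
            if (!originalCopyIndicatorFileEnabled) {
                // Feature disabled in settings.json - treat every file as deletable
                return true;
            }
            Path indicator = hostedFile.resolveSibling(hostedFile.getFileName() + ".original");
            return !Files.exists(indicator);
        }
    }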
The simplest solution was to only include a newline at the end of the patch file if the source file ended with a newline. This informs the merge code whether to add a newline to the end of the resulting file. Without this, the checksums do not match, which previously meant the complete file had to be included instead of a patch.
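A minimal sketch of the idea, using hypothetical helper names rather than the real patch/merge classes:

    import java.nio.file.Files;
    import java.nio.file.Path;

    public class NewlineAwarePatch {

        // Hypothetical helper: record whether the source file ends with a newline,
        // so the merge step knows whether to terminate the rebuilt file with one.
        public static boolean endsWithNewline(Path sourceFile) throws Exception {
            byte[] bytes = Files.readAllBytes(sourceFile);
            return bytes.length > 0 && bytes[bytes.length - 1] == '\n';
        }

        // Applied after merging: only append a trailing newline if the source had one,
        // otherwise the checksum of the rebuilt file would not match the source.
        public static String finaliseMergedText(String mergedText, boolean sourceEndedWithNewline) {
            if (sourceEndedWithNewline && !mergedText.endsWith("\n")) {
                return mergedText + "\n";
            }
            return mergedText;
        }
    }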
This should fix an issue where it would take up to 30 seconds to return for a recent block, and would consume large amounts of CPU due to having to base58-encode the online accounts signatures. Base58 is very slow and made this API endpoint almost unusable for recent blocks, since they have untrimmed online accounts signatures.
This makes them extremely generic, improves filenames, and makes it easier to create custom lists. It isn't backwards compatible, but the lists feature isn't working properly in core 2.1+ anyway.
Also modified the directory structure of single-file resources to make them consistent with multi-file resources.
For multi-file resources, the original folder is renamed to "data", resulting in a layout such as:
data/file1.txt
data/file2.txt
data/dir1/file3.txt
For single-file resources, the file is now moved into a "data" folder, like so:
data/file.txt
This is slightly unconventional, but is appropriate within the context of QDN to keep everything consistent.
A website must contain one of the following files in its root directory to be considered valid:
index.html
index.htm
default.html
default.htm
home.html
home.htm
This is the first page to be loaded when viewing a Qortal-hosted website.
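A sketch of what this validation might look like, using the filenames listed above (the class and method names are illustrative only):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class WebsiteValidator {

        private static final List<String> INDEX_FILES = List.of(
                "index.html", "index.htm", "default.html",
                "default.htm", "home.html", "home.htm");

        // Hypothetical validation: a website resource is only valid if its root
        // directory contains at least one recognised index file.
        public static boolean isValidWebsite(Path rootDirectory) {
            return INDEX_FILES.stream()
                    .anyMatch(name -> Files.isRegularFile(rootDirectory.resolve(name)));
        }
    }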
This would happen if a name fills its limit and additional names are then followed. Alternatively, it could happen if the total storage capacity decreases because disk space is being used by other apps. Chunks are deleted at random to reduce the chance of the same chunk being deleted everywhere. Data loss is possible here for transactions that aren't hosted by many peers. We'll have to see in practice how much of a problem this is, but it's better than the scenario where one content creator consumes all the space on their followers' nodes, leaving no space for other names that are subsequently followed.
This is calculated as the total capacity divided by the number of names the node follows. The idea here is that a single content creator can't upload terabytes of data and consume all the space on their followers' nodes. They can only use a proportion, with equal space given to each followed name. And since the limit is dynamic, following more names reduces the allocation to existing names.
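The allocation itself is simple arithmetic; a hypothetical sketch, with illustrative figures:

    public class StorageAllocation {

        // Hypothetical illustration of the dynamic per-name limit:
        // each followed name gets an equal share of the node's total capacity.
        public static long perNameCapacity(long totalCapacityBytes, int followedNameCount) {
            if (followedNameCount == 0) {
                return totalCapacityBytes;
            }
            return totalCapacityBytes / followedNameCount;
        }

        public static void main(String[] args) {
            // e.g. 100 GiB shared between 20 followed names = 5 GiB per name
            long totalCapacity = 100L * 1024 * 1024 * 1024;
            System.out.println(perNameCapacity(totalCapacity, 20));
        }
    }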
Chunk hashes are now stored off chain in a metadata file. The metadata file's hash is then included in the transaction.
The main benefits of this approach are:
1. We no longer need to limit the total file size, because adding more chunks doesn't increase the transaction size.
2. This increases the chain capacity by a huge amount - a 512MB file would previously have increased the transaction size by 16kB, whereas it now requires only an additional 32 bytes (see the worked figures after this list).
3. We no longer need to use variable difficulty; every transaction is the same size and so the difficulty can be constant no matter how large the files are.
4. Additional metadata (such as title, description, and tags) can ultimately be stored in the metadata file, as opposed to using a separate transaction & resource.
5. There is also scope for adding hashes of individual files into the metadata file, if we ever want to allow single files to be requested without having to download and build the entire resource, although this is unlikely to be available in the short term.
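The arithmetic behind point 2, assuming 1MB chunks and 32-byte hashes (figures chosen to match the 16kB example above, not taken from the code):

    public class ChunkHashSizing {

        public static void main(String[] args) {
            // Assumed figures for illustration: 1MB chunks and 32-byte chunk hashes.
            long fileSizeBytes = 512L * 1024 * 1024;
            long chunkSizeBytes = 1024 * 1024;
            int hashSizeBytes = 32;

            long chunkCount = fileSizeBytes / chunkSizeBytes;        // 512 chunks

            // Old approach: every chunk hash stored in the transaction itself
            long onChainHashBytes = chunkCount * hashSizeBytes;      // 16384 bytes (16kB)

            // New approach: only the metadata file's hash is stored on chain
            long metadataHashBytes = hashSizeBytes;                  // 32 bytes

            System.out.println(onChainHashBytes + " bytes on chain before, "
                    + metadataHashBytes + " bytes now");
        }
    }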
The only real negative is that we now have to fetch the metadata file before we know anything about the chunks for a transaction. This seems to be quite a small trade-off by comparison.
Since we're not live yet, there is no backwards support for on-chain hashes, so a new data testchain will be required. This hasn't been tested outside of unit tests yet, so there will likely be several fixes needed before it is stable.
Files are now keyed by signature, in the format:
data/si/gn/signature/hash
For times when there is no signature available (i.e. at the time of initial upload), files are keyed by hash, in the format:
data/_misc/ha/sh/hash
Files in the _misc folder are subsequently relocated to a path that is keyed by the resulting signature.
The end result is that chunks are now grouped on the filesystem by signature. This allows more transparency as to what is being hosted, and will also help simplify the reporting and management of local files.
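A hypothetical sketch of how these paths could be assembled (the real implementation may differ):

    import java.nio.file.Path;

    public class ArbitraryFilePaths {

        // Hypothetical sketch of the layout described above: data/si/gn/signature/hash
        // The first two pairs of characters of the signature form intermediate
        // directories, presumably to avoid very large flat folders.
        public static Path pathForSignature(Path dataRoot, String signature58, String hash58) {
            return dataRoot.resolve(signature58.substring(0, 2))
                    .resolve(signature58.substring(2, 4))
                    .resolve(signature58)
                    .resolve(hash58);
        }

        // Fallback layout when no signature exists yet (initial upload):
        // data/_misc/ha/sh/hash
        public static Path pathForHashOnly(Path dataRoot, String hash58) {
            return dataRoot.resolve("_misc")
                    .resolve(hash58.substring(0, 2))
                    .resolve(hash58.substring(2, 4))
                    .resolve(hash58);
        }
    }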
publicDataEnabled - whether to store decryptable data (default true)
privateDataEnabled - whether to store data without a decryption key (default false)
Each service supports basic validation params, plus has the option for an entirely custom validation function.
Initial validation settings:
- IMAGE must be less than 10MiB
- THUMBNAIL must be less than 500KiB
- METADATA must be less than 10KiB and must contain JSON keys "title", "description", and "tags"
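A sketch of how these rules might be enforced, assuming an org.json-style JSON parser is available; the class and method names are illustrative, not the actual validation functions:

    import java.nio.charset.StandardCharsets;
    import org.json.JSONObject;

    public class ServiceValidation {

        private static final long MAX_IMAGE_SIZE = 10 * 1024 * 1024;   // 10MiB
        private static final long MAX_THUMBNAIL_SIZE = 500 * 1024;     // 500KiB
        private static final long MAX_METADATA_SIZE = 10 * 1024;       // 10KiB

        // METADATA: enforce the size limit and require "title", "description" and "tags"
        public static boolean isValidMetadata(byte[] data) {
            if (data.length >= MAX_METADATA_SIZE) {
                return false;
            }
            JSONObject json = new JSONObject(new String(data, StandardCharsets.UTF_8));
            return json.has("title") && json.has("description") && json.has("tags");
        }

        public static boolean isValidImage(long sizeBytes) {
            return sizeBytes < MAX_IMAGE_SIZE;
        }

        public static boolean isValidThumbnail(long sizeBytes) {
            return sizeBytes < MAX_THUMBNAIL_SIZE;
        }
    }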
When using POST /arbitrary/{service}/{name}... it will now automatically decide which method to use (PUT/PATCH) based on a few factors:
- If there are already 10 or more layers, use PUT to reset back to a single layer
- If the next layer's patch is more than 20% of the total resource file size, use PUT
- If the next layer modifies more than 50% of the total file count, use PUT
- Otherwise, use PATCH
The PUT method causes a new base layer to be created and all previous update history for that resource becomes obsolete. The PATCH method adds a small delta layer on top of the existing layer(s).
The idea is to wipe the slate clean with a new base layer once the patches start to become demanding for the network to apply. Nodes which view the content will ultimately have build timeouts to prevent someone from, for example, deploying a resource with hundreds of complex layers, so this approach is there to maximize the chances of the resource being buildable.
The constants above (10 layers, 20% total size, 50% file count) will most likely need tweaking once we have some real-world data.
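For clarity, a hypothetical sketch of the decision logic using the thresholds above (names are illustrative only):

    public class MethodDecider {

        // Thresholds from the description above; likely to be tuned with real-world data.
        private static final int MAX_LAYERS = 10;
        private static final double MAX_SIZE_RATIO = 0.20;
        private static final double MAX_FILE_COUNT_RATIO = 0.50;

        public enum Method { PUT, PATCH }

        public static Method decide(int existingLayerCount,
                                    long patchSizeBytes, long totalResourceSizeBytes,
                                    int modifiedFileCount, int totalFileCount) {
            if (existingLayerCount >= MAX_LAYERS) {
                return Method.PUT;   // too many layers - reset to a single base layer
            }
            if ((double) patchSizeBytes / totalResourceSizeBytes > MAX_SIZE_RATIO) {
                return Method.PUT;   // patch is too large relative to the resource
            }
            if ((double) modifiedFileCount / totalFileCount > MAX_FILE_COUNT_RATIO) {
                return Method.PUT;   // patch touches too many files
            }
            return Method.PATCH;     // small delta - add a new layer on top
        }
    }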
This process could potentially be simplified if we were to modify the structure of the actual zipped data (on the writer side), but this approach is more of a "catch-all" (on the reader side) to support multiple different zip structures, giving us more flexibility. We can still choose to modify the written zip structure later, which would then cause most of this new code to be skipped.
Note: the filename of a single file is not currently retained; it is renamed to "data" as part of the packaging process. Need to decide if this is okay before we go live.
Until now we have been limited to one data resource per name/service combination. This meant that each name could only have a single website, git repo, image, video, etc., and adding another would overwrite the previous data. The identifier property now allows an optional string to be supplied with each resource, allowing an unlimited number of resources per name/service combination.
Some examples of what this will allow us to do:
- Create a video library app which holds multiple videos per name
- Same as above but for photos
- Store multiple images against each name, such as an avatar, website thumbnails, video thumbnails, etc. This will be necessary for many "system level" features.
- Attach multiple websites to each name. The default website (with blank/null identifier) would remain the entry point, but other websites could be hosted essentially as subdomains, and then linked from the default site. This also provides a means to go beyond the 500MB website size limit.
Not all of these features will exist initially, but having this identifier included in the protocol layer allows them to be added at any time.
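Conceptually, the resource key grows from (service, name) to (service, name, identifier); a hypothetical sketch, not the actual class used in core:

    import java.util.Objects;

    public class ResourceKey {

        private final String service;     // e.g. WEBSITE, IMAGE, VIDEO
        private final String name;        // registered Qortal name
        private final String identifier;  // optional; null/blank selects the default resource

        public ResourceKey(String service, String name, String identifier) {
            this.service = service;
            this.name = name;
            this.identifier = identifier;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof ResourceKey)) return false;
            ResourceKey other = (ResourceKey) o;
            return service.equals(other.service) && name.equals(other.name)
                    && Objects.equals(identifier, other.identifier);
        }

        @Override
        public int hashCode() {
            return Objects.hash(service, name, identifier);
        }
    }

Two resources under the same name and service but with different identifiers are distinct keys, which is what allows an unlimited number of resources per name/service combination.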
This is generated whenever a data resource cannot be built because it is missing data for at least one layer. Using a custom exception type here enables a few new features:
1. A single build process is now able to request missing data from all the layers that need it. Previously it would only request from the first missing layer and would then give up. This resulted in the user/application having to issue the build command multiple times rather than just once, until all layers had been requested.
2. GET /arbitrary/{service}/{name} will now block the response and retry in the background until the data arrives. This allows it to be used synchronously. Note: we'll need to add a timeout.
3. Loading a website via GET /site/{name} will avoid adding to the failed builds queue when a MissingDataException is thrown, which allows it to be quickly retried. The interface already auto refreshes, allowing the site to load as soon as it's available.
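A rough sketch of what such an exception might carry (the real MissingDataException in core may differ):

    import java.util.List;

    // Hypothetical sketch only - field and accessor names are illustrative.
    public class MissingDataException extends Exception {

        // Signatures of the layers whose data is missing, so a single build
        // attempt can request all of them at once rather than one per attempt.
        private final List<byte[]> missingLayerSignatures;

        public MissingDataException(String message, List<byte[]> missingLayerSignatures) {
            super(message);
            this.missingLayerSignatures = missingLayerSignatures;
        }

        public List<byte[]> getMissingLayerSignatures() {
            return this.missingLayerSignatures;
        }
    }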
This maps ARBITRARY transactions to peer addresses, but also includes additional metadata/stats to track the success rate and reachability.
Once a node receives files for a transaction, it broadcasts this info to its peers so they can update their records.
TLDR: this allows us to locate peers that are hosting a copy of the file we need.
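A hypothetical sketch of the kind of mapping and stats involved (not the actual implementation):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class PeerHostingTracker {

        // For each ARBITRARY transaction signature (base58), the peer addresses known
        // to host its files, plus simple stats on success rate and reachability.
        public static class PeerStats {
            public int successfulFetches;
            public int failedFetches;
            public long lastSeenTimestamp;
        }

        private final Map<String, Map<String, PeerStats>> peersBySignature = new ConcurrentHashMap<>();

        public void recordHostingPeer(String signature58, String peerAddress) {
            this.peersBySignature
                    .computeIfAbsent(signature58, s -> new ConcurrentHashMap<>())
                    .computeIfAbsent(peerAddress, p -> new PeerStats())
                    .lastSeenTimestamp = System.currentTimeMillis();
        }
    }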
This ensures that only the owner of a name is able to update data associated with that name.
Note that this doesn't take into account the ability for group members to update a resource, so this will need to be modified when that feature is ultimately introduced (likely after v3.0).
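A minimal sketch of the ownership rule, with hypothetical names and a public-key comparison standing in for however core actually resolves name ownership:

    import java.util.Arrays;

    public class NameOwnershipCheck {

        // Hypothetical sketch: an update is only permitted if the transaction creator
        // currently owns the name the data is registered against.
        // Group-member updates are not yet considered.
        public static boolean isUpdatePermitted(byte[] transactionCreatorPublicKey,
                                                byte[] nameOwnerPublicKey) {
            return Arrays.equals(transactionCreatorPublicKey, nameOwnerPublicKey);
        }
    }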