We don't want the network to be spammed when a file isn't available from any reachable peers. This feature ensures retries are spaced out over longer timeframes. Basic logic:
- Wait 5 minutes in between failed attempts
- After 5 failed attempts (i.e. 25 mins) only try once per day from then on
- A core restart resets the counters
The stats gathered here can also be used to inform the core of when it should attempt a direct connection with a peer to obtain the data. That part isn't implemented yet.
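A minimal sketch of how the retry spacing could be tracked in memory; class, method, and field names are illustrative assumptions rather than the actual implementation, and only the 5-minute / 5-attempt / once-per-day values come from the notes above:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only - names are assumptions, constants mirror the notes above.
public class FileRequestBackoff {
    private static final long SHORT_RETRY_INTERVAL = 5 * 60 * 1000L;       // 5 minutes
    private static final long LONG_RETRY_INTERVAL = 24 * 60 * 60 * 1000L;  // 1 day
    private static final int MAX_SHORT_ATTEMPTS = 5;

    // Keyed by transaction signature; in-memory only, so a core restart resets the counters.
    private final Map<String, Integer> attemptCounts = new ConcurrentHashMap<>();
    private final Map<String, Long> lastAttemptTimes = new ConcurrentHashMap<>();

    public boolean shouldRetry(String signature, long now) {
        int attempts = attemptCounts.getOrDefault(signature, 0);
        long lastAttempt = lastAttemptTimes.getOrDefault(signature, 0L);

        // First 5 failures: wait 5 minutes between attempts; after that, once per day.
        long interval = (attempts < MAX_SHORT_ATTEMPTS) ? SHORT_RETRY_INTERVAL : LONG_RETRY_INTERVAL;
        return now - lastAttempt >= interval;
    }

    public void recordFailure(String signature, long now) {
        attemptCounts.merge(signature, 1, Integer::sum);
        lastAttemptTimes.put(signature, now);
    }
}
```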
This allows for custom list creation without the need for creating API endpoints to go along with it. This should save time now that we are using lists more.
- "APP" will allow for user-created apps and the Qortal app store
- "METADATA" will be used to supply info about apps/websites/resources, such as title, description, tags, etc
When using POST /arbitrary/{service}/{name}... the core will now automatically decide which method to use (PUT or PATCH) based on a few factors:
- If there are already 10 or more layers, use PUT to reset back to a single layer
- If the next layer's patch is more than 20% of the total resource file size, use PUT
- If the next layer modifies more than 50% of the total file count, use PUT
- Otherwise, use PATCH
The PUT method causes a new base layer to be created and all previous update history for that resource becomes obsolete. The PATCH method adds a small delta layer on top of the existing layer(s).
The idea is to wipe the slate clean with a new base layer once the patches become demanding for the network to apply. Nodes which view the content will ultimately have build timeouts, to prevent someone from deploying a resource with hundreds of complex layers, for example, so this approach maximizes the chances of the resource remaining buildable.
The constants above (10 layers, 20% total size, 50% file count) will most likely need tweaking once we have some real world data.
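A sketch of that decision logic; the thresholds mirror the constants above, while class and parameter names are illustrative assumptions:

```java
// Illustrative sketch of the PUT-vs-PATCH decision described above.
public class UpdateMethodChooser {
    private static final int MAX_LAYERS = 10;
    private static final double MAX_PATCH_SIZE_RATIO = 0.20;     // 20% of total resource size
    private static final double MAX_MODIFIED_FILE_RATIO = 0.50;  // 50% of total file count

    enum Method { PUT, PATCH }

    public Method choose(int existingLayerCount, long patchSize, long totalResourceSize,
                         int modifiedFileCount, int totalFileCount) {
        if (existingLayerCount >= MAX_LAYERS) {
            return Method.PUT; // too many layers - reset back to a single base layer
        }
        if ((double) patchSize / totalResourceSize > MAX_PATCH_SIZE_RATIO) {
            return Method.PUT; // patch is too large relative to the full resource
        }
        if ((double) modifiedFileCount / totalFileCount > MAX_MODIFIED_FILE_RATIO) {
            return Method.PUT; // patch touches too many files
        }
        return Method.PATCH;   // small delta - add it on top of the existing layers
    }
}
```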
We may choose to save on CPU by not compressing individual files, so this allows the network to support that. However, compression is still used by default, to reduce file sizes.
This process could potentially be simplified if we were to modify the structure of the actual zipped data (on the writer side), but this approach is more of a "catch-all" (on the reader side) to support multiple different zip structures, giving us more flexibility. We can still modify the written zip structure later if we choose to, which would then cause most of this new code to be skipped.
Note: the filename of a single file is not currently retained; it is renamed to "data" as part of the packaging process. Need to decide if this is okay before we go live.
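For illustration, the reader-side "catch-all" could normalise whatever structure the unzipped data turns out to have; class and method names here are assumptions, and only the single-file "data" convention comes from the note above:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Illustrative sketch: normalise the extracted directory regardless of how the writer structured the zip.
public class ExtractedDataNormalizer {

    public Path normalize(Path extractedDir) throws IOException {
        List<Path> entries;
        try (Stream<Path> stream = Files.list(extractedDir)) {
            entries = stream.collect(Collectors.toList());
        }

        if (entries.size() == 1 && Files.isDirectory(entries.get(0))) {
            // Writer wrapped everything in a single top-level directory - descend into it.
            return normalize(entries.get(0));
        }
        if (entries.size() == 1 && entries.get(0).getFileName().toString().equals("data")) {
            // Single-file resource: the original filename is not retained (renamed to "data").
            return entries.get(0);
        }
        // Multi-file resource: serve the directory as-is.
        return extractedDir;
    }
}
```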
Thumbnails will be used in order to show logos/screenshots in the list of websites or other resources. Playlists will allow for media apps to group videos/audio/images into collections, e.g. albums.
Until now we have been limited to one data resource per name/service combination. This meant that each name could only have a single website, git repo, image, video, etc, and adding another would overwrite the previous data. The identifier property now allows an optional string to be supplied with each resource, therefore allowing an unlimited number of resources per name/service combination.
Some examples of what this will allow us to do:
- Create a video library app which holds multiple videos per name
- Same as above but for photos
- Store multiple images against each name, such as an avatar, website thumbnails, video thumbnails, etc. This will be necessary for many "system level" features.
- Attach multiple websites to each name. The default website (with blank/null identifier) would remain the entry point, but other websites could be hosted essentially as subdomains, and then linked from the default site. This also provides a means to go beyond the 500MB website size limit.
Not all of these features will exist initially, but having this identifier included in the protocol layer allows them to be added at any time.
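Conceptually, the change moves addressing from a (name, service) pair to a (name, service, identifier) triple. The class below is purely illustrative and not the actual data model:

```java
// Conceptual sketch: resources are now keyed by (name, service, identifier)
// rather than just (name, service). Names here are assumptions.
public record ArbitraryResourceKey(String name, String service, String identifier) {

    // A blank/null identifier still addresses the "default" resource for that
    // name/service combination, e.g. the entry-point website.
    public boolean isDefault() {
        return identifier == null || identifier.isEmpty();
    }
}
```

For example, a default website for a name can coexist with additional websites, thumbnails, or videos for the same name, each distinguished only by its identifier.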
This is generated whenever a data resource cannot be built because it is missing data for at least one layer. Using a custom exception type here enables a few new features:
1. A single build process is now able to request missing data from all the layers that need it. Previously it would only request from the first missing layer and would then give up. This resulted in the user/application having to issue the build command multiple times rather than just once, until all layers had been requested.
2. GET /arbitrary/{service}/{name} will now block the response and retry in the background until the data arrives. This allows it to be used synchronously. Note: we'll need to add a timeout.
3. Loading a website via GET /site/{name} will avoid adding to the failed builds queue when a MissingDataException is thrown, which allows it to be quickly retried. The interface already auto refreshes, allowing the site to load as soon as it's available.
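An illustrative sketch of the custom exception described above; the real class in the core may differ. The key point is that it can carry every missing layer, so a single build attempt can request data for all of them at once instead of giving up at the first:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only - field and method names are assumptions.
public class MissingDataException extends Exception {

    private final List<byte[]> missingLayerSignatures = new ArrayList<>();

    public MissingDataException(String message) {
        super(message);
    }

    public void addMissingLayer(byte[] transactionSignature) {
        this.missingLayerSignatures.add(transactionSignature);
    }

    public List<byte[]> getMissingLayerSignatures() {
        return this.missingLayerSignatures;
    }
}
```

A build process can then catch this exception, request data for every signature it contains, and skip the failed builds queue when the data simply hasn't arrived yet.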
We may need to temporarily hold files for the purpose of viewing, but restrictions need to be in place to avoid these being served to peers or stored for longer than they are needed.
- If storage policy includes "FOLLOWING", only process transactions relating to the followed names.
- If storage policy is "ALL", process all transactions.
- If storage policy is "NONE" or "VIEWED", don't process or prefetch any data.
This will be used to coordinate all build processes and threads, keeping them separate from the ArbitraryDataManager class, which was getting a bit cluttered.
This causes the build to fail on the first pass due to missing chunks; however, it now fails with a message indicating that it should be retried shortly. The website loader is already set up in such a way that it will be automatically retried, during which time the loading screen is shown.
Also added code to remove the resource from the "failed builds list" once the chunks arrive, so that it is able to be rebuilt sooner than the FAILURE_TIMEOUT (currently 5 minutes).
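A rough sketch of the failed-builds bookkeeping; only FAILURE_TIMEOUT and the 5-minute value come from the notes above, the rest is an illustrative assumption:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the "failed builds list" behaviour described above.
public class FailedBuildsTracker {
    public static final long FAILURE_TIMEOUT = 5 * 60 * 1000L; // 5 minutes

    // Resource key -> timestamp of the last failed build attempt.
    private final Map<String, Long> failedBuilds = new ConcurrentHashMap<>();

    public boolean canRebuild(String resourceKey, long now) {
        Long failedAt = failedBuilds.get(resourceKey);
        return failedAt == null || now - failedAt >= FAILURE_TIMEOUT;
    }

    public void recordFailure(String resourceKey, long now) {
        failedBuilds.put(resourceKey, now);
    }

    // Called when the missing chunks arrive, so the resource can be rebuilt
    // immediately instead of waiting out the timeout.
    public void onChunksArrived(String resourceKey) {
        failedBuilds.remove(resourceKey);
    }
}
```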
- Don't attempt to fetch data for transactions which fall outside of the storage policy
- Delete files relating to transactions that are no longer within the scope of the storage policy
Note: some additional work needs to be done to ensure that viewed files are deleted when using a storage policy that excludes "VIEWED" content.
This means that no additional structural code is required to add new lists. The only non-generic aspect is the API endpoints - it's best to keep these specific until we have a need for user-created lists.
There's no real need to maintain support for signature mapping anymore. Using this new method means that the latest version of a site is always served via the traditional domain name, whereas using transaction signatures caused older versions to be shown.
Example settings.json configuration:
"domainMapServiceEnabled": true,
"domainMapServicePort": 80,
"domainMap": [
{
"domain": "webdemo.qortal.uk",
"name": "QortalDemo"
},
{
"domain": "www.reqorder.org",
"name": "ReQorder"
}
]
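For illustration, resolving an incoming request could look roughly like this; class and method names are assumptions, not the actual implementation, and the map entries would be loaded from the "domainMap" settings above:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: resolve the HTTP Host header to a Qortal name using the
// domainMap entries from settings.json, then serve that name's latest website.
public class DomainMapResolver {
    private final Map<String, String> domainToName = new HashMap<>();

    public DomainMapResolver() {
        // In practice these would come from the "domainMap" settings entries.
        domainToName.put("webdemo.qortal.uk", "QortalDemo");
        domainToName.put("www.reqorder.org", "ReQorder");
    }

    // Returns the registered name whose latest website should be served, or null if unmapped.
    public String resolveName(String hostHeader) {
        return domainToName.get(hostHeader);
    }
}
```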
This maps ARBITRARY transactions to peer addresses, but also includes additional metadata/stats to track the success rate and reachability.
Once a node receives files for a transaction, it broadcasts this info to its peers so they can update their records.
TLDR: this allows us to locate peers that are hosting a copy of the file we need.
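A conceptual sketch of what one entry in that transaction-to-peer mapping might hold; field names are illustrative assumptions:

```java
// Illustrative sketch of a single record in the ARBITRARY transaction -> peer mapping.
public class ArbitraryPeerRecord {
    private final byte[] signature;     // ARBITRARY transaction signature
    private final String peerAddress;   // peer believed to hold the files
    private int successes;              // successful fetches from this peer
    private int failures;               // failed fetch attempts
    private long lastAttempted;         // used to judge reachability over time
    private long lastRetrieved;

    public ArbitraryPeerRecord(byte[] signature, String peerAddress) {
        this.signature = signature;
        this.peerAddress = peerAddress;
    }

    public void recordSuccess(long now) {
        this.successes++;
        this.lastAttempted = now;
        this.lastRetrieved = now;
    }

    public void recordFailure(long now) {
        this.failures++;
        this.lastAttempted = now;
    }
}
```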
This ensures that only the owner of a name is able to update data associated with that name.
Note that this doesn't take into account the ability for group members to update a resource, so this will need modifying when that feature is ultimately introduced (likely after v3.0).
Note that this is unlikely to be the cause of the zero-timestamp issues seen on a subset of nodes - there is still likely to be another problem that needs fixing.
We may not need to validate this at all now that we have the ability to validate the current layer, but I'll leave it in as it could be useful for debugging. It is disabled by default, so it isn't an issue.
- The "diff type" is now specified per file, allowing for different diff methods in each modified file.
- Patches will only be created when both the before and after files are less than 100kiB in size.
- Patches are validated after creation; if a patch is invalid, the process falls back to including the entire file.
This has identified a bug where patching fails for files without trailing newline characters, which still needs to be fixed. Until then, it will fall back to including the entire file in these cases.
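A rough sketch of the selection and validation fallback; only the 100kiB limit and the fall-back-to-whole-file behaviour come from the notes above, everything else (names, enum values) is an illustrative assumption:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

// Illustrative sketch of the per-file diff selection and fallback described above.
public class FileDiffSelector {
    private static final long MAX_DIFF_FILE_SIZE = 100 * 1024L; // 100kiB

    enum DiffType { UNIFIED_DIFF, COMPLETE_FILE }

    public DiffType chooseDiffType(Path before, Path after) throws IOException {
        // Only attempt a patch when both versions are small enough.
        if (Files.size(before) > MAX_DIFF_FILE_SIZE || Files.size(after) > MAX_DIFF_FILE_SIZE) {
            return DiffType.COMPLETE_FILE;
        }
        return DiffType.UNIFIED_DIFF;
    }

    public DiffType validateOrFallBack(Path after, Path patched) throws IOException {
        // After creating the patch, apply it and compare against the real "after" file.
        // If the result differs (e.g. the missing-trailing-newline bug), include the whole file instead.
        byte[] expected = Files.readAllBytes(after);
        byte[] actual = Files.readAllBytes(patched);
        return Arrays.equals(expected, actual) ? DiffType.UNIFIED_DIFF : DiffType.COMPLETE_FILE;
    }
}
```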
This limits the amount of additional space needed to the size of the compressed bootstrap (currently just under 4GB for full nodes, or 200MB for top-only nodes).