Additional params:
- timestamp: to allow for hard forks. Default: the current time
- level: the account level, to allow for the future possibility of different fees per level. Not currently used.
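A minimal sketch of how such a lookup might combine the two params (all names and values here are hypothetical, not the actual core API):

private static final long DEFAULT_UNIT_FEE = 1L; // illustrative value only

public long getUnitFee(Long timestamp, Integer level) {
    // timestamp defaults to the current time; it exists so a hard fork can switch fee schedules
    long effectiveTimestamp = timestamp != null ? timestamp : System.currentTimeMillis();

    // level is accepted but not yet used - all account levels currently pay the same fee
    return unitFeeAtTimestamp(effectiveTimestamp);
}

private long unitFeeAtTimestamp(long timestamp) {
    return DEFAULT_UNIT_FEE; // placeholder: a real schedule would vary by timestamp
}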
This should hopefully improve network stability whilst a more advanced solution is being worked on. It also allows us to collect some data on how well the network behaves when there are fewer block candidates. It should have no effect on minting rewards (other than any side effects as a result of improved network stability).
This allows the GET /arbitrary/{service}/{name} and GET /{service}/{name}/{identifier} endpoints to operate without any authentication. Useful for those who are running public QDN nodes and need to serve data over http(s).
Increased GetOnlineAccountsMessage.MAX_ACCOUNT_COUNT from 1000 to 5000.
The V2 versions are more efficiently encoded and also cache the payload bytes
which reduces CPU when sending to multiple peers.
Serialization / deserialization unit tests included.
Tentative V2 message activation is set at core version 3.1.2 (see Controller.ONLINE_ACCOUNTS_V2_PEER_VERSION).
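A rough sketch of the version gate, assuming the core's existing Peer and message classes and using approximate method and constant names rather than the exact implementation:

private Message buildOnlineAccountsMessage(Peer peer, List<OnlineAccountData> onlineAccounts) {
    // Peers below the activation version still receive the legacy V1 message
    if (peer.getPeersVersion() >= ONLINE_ACCOUNTS_V2_PEER_VERSION)
        return new GetOnlineAccountsV2Message(onlineAccounts); // compact encoding, cached payload bytes

    return new GetOnlineAccountsMessage(onlineAccounts);
}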
Note that this is not always accurate - it relates to the largest transaction size for this name, not necessarily the latest or the combined size of multiple transactions. This can be made accurate as soon as we have a "Resources" table to store this info. Trying to do it before then will be too inefficient in terms of queries.
A longer term solution to this hypothetical problem is to store relays in RAM or a temp folder only. Or maybe add an indicator file to instruct the cleanup manager to delete it. But this will require more development. 10 deletion attempts (each 1 second apart) should be enough for now.
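A minimal sketch of that bounded retry, with a hypothetical path variable:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

private void deleteRelayedFile(Path relayedFilePath) throws InterruptedException {
    for (int attempt = 0; attempt < 10; attempt++) {
        try {
            Files.deleteIfExists(relayedFilePath);
            return; // deleted, or already gone
        } catch (IOException e) {
            // Possibly still being served to another peer - wait 1 second and retry
            Thread.sleep(1000L);
        }
    }
}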
This encourages shorter relays, since longer ones will take more time to respond, and also prevents a peer from intentionally taking a long time to respond so that it overwrites an existing entry.
Longer term we could consider keeping track of all respondents for each hash, if there are still issues with data retrieval. I suspect this won't be needed though, as the requesting peer has 16+ different peers connected, and therefore potentially 16 different mappings already.
Should fix an issue with v4 transactions where these aren't used. This matches the NOT NULL DEFAULT 0 constraint, which automatically transitions existing v4 ARBITRARY transactions to use the same defaults.
This allows node operators to return their authentication to the legacy rules (local requests allowed), without introducing javascript vulnerabilities. The websites, apps, etc are just prevented from loading, to avoid the risk of any API calls from javascript.
The modifications made to these methods were causing issues with other transaction types that were expecting blank strings instead of null. To keep risk to a minimum, I have split into two different sets of functions until there is more time to unify them.
1) Each relay request expires after 5 seconds, after which nodes will stop relaying it, preventing any kind of infinite loop. So it has to reach the destination peer within 5 seconds. This should be fine, because the original peer's request would timeout anyway, so there's nothing to be gained by continuing to relay it.
2) Each relay request stops being forwarded after 3 "hops" - i.e. once it has been relayed through 3 different peers, it will no longer be transmitted any further. If we assume that each node has 16 connections, that allows it to reach a theoretical maximum of 4096 peers in 3 hops. In practice it will be less, and may not reach everyone due to peer "islands". But it will automatically retry a few times on a timer, so should hopefully find what it needs eventually. Plus, it still has the ability to make a direct connection to anyone hosting the data, as long as they are port forwarded.
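A rough sketch of these two rules (constants and names are assumptions, not the core's actual implementation):

private static final long RELAY_REQUEST_MAX_AGE_MS = 5 * 1000L;
private static final int RELAY_REQUEST_MAX_HOPS = 3;

private boolean shouldRelay(long requestTimestamp, int hopCount) {
    if (System.currentTimeMillis() - requestTimestamp > RELAY_REQUEST_MAX_AGE_MS)
        return false; // expired - the original requester will have timed out by now anyway

    if (hopCount >= RELAY_REQUEST_MAX_HOPS)
        return false; // hop limit reached - stop forwarding

    return true;
}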
This is likely longer than needed, but it's best to allow extra for now and then optimize the timeouts once we've had some experience with real world data.
This involves modifying the log4j2.properties file on node startup to fix an incompatibility with ${dirname:-}. Thanks to AlphaX Projects for tracking down this incompatibility.
This allows an entire registered name to be preauthorized, allowing, for instance, a website to automatically request other resources from the same author, such as videos.
This delegates the task to the browser rather than doing it in java. It should also catch a few remaining types of links that we had missed - e.g. ones that originate from within js files.
This avoids duplicate entries from the same host/ip with differing ports. This can occur due to some requests using ephemeral port numbers. Ideally we would filter these out altogether, but this at least acts as a safety net to prevent a very cluttered db and associated "broadcast storm". The main tradeoff here is that multiple nodes on the same IP address will be recorded as a single entry. This doesn't seem like it will be a major limitation, because one of them will remain available.
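A rough sketch of the normalisation, with a hypothetical helper name (assumes IPv4 or bracketed IPv6 host:port strings):

private String addressWithoutPort(String peerAddress) {
    // "203.0.113.5:54321" -> "203.0.113.5"; "[::1]:12392" -> "[::1]"
    int portSeparator = peerAddress.lastIndexOf(':');
    return portSeparator > 0 ? peerAddress.substring(0, portSeparator) : peerAddress;
}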
Since some files won't have any mirrors, this prevents the cleanup manager from deleting the only copy in existence when freeing up space. This feature can be disabled by setting "originalCopyIndicatorFileEnabled": false in settings.json (or by deleting the ".original" files). The trade off is that the only copy in existence could be deleted if space gets low.
This will also allow for better reporting of own vs third party files in the local UI (not yet implemented).
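A conceptual sketch only; the indicator file's exact name and placement are assumptions:

import java.nio.file.Files;
import java.nio.file.Path;

private boolean isSafeToDelete(Path hostedFilePath, boolean originalCopyIndicatorFileEnabled) {
    if (!originalCopyIndicatorFileEnabled)
        return true; // feature disabled in settings.json - behave as before

    // Keep the file if its folder holds an ".original" indicator, as it may be the only copy
    Path indicatorPath = hostedFilePath.getParent().resolve(".original");
    return !Files.exists(indicatorPath);
}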
This allows for consistent messaging about each status to be shown in different parts of the system. Previously these strings were hardcoded in the loading screen html so were inaccessible elsewhere.
Logs can be reinstated by adding these lines to log4j2.properties:
logger.arbitrary.name = org.qortal.arbitrary
logger.arbitrary.level = debug
logger.arbitrarycontroller.name = org.qortal.controller.arbitrary
logger.arbitrarycontroller.level = debug
This is the start of a refactor to use ArbitraryDataResource objects rather than passing around separate resourceId, service and identifier, or other duplicate objects. Most of this will need to be done after the initial release due to time constraints.
The simplest solution was to only include a newline at the end of the patch file if the source file ended with a newline. This informs the merge code as to whether to add a newline to the end of the resulting file. Without this, the checksums do not match (and therefore the complete file would previously have been included as a result).
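A simplified helper illustrating the check (not the actual diff implementation):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

private static boolean endsWithNewline(Path filePath) throws IOException {
    byte[] contents = Files.readAllBytes(filePath);
    // Only terminate the patch (and later the merged file) with a newline if this is true
    return contents.length > 0 && contents[contents.length - 1] == '\n';
}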
Several parts of the code request resources to be loaded/built, and these separate threads were tripping over each other and causing build failures. This has been avoided by making sure the resource isn't already building before requesting it.
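Conceptually (method names here are hypothetical), the request path now checks for an in-progress build first:

private void requestBuild(ArbitraryDataResource resource) {
    synchronized (resource) {
        if (resource.isBuilding())
            return; // another thread is already building this resource

        resource.build();
    }
}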
Requesting a resource with the identifier "default" now maps to a blank string. This allows the /arbitrary/{service}/{name}/{identifier} endpoints to be used for default resources too, as they previously didn't support a blank string as the third parameter.
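A minimal sketch of the mapping (helper name is hypothetical):

private String normaliseIdentifier(String identifier) {
    // The literal identifier "default" maps to the blank identifier used by default resources
    return "default".equals(identifier) ? "" : identifier;
}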
Originally I had hoped to do this by using an intermediate iframe, to keep the message handlers separate from the user content, but this created CORS issues of its own. This approach is far simpler.
Now that we require API key authentication - and therefore security is greatly improved - many users will want to bypass the whitelist in order for the UI to communicate with their remote node. This gives an easy way to do this, without having to override the default whitelist. This boolean can now optionally be added to the default settings.json that is published with new releases, without removing the code's ability to update default whitelist values.
This should fix issue where it would take up to 30 seconds to return for a recent block, and would consume masses of CPU due to having to base58 encode the online accounts signatures. Base58 is very slow and made this API endpoint almost unusable for recent blocks, due to them having untrimmed online accounts signatures.
The procedure outlined in commit f4b06fb is now incorrect. Updated procedure:
- A node can opt into relay mode via the "relayModeEnabled":true setting.
- From this time onwards, they will ask their peers if they ever receive a file list request that they cannot serve by themselves.
- Whenever a peer responds with a file list, it is forwarded on to the originally requesting peer, complete with the peer address of the node that responded. Currently, only the first response is forwarded, but we may later decide to forward all responses.
- As well as forwarding, the relay peer keeps track of the peers that report to be holding hashes (these mappings are held for 30 seconds).
- The originally requesting peer can then make a request to the relay peer for the data file(s).
- The relay peer uses the mapping to forward the request on to another peer, and then forwards the response (i.e. the data file) back to the peer that originally requested the file.
It's best that the source peer's address isn't exposed to the requesting peer. The relay peer can keep track of this mapping itself.
The only real issue with this approach is that we can't use data from ArbitraryDataFileListMessage to update our ArbitraryPeers data, because we can't distinguish between relay peers and hosting peers. But this isn't something we currently do anyway, as we have the ARBITRARY_SIGNATURES message type to take care of updating ArbitraryPeers mappings.
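A rough sketch of that short-lived mapping, with assumed types and names (the real implementation may differ):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RelayMap {
    private static final long RELAY_MAPPING_EXPIRY_MS = 30 * 1000L;

    private static class RelayEntry {
        final String peerAddress;
        final long timestamp = System.currentTimeMillis();

        RelayEntry(String peerAddress) {
            this.peerAddress = peerAddress;
        }
    }

    private final Map<String, RelayEntry> hashToResponder = new ConcurrentHashMap<>();

    public void recordResponder(String hash58, String peerAddress) {
        // Only the first responder is kept, so slow peers can't overwrite an existing entry
        hashToResponder.putIfAbsent(hash58, new RelayEntry(peerAddress));
    }

    public String findResponder(String hash58) {
        removeExpiredEntries();
        RelayEntry entry = hashToResponder.get(hash58);
        return entry != null ? entry.peerAddress : null;
    }

    private void removeExpiredEntries() {
        long now = System.currentTimeMillis();
        hashToResponder.values().removeIf(entry -> now - entry.timestamp > RELAY_MAPPING_EXPIRY_MS);
    }
}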
Setting this to false prevents new connections being made to peers that report to have the data that is needed. This is likely only useful for testing, as disabling it in production would reduce the success rate of data retrieval.
This reuses most of the code already in place in the core related to forwarding.
- A node can opt into relay mode via the "relayModeEnabled": true setting
- From this time onwards, they will ask their peers if they ever receive a file list request that they cannot serve by themselves
- Whenever a peer responds with a file list, it is forwarded on to the originally requesting peer, complete with the peer address of the node that responded
- The original peer can then request the data file(s) itself using a similar approach, specifying the IP address of the ultimate peer so that the relay node knows who to ask. This part is not implemented yet.
This makes them extremely generic, improves filenames, and makes it easier to create custom lists. It isn't backwards compatible, but the lists feature isn't working properly in core 2.1+ anyway.
It turns out that when you call SLEEP_UNTIL_MESSAGE, the AT resumes from that very same line on the next execution. The original code incorrectly assumed that it would execute from the restart position (SET_PCS).
So sleeping can be thought of as pausing one execution half way through, rather than ending it.
This caused a bug, because once the AT receives a transaction it wakes up and resumes from the SLEEP_UNTIL_MESSAGE line, which is after the refund check. Even when it loops back around again it lands on labelRedeemTxnLoop = codeByteBuffer.position(); which is again after the refund check.
For now, the simplest fix is to only sleep when listed. We could have alternatively moved the SLEEP_UNTIL_MESSAGE above GET_BLOCK_TIMESTAMP, but this would still require users to send a random transaction to the AT to trigger the refund. Given that the ATs are only "alive" for 30 minutes once the trade begins, it's simpler to just execute every block and therefore allow the refunds to happen automatically.
Also modified the directory structure of single file resources to make them consistent with multi file resources.
For multi file resources, the original folder is renamed to "data", resulting in a layout such as:
data/file1.txt
data/file2.txt
data/dir1/file3.txt
For single file resources, the file is now moved into a "data" folder, like so:
data/file.txt
This is slightly unconventional, but is appropriate within the context of QDN to keep everything consistent.
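A sketch of the single file case (paths hypothetical):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

private Path moveIntoDataFolder(Path singleFilePath) throws IOException {
    // Results in data/file.txt, mirroring the layout used by multi file resources
    Path dataFolder = singleFilePath.getParent().resolve("data");
    Files.createDirectories(dataFolder);

    return Files.move(singleFilePath, dataFolder.resolve(singleFilePath.getFileName()));
}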
This adds support for "unconfirmable" data uploads, which will be useful for Q-Chat. It also handles cases where a transaction is orphaned and then subsequently becomes invalid.
A website must contain one of the following files in its root directory to be considered valid:
index.html
index.htm
default.html
default.htm
home.html
home.htm
This is the first page that is loaded when visiting a Qortal-hosted website.
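A minimal sketch of that validation check:

import java.nio.file.Files;
import java.nio.file.Path;

private static final String[] INDEX_FILES = {
    "index.html", "index.htm", "default.html", "default.htm", "home.html", "home.htm"
};

private boolean containsIndexFile(Path websiteRoot) {
    // At least one recognised index file must exist in the root directory
    for (String indexFile : INDEX_FILES)
        if (Files.exists(websiteRoot.resolve(indexFile)))
            return true;

    return false;
}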
This would happen if a name fills its limit and additional names are then followed. Alternatively it could happen if the total storage capacity reduces due to disk space being used by other apps. Chunks are deleted at random to reduce the chance of the same chunk being deleted everywhere. Data loss is possible here for transactions that don't have many peers. We'll have to see in practice how much of a problem this is, but it's better than the scenario where one content creator consumes all space on their followers' nodes, leaving no space for other names that are subsequently followed.
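A conceptual sketch of the random deletion (selection and accounting details are assumptions):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Random;

private void deleteRandomChunks(List<Path> hostedChunks, long bytesToFree) throws IOException {
    Random random = new Random();
    long freed = 0;

    // Random selection reduces the chance of every node dropping the same chunk
    while (freed < bytesToFree && !hostedChunks.isEmpty()) {
        Path chunk = hostedChunks.remove(random.nextInt(hostedChunks.size()));
        freed += Files.size(chunk);
        Files.deleteIfExists(chunk);
    }
}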