The modifications made to these methods were causing issues with other transaction types, which expected blank strings rather than null. To keep risk to a minimum, I have split them into two separate sets of functions until there is time to unify them.
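A minimal sketch of the idea, using made-up method names rather than the actual ones in the codebase:

// Hypothetical sketch of the two function sets (not the real method names).
public final class TransactionStrings {

    // Older transaction types expect "" when a value is absent.
    public static String valueOrEmpty(String value) {
        return value != null ? value : "";
    }

    // Newer code paths expect null when a value is absent.
    public static String valueOrNull(String value) {
        return value != null && !value.isEmpty() ? value : null;
    }
}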
1) Each relay request expires after 5 seconds, after which nodes stop relaying it, preventing any kind of infinite loop. This means it has to reach the destination peer within 5 seconds, which should be fine: the original peer's request would time out by then anyway, so there is nothing to be gained by continuing to relay it.
2) Each relay request stops being forwarded after 3 "hops" - i.e. once it has been relayed through 3 different peers, it will not be transmitted any further. If we assume that each node has 16 connections, that allows it to reach a theoretical maximum of 16^3 = 4096 peers in 3 hops. In practice it will reach fewer, and may not reach everyone due to peer "islands". But it will automatically retry a few times on a timer, so it should hopefully find what it needs eventually. Plus, it can still make a direct connection to any peer hosting the data, as long as that peer is port forwarded. A sketch combining both checks follows below.
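A minimal sketch of how the expiry and hop-limit checks might combine, using illustrative names and constants rather than the actual Qortal code:

// Illustrative relay-forwarding check: drop a request once it is older
// than 5 seconds or has already passed through 3 peers.
public class RelayRequest {
    private static final long RELAY_EXPIRY_MS = 5_000L;
    private static final int MAX_HOPS = 3;

    private final long createdTimestamp;
    private int hops;

    public RelayRequest(long createdTimestamp, int hops) {
        this.createdTimestamp = createdTimestamp;
        this.hops = hops;
    }

    public boolean shouldRelay(long now) {
        return (now - this.createdTimestamp) < RELAY_EXPIRY_MS
                && this.hops < MAX_HOPS;
    }

    public void incrementHops() {
        this.hops++;
    }
}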
This is likely longer than needed, but it's best to allow some extra headroom for now and then optimize the timeouts once we've had some experience with real-world data.
This involves modifying the log4j2.properties file on node startup to fix an incompatibility with ${dirname:-}. Thanks to AlphaX Projects for tracking it down.
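A rough sketch of that kind of startup rewrite, assuming the fix simply strips the ${dirname:-} lookup from the file (the actual replacement Qortal applies may differ):

// Illustrative only: rewrite log4j2.properties at startup, removing the
// unsupported ${dirname:-} lookup.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LogConfigFixer {
    public static void fixLog4jProperties(Path propertiesFile) throws IOException {
        String contents = Files.readString(propertiesFile);
        String fixed = contents.replace("${dirname:-}", "");
        if (!fixed.equals(contents)) {
            Files.writeString(propertiesFile, fixed);
        }
    }
}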
This allows an entire registered name to be preauthorized, so that, for instance, a website can automatically request other resources from the same author, such as videos.
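Conceptually, the authorization key becomes the registered name rather than an individual resource. A minimal sketch, with made-up class and method names:

// Sketch: preauthorizing a whole registered name instead of a single resource.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class Preauthorizer {
    private final Set<String> authorizedNames = ConcurrentHashMap.newKeySet();

    public void preauthorizeName(String registeredName) {
        this.authorizedNames.add(registeredName.toLowerCase());
    }

    // Any resource published under an authorized name is allowed, so a website
    // can pull in videos or other data from the same author without new prompts.
    public boolean isAuthorized(String registeredName) {
        return this.authorizedNames.contains(registeredName.toLowerCase());
    }
}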
This delegates the task to the browser rather than handling it in Java. It should also catch a few remaining types of links that we had missed - e.g. ones that originate from within JS files.
This avoids duplicate entries from the same host/IP with differing ports, which can occur when some requests use ephemeral port numbers. Ideally we would filter these out altogether, but this at least acts as a safety net to prevent a very cluttered database and the associated "broadcast storm". The main tradeoff is that multiple nodes on the same IP address will be recorded as a single entry. This doesn't seem like a major limitation, because at least one of them will remain available.
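In effect, the deduplication key is the host alone, with the port discarded. A minimal sketch, using made-up names:

// Sketch: collapse peer entries to one per host, discarding the port so
// ephemeral-port connections don't create duplicate rows.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class KnownPeers {
    private final Map<String, Long> lastSeenByHost = new ConcurrentHashMap<>();

    public void recordPeer(String host, int port, long now) {
        // Port is intentionally ignored: "203.0.113.5:48231" and
        // "203.0.113.5:12391" both map to the single "203.0.113.5" entry.
        this.lastSeenByHost.put(host, now);
    }
}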
Since some files won't have any mirrors, this prevents the cleanup manager from deleting the only copy in existence when freeing up space. The feature can be disabled by setting "originalCopyIndicatorFileEnabled": false in settings.json (or by deleting the ".original" files); the tradeoff of disabling it is that the only copy in existence could be deleted if space gets low.
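A minimal sketch of the cleanup check, assuming the marker is a sibling file named after the data file (the actual layout may differ):

// Sketch: skip deletion when a sibling ".original" marker indicates this
// node holds the only copy of the data.
import java.nio.file.Files;
import java.nio.file.Path;

public class CleanupManager {
    private final boolean originalCopyIndicatorFileEnabled;

    public CleanupManager(boolean originalCopyIndicatorFileEnabled) {
        this.originalCopyIndicatorFileEnabled = originalCopyIndicatorFileEnabled;
    }

    public boolean isSafeToDelete(Path dataFile) {
        if (!this.originalCopyIndicatorFileEnabled)
            return true; // feature disabled in settings.json

        Path marker = dataFile.resolveSibling(dataFile.getFileName() + ".original");
        return !Files.exists(marker);
    }
}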
This will also allow for better reporting of own vs third-party files in the local UI (not yet implemented).
This allows consistent messaging about each status to be shown in different parts of the system. Previously these strings were hardcoded in the loading screen HTML, so they were inaccessible elsewhere.
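For example, the statuses could live in a single shared enum that both the loading screen and other UI components read from. A sketch with made-up values, not the actual class:

// Sketch: one shared source of status strings instead of hardcoded HTML.
public enum ResourceStatus {
    DOWNLOADING("Downloading"),
    BUILDING("Building"),
    READY("Ready"),
    MISSING_DATA("Data not yet available");

    private final String title;

    ResourceStatus(String title) {
        this.title = title;
    }

    public String getTitle() {
        return this.title;
    }
}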
Logs can be reinstated by adding these lines to log4j2.properties:
logger.arbitrary.name = org.qortal.arbitrary
logger.arbitrary.level = debug
logger.arbitrarycontroller.name = org.qortal.controller.arbitrary
logger.arbitrarycontroller.level = debug