Until now we have been limited to one data resource per name/service combination. This meant that each name could only have a single website, git repo, image, video, etc., and adding another would overwrite the previous data. The new identifier property allows an optional string to be supplied with each resource, permitting an unlimited number of resources per name/service combination.
Some examples of what this will allow us to do:
- Create a video library app which holds multiple videos per name
- Same as above but for photos
- Store multiple images against each name, such as an avatar, website thumbnails, video thumbnails, etc. This will be necessary for many "system level" features.
- Attach multiple websites to each name. The default website (with blank/null identifier) would remain the entry point, but other websites could be hosted essentially as subdomains, and then linked from the default site. This also provides a means to go beyond the 500MB website size limit.
Not all of these features will exist initially, but having this identifier included in the protocol layer allows them to be added at any time.
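To picture how the lookup changes, here is a minimal sketch of keying resources on the (name, service, identifier) tuple, with a blank/null identifier selecting the default resource. The class and field names are hypothetical, not the actual implementation:

```java
import java.util.Objects;

// Hypothetical sketch - class and field names are illustrative, not the actual implementation
public final class ResourceKey {
    private final String name;
    private final String service;    // e.g. "WEBSITE", "VIDEO"
    private final String identifier; // null/blank selects the default resource

    public ResourceKey(String name, String service, String identifier) {
        this.name = name;
        this.service = service;
        // Treat blank identifiers the same as null, so the default resource has one canonical key
        this.identifier = (identifier == null || identifier.isEmpty()) ? null : identifier;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ResourceKey)) return false;
        ResourceKey other = (ResourceKey) o;
        return Objects.equals(this.name, other.name)
                && Objects.equals(this.service, other.service)
                && Objects.equals(this.identifier, other.identifier);
    }

    @Override
    public int hashCode() {
        return Objects.hash(this.name, this.service, this.identifier);
    }
}
```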
This is generated whenever a data resource cannot be built because it is missing data for at least one layer. Using a custom exception type here enables a few new features:
1. A single build process is now able to request missing data from all the layers that need it (see the sketch after this list). Previously it would only request from the first missing layer and then give up, which forced the user/application to issue the build command multiple times, rather than just once, until all layers had been requested.
2. GET /arbitrary/{service}/{name} will now block the response and retry in the background until the data arrives. This allows it to be used synchronously. Note: we'll need to add a timeout.
3. Loading a website via GET /site/{name} will avoid adding to the failed builds queue when a MissingDataException is thrown, which allows it to be quickly retried. The interface already auto refreshes, allowing the site to load as soon as it's available.
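A minimal sketch of point 1, with hypothetical type and method names rather than the actual Qortal API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch - type and method names are illustrative, not the actual Qortal API
public abstract class LayeredBuilder {
    protected abstract List<String> layers();
    protected abstract boolean isLayerAvailable(String layer);
    protected abstract void requestFromNetwork(String layer);

    public void build() throws MissingDataException {
        // Collect and request EVERY missing layer in a single pass,
        // instead of giving up at the first one
        List<String> missing = new ArrayList<>();
        for (String layer : layers()) {
            if (!isLayerAvailable(layer)) {
                missing.add(layer);
                requestFromNetwork(layer);
            }
        }
        if (!missing.isEmpty())
            throw new MissingDataException(missing);

        // ...all layers present; combine them into the final resource...
    }

    public static class MissingDataException extends Exception {
        private final List<String> missingLayers;

        public MissingDataException(List<String> missingLayers) {
            super("Missing data for layers: " + missingLayers);
            this.missingLayers = missingLayers;
        }

        public List<String> getMissingLayers() {
            return this.missingLayers;
        }
    }
}
```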
This maps ARBITRARY transactions to peer addresses, and also records metadata/stats for tracking each peer's success rate and reachability.
Once a node receives files for a transaction, it broadcasts this info to its peers so they can update their records.
TLDR: this allows us to locate peers that are hosting a copy of the file we need.
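As a rough illustration of the shape of one mapping record (field and method names are assumptions, not the actual schema):

```java
// Hypothetical shape of one row in the transaction-to-peer mapping;
// field and method names are illustrative, not the actual schema
public class ArbitraryPeerRecord {
    private final byte[] signature;   // the ARBITRARY transaction's signature
    private final String peerAddress; // host:port of a peer believed to hold the files
    private int successes;            // successful fetches from this peer
    private int failures;             // failed fetch attempts
    private long lastAttempted;       // timestamp of the most recent attempt
    private long lastRetrieved;       // timestamp of the most recent successful fetch

    public ArbitraryPeerRecord(byte[] signature, String peerAddress) {
        this.signature = signature;
        this.peerAddress = peerAddress;
    }

    public void recordSuccess(long now) {
        this.successes++;
        this.lastAttempted = now;
        this.lastRetrieved = now;
    }

    public void recordFailure(long now) {
        this.failures++;
        this.lastAttempted = now;
    }

    // Simple success-rate stat, usable when choosing which peer to try first
    public double successRate() {
        int attempts = this.successes + this.failures;
        return attempts == 0 ? 0.0 : (double) this.successes / attempts;
    }
}
```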
This ensures that only the owner of a name is able to update data associated with that name.
Note that this doesn't take into account the ability for group members to update a resource, so it will need modifying when that feature is ultimately introduced (likely after v3.0).
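Conceptually the check is just an owner comparison; a minimal sketch with hypothetical names:

```java
// Hypothetical sketch of the ownership check; type and method names are illustrative
public class NameOwnershipCheck {
    interface NameRecord {
        String getOwner();
    }

    // Only the current owner of the name may publish updates against it
    static boolean isUpdatePermitted(NameRecord registeredName, String creatorAddress) {
        return registeredName.getOwner().equals(creatorAddress);
    }
}
```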
- The "diff type" is now specified per file, allowing for different diff methods in each modified file.
- Patches will only be created when both the before and after files are less than 100kiB in size.
- Patches are validated after creation, and if invalid it will fall back to including the entire file.
This work has identified a bug where patching fails for files without trailing newline characters; this still needs to be fixed. Until then, these cases fall back to including the entire file (see the sketch below for where that fallback sits).
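A sketch of where the size threshold, validation, and fallback sit in the decision logic - the diff helpers here are placeholders, not the actual implementation:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

// Sketch of the per-file decision logic; the diff helpers are placeholders,
// not the actual implementation
public abstract class PatchDecisionSketch {
    private static final long MAX_PATCH_FILE_SIZE = 100 * 1024; // 100 KiB

    protected abstract Path createDiff(Path before, Path after) throws IOException;
    protected abstract Path applyDiff(Path before, Path patch) throws IOException;

    public Path buildPatchOrWholeFile(Path before, Path after) throws IOException {
        // Only attempt a patch when both versions are below the size threshold
        if (Files.size(before) < MAX_PATCH_FILE_SIZE && Files.size(after) < MAX_PATCH_FILE_SIZE) {
            Path patch = createDiff(before, after);
            // Validate by applying the patch and comparing against the real "after"
            // file; if it doesn't match (e.g. the trailing-newline bug), fall through
            Path reconstructed = applyDiff(before, patch);
            if (Arrays.equals(Files.readAllBytes(reconstructed), Files.readAllBytes(after)))
                return patch;
        }
        return after; // fall back to including the entire file
    }
}
```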
It will now attempt to wait until there are no other active transactions before starting, to avoid deadlocks. A timeout for this process is specified - generally 60 seconds - so that callers can give up or retry if something is holding a transaction open for too long. Right now we will give up in all places except for bootstrap creation, where it will keep retrying until successful.
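As a minimal sketch, assuming a read/write lock (the actual synchronization primitives differ): ordinary transactions hold the read side, and the exclusive task blocks on the write side until they all finish or the timeout elapses.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal sketch only - the actual synchronization primitives differ.
// Ordinary transactions hold the read lock; bootstrap-style tasks need the
// write lock, which is only granted once no read locks are held.
public class ExclusiveAccessGate {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Returns false if other transactions were still active when the timeout elapsed
    public boolean tryExclusive(long timeout, TimeUnit unit) throws InterruptedException {
        return this.lock.writeLock().tryLock(timeout, unit);
    }

    public void releaseExclusive() {
        this.lock.writeLock().unlock();
    }
}
```

Bootstrap creation would then loop on tryExclusive(60, TimeUnit.SECONDS) until it succeeds, while other callers give up after the first timeout.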
This involved adding a feature to the test suite to include the option of using a repository located on disk rather than in memory. Also moved the bootstrap compression/extraction working directories to temporary folders.
- Adds support for minting accounts as well as trade bot states
- Includes automatic import of both types on node startup, and automatic export on node shutdown
- Retains legacy trade bot states in a separate "TradeBotStatesArchive.json" file, whilst keeping the current active ones in "TradeBotStates.json". This prevents states being re-imported after they have been removed, but still keeps a copy of the data in case a key is ever needed.
- Uses indentation in the JSON files for easier readability, as sketched below.
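For illustration, the indented export might look like this - the org.json usage and state types here are assumptions:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import org.json.JSONArray;

// Illustrative only - the JSON library and state types are assumptions
public class TradeBotExportSketch {
    public void exportStates(List<Object> activeStates, Path dataDir) throws Exception {
        // An indent factor of 2 produces human-readable, indented JSON
        String json = new JSONArray(activeStates).toString(2);
        Files.write(dataDir.resolve("TradeBotStates.json"),
                json.getBytes(StandardCharsets.UTF_8));
    }
}
```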
Problem:
The "Names" table (the latest state of each name) drifts out of sync with the name-related transaction history on a subset of nodes for some unknown and seemingly difficult to find reason.
Solution:
Treat the "Names" table as a cache that can be rebuilt at any time. It now works like this:
- On node startup, rebuild the entire Names table by replaying the transaction history of all registered names. Includes registrations, updates, buys and sells.
- Add a "pre-process" stage to block/transaction processing. If the block contains a name related transaction, rebuild the Names cache for any names referenced by these transactions before validating anything.
The existing "integrity check" has been modified to just check basic attributes based on the latest transaction for a name. It will log if there are any inconsistencies found, but won't correct anything. This adds confidence that the rebuild has worked correctly.
There are also multiple unit tests to ensure that the rebuilds are coping with various different scenarios.
- Use the "{\"age\":30}" data to make the tests more similar to real-world data.
- Added tests to ensure that registering and orphaning works as expected.
Whilst not ideal, this is necessary to prevent the chain from getting stuck on future blocks due to duplicate name registrations. See Block535658.java for full details on this problem - this is simply a "catch-all" implementation of that class in order to future-proof this fix.
There is still a database inconsistency to be solved, as some nodes are failing to add a registered name to their Names table the first time around, but this will take some time. Once fixed, this commit could potentially be reverted.
Also added unit tests for both scenarios (same and different creator).
TLDR: this allows all past and future invalid blocks caused by NAME_ALREADY_REGISTERED (by the same creator) to now be valid.
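A sketch of the catch-all rule, with hypothetical types: a duplicate registration is tolerated only when it comes from the same creator.

```java
// Hypothetical sketch of the catch-all rule; types and names are illustrative
public class DuplicateNameRuleSketch {
    interface NameRecord {
        String getOwner();
    }

    enum ValidationResult { OK, NAME_ALREADY_REGISTERED }

    // existingName is null when the name is not yet registered
    static ValidationResult validateRegistration(NameRecord existingName, String creatorAddress) {
        if (existingName == null)
            return ValidationResult.OK;

        // Duplicate registration: tolerate it only when the same creator
        // re-registers; otherwise reject as before
        return existingName.getOwner().equals(creatorAddress)
                ? ValidationResult.OK
                : ValidationResult.NAME_ALREADY_REGISTERED;
    }
}
```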
This ensures that the files nodes store are unreadable outside the context of Qortal. For public data, the decryption keys themselves are on-chain, included in the "secret" field of arbitrary transactions. When we introduce the concept of private data, we can simply exclude the secret key from the transaction so that only the owner can decrypt it.
When encrypting the file, I have added the 16 byte initialization vector as a prefix to the ciphertext, and it is then automatically extracted back out when decrypting. This gives us the option to encrypt more than one file with the same key, if we ever need it. Right now we are using a unique key per file, so it's not actually needed, but it's good to have support.
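A minimal sketch of the IV-prefix scheme, assuming AES in CBC mode with a 16-byte IV (the actual cipher/mode may differ):

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Minimal sketch assuming AES/CBC - the actual cipher/mode may differ
public class IvPrefixCrypto {
    private static final int IV_LENGTH = 16;

    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_LENGTH];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(plaintext);

        // Prepend the IV so decryption can recover it without out-of-band data
        byte[] out = new byte[IV_LENGTH + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, IV_LENGTH);
        System.arraycopy(ciphertext, 0, out, IV_LENGTH, ciphertext.length);
        return out;
    }

    public static byte[] decrypt(SecretKey key, byte[] ivAndCiphertext) throws Exception {
        // Extract the 16-byte IV prefix, then decrypt the remainder
        byte[] iv = Arrays.copyOfRange(ivAndCiphertext, 0, IV_LENGTH);
        byte[] ciphertext = Arrays.copyOfRange(ivAndCiphertext, IV_LENGTH, ivAndCiphertext.length);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        return cipher.doFinal(ciphertext);
    }
}
```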
Adds "name", "method", "secret", and "compression" properties. These are the foundations needed in order to handle updates, encryption, and name registration. Compression has been added so that we have the option of switching to different algorithms whilst maintaining support for existing transactions.
This introduces the hash58 property, which stores the base58 hash of the file passed in at initialization. It leaves digest() and digest58() for when we need to compute a new hash from the file itself.
- Adds support for files up to 500 MiB per transaction (at 2 MiB chunk sizes). Previously, the max data size was 4000 bytes.
- Adds a nonce, giving us the option to remove the transaction fees altogether on the data chain.
These features become enabled in version 5 of arbitrary transactions.
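For a sense of scale, the chunk arithmetic (sizes from above, class name illustrative):

```java
// Illustrative chunk arithmetic - the sizes come from the description above
public class ChunkMathSketch {
    static final int CHUNK_SIZE = 2 * 1024 * 1024;        // 2 MiB
    static final long MAX_FILE_SIZE = 500L * 1024 * 1024; // 500 MiB

    // Ceiling division: a full 500 MiB file yields exactly 250 chunks
    static long chunkCount(long fileSize) {
        return (fileSize + CHUNK_SIZE - 1) / CHUNK_SIZE;
    }
}
```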
A couple of classes were using the bitcoinj alternative, which is twice as slow. This mostly affected the API on port 12392, as byte arrays were automatically encoded as base58 strings via the Base58TypeAdapter / JAXB package-info.
These are simpler than the level 1+2 tests; they only test that the rewards are correct for each level post-shareBinFix. I don't think we need multiple instances of the pre-shareBinFix or block orphaning tests. There are a few subtle differences between each test, such as the online status of Bob, in order to make the tests slightly more comprehensive.
1. Assign 3 minters (one founder, one level 1, one level 2)
2. Mint a block after the shareBinFix, ensuring that levels 1 and 2 are rewarded evenly from the same share bin.
3. Orphan the block and ensure the rewards are reversed.
4. Orphan two more blocks, each time checking that the balances are being reduced in accordance with the pre-shareBinFix mapping.
Post trigger, account levels will map correctly to share bins, subtracting 1 to account for the 0th element of the shareBinsByLevel array.
Pre-trigger, the legacy mapping will remain in effect.
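In code terms, the post-trigger mapping is simply (sketch only; only shareBinsByLevel comes from the description above):

```java
public class ShareBinMappingSketch {
    // Post trigger: account level N uses share bin index N - 1, because the
    // 0th element of shareBinsByLevel corresponds to level 1
    static int shareBinIndexForLevel(int accountLevel) {
        return accountLevel - 1;
    }
}
```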
Post trigger, this change will use all 128 bytes of the previous block's signature when calculating/validating the next block's "minter" signature (itself the first 64 bytes of a block signature). Prior to the trigger, the current behaviour is to use only the first 64 bytes of the previous block's signature, which doesn't encompass the transactions signature. The new block sig code should help reduce forking and improve transactional security.
Added "newBlockSigHeight" to blockchain.json but initially set to block 999999
pending decision on when to merge, auto-update, go-live, etc.
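A sketch of the height-gated behaviour, with a hypothetical method name - the 64/128 byte split comes from the description above:

```java
import java.util.Arrays;

// Sketch only - a 128-byte block signature is the minter signature (first 64
// bytes) followed by the transactions signature (last 64 bytes)
public class BlockSigSketch {
    static byte[] parentSigBytesForMinterSig(byte[] previousBlockSignature,
            int height, int newBlockSigHeight) {
        if (height >= newBlockSigHeight)
            return previousBlockSignature; // post trigger: all 128 bytes

        // Pre trigger: legacy behaviour, only the minter-signature half
        return Arrays.copyOf(previousBlockSignature, 64);
    }
}
```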
This table was previously defined using the TEMPORARY keyword, as the rows were used as a cache/index to speed up AT trimming SQL statements.
However, the table definition was lacking the "ON COMMIT PRESERVE ROWS" clause, and possibly the "GLOBAL" keyword, which caused the table contents to be emptied immediately after being filled, whenever repository.saveChanges() was called (i.e. "COMMIT").
This caused latest AT states to be trimmed in error.
AtRepositoryTests.testGetLatestATStatePostTrimming was also fixed so that it fails with the previous, broken code.
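For illustration, the corrected definition would look roughly like this; column names/types are placeholders, the key parts being the GLOBAL keyword and the ON COMMIT PRESERVE ROWS clause:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch only - column names/types are placeholders, not the actual schema
public class LatestAtStatesDdlSketch {
    void createTable(Connection connection) throws SQLException {
        try (Statement stmt = connection.createStatement()) {
            // ON COMMIT PRESERVE ROWS keeps the cached rows across
            // repository.saveChanges() (i.e. COMMIT), instead of emptying the table
            stmt.execute("CREATE GLOBAL TEMPORARY TABLE LatestATStates ("
                    + "AT_address VARCHAR(36) NOT NULL, "
                    + "height INT NOT NULL, "
                    + "PRIMARY KEY (AT_address)"
                    + ") ON COMMIT PRESERVE ROWS");
        }
    }
}
```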
Add caching of transactions fetched via ElectrumX, to reduce network load and speed up API responses (sketched below).
Fix handling of ElectrumX servers that don't want to supply verbose transaction JSON.
Hide lots of data in BitcoinyTransaction that isn't needed by current API users.
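The caching idea as a minimal sketch - the actual ElectrumX client types and fetch method differ:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch only - the actual ElectrumX client types differ
public abstract class TransactionCacheSketch {
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    // Each transaction is fetched over the network at most once,
    // then served from the cache on subsequent API calls
    public byte[] getTransaction(String txHash) {
        return this.cache.computeIfAbsent(txHash, this::fetchViaElectrumX);
    }

    protected abstract byte[] fetchViaElectrumX(String txHash);
}
```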