- Use the "{\"age\":30}" data to make the tests more similar to some real world data.
- Added tests to ensure that registering and orphaning work as expected.
Whilst not ideal, this is necessary to prevent the chain from getting stuck on future blocks due to duplicate name registrations. See Block535658.java for full details on this problem; this is simply a "catch-all" implementation of that class in order to future-proof this fix.
There is still a database inconsistency to be solved, as some nodes are failing to add a registered name to their Names table the first time around, but this will take some time. Once fixed, this commit could potentially be reverted.
Also added unit tests for both scenarios (same and different creator).
TL;DR: this allows all past and future blocks that were invalid due to NAME_ALREADY_REGISTERED (by the same creator) to now be valid.
This ensures that the files nodes store are unreadable outside of the context of Qortal. For public data, the decryption keys themselves are on-chain, included in the "secret" field of arbitrary transactions. When we introduce the concept of private data, we can simply exclude the secret key from the transaction so that only the owner can decrypt it.
When encrypting the file, I have added the 16-byte initialization vector as a prefix to the ciphertext, and it is then automatically extracted back out when decrypting. This gives us the option to encrypt more than one file with the same key, if we ever need it. Right now, we are using a unique key per file, so it's not actually needed, but it's good to have support.
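For illustration, here is a minimal sketch of the IV-prefix scheme using AES/CBC with PKCS5 padding; the cipher mode, class, and method names are assumptions for the example rather than the actual implementation:

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class FileEncryptionSketch {

    // Encrypt: generate a random 16-byte IV and prepend it to the ciphertext
    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(plaintext);

        byte[] ivAndCiphertext = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, ivAndCiphertext, 0, iv.length);
        System.arraycopy(ciphertext, 0, ivAndCiphertext, iv.length, ciphertext.length);
        return ivAndCiphertext;
    }

    // Decrypt: strip the 16-byte IV prefix back out, then decrypt the remainder
    public static byte[] decrypt(SecretKey key, byte[] ivAndCiphertext) throws Exception {
        byte[] iv = Arrays.copyOfRange(ivAndCiphertext, 0, 16);
        byte[] ciphertext = Arrays.copyOfRange(ivAndCiphertext, 16, ivAndCiphertext.length);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        return cipher.doFinal(ciphertext);
    }
}
```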
Adds "name", "method", "secret", and "compression" properties. These are the foundations needed in order to handle updates, encryption, and name registration. Compression has been added so that we have the option of switching to different algorithms whilst maintaining support for existing transactions.
This introduces the hash58 property, which stores the base58 hash of the file passed in at initialization. It leaves digest() and digest58() for when we need to compute a new hash from the file itself.
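A minimal sketch of the distinction, assuming SHA-256 as the digest and placeholder names rather than the actual class:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class ArbitraryDataFileSketch {

    private final Path filePath;
    private final String hash58; // base58 hash supplied when the object was created

    public ArbitraryDataFileSketch(Path filePath, String hash58) {
        this.filePath = filePath;
        this.hash58 = hash58;
    }

    // Cheap accessor: returns the hash passed in at initialization, without reading the file
    public String getHash58() {
        return this.hash58;
    }

    // Recomputes a fresh digest from the file contents on disk;
    // digest58() would simply base58-encode this value using the project's Base58 utility
    public byte[] digest() throws Exception {
        byte[] fileBytes = Files.readAllBytes(this.filePath);
        return MessageDigest.getInstance("SHA-256").digest(fileBytes);
    }
}
```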
- Adds support for files up to 500MiB per transaction (at 2MiB chunk sizes); see the chunking sketch after this list. Previously, the max data size was 4000 bytes.
- Adds a nonce, giving us the option to remove the transaction fees altogether on the data chain.
These features are enabled in version 5 of arbitrary transactions.
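As a rough illustration of the chunking arithmetic, a 500MiB file at 2MiB chunk sizes yields at most 250 chunks. The names and splitting logic below are assumptions for the example, not the actual implementation:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChunkingSketch {

    private static final int CHUNK_SIZE = 2 * 1024 * 1024;      // 2 MiB per chunk
    private static final int MAX_FILE_SIZE = 500 * 1024 * 1024; // 500 MiB per transaction

    // Splits the raw file bytes into 2 MiB chunks; 500 MiB / 2 MiB = 250 chunks at most
    public static List<byte[]> split(byte[] fileBytes) {
        if (fileBytes.length > MAX_FILE_SIZE)
            throw new IllegalArgumentException("File exceeds 500MiB limit");

        List<byte[]> chunks = new ArrayList<>();
        for (int offset = 0; offset < fileBytes.length; offset += CHUNK_SIZE)
            chunks.add(Arrays.copyOfRange(fileBytes, offset,
                    Math.min(offset + CHUNK_SIZE, fileBytes.length)));
        return chunks;
    }
}
```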
A couple of classes were using the bitcoinj alternative, which is twice as slow. This mostly affected the API on port 12392, as byte arrays were automatically encoded as base58 strings via the Base58TypeAdapter / JAXB package-info.
These are simpler than the level 1+2 tests; they only test that the rewards are correct for each level post-shareBinFix. I don't think we need multiple instances of the pre-shareBinFix or block orphaning tests. There are a few subtle differences between each test, such as the online status of Bob, in order to make the tests slightly more comprehensive.
1. Assign 3 minters (one founder, one level 1, one level 2)
2. Mint a block after the shareBinFix, ensuring that level 1 and 2 are being rewarded evenly from the same share bin.
3. Orphan the block and ensure the rewards are reversed.
4. Orphan two more blocks, each time checking that the balances are being reduced in accordance with the pre-shareBinFix mapping.
Post-trigger, account levels map correctly to share bins, subtracting 1 to account for the 0th element of the shareBinsByLevel array.
Pre-trigger, the legacy mapping remains in effect.
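A minimal sketch of the post-trigger lookup described above; the class, field, and parameter names are illustrative rather than the actual Qortal code:

```java
public class ShareBinMappingSketch {

    // Index 0 holds the share bin for level 1, index 1 for level 2, and so on
    private final int[] shareBinsByLevel;

    public ShareBinMappingSketch(int[] shareBinsByLevel) {
        this.shareBinsByLevel = shareBinsByLevel;
    }

    // Post-trigger mapping: subtract 1 so that level 1 reads element 0 of the array
    public int shareBinForLevel(int accountLevel) {
        return this.shareBinsByLevel[accountLevel - 1];
    }
}
```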
Post-trigger, this change will use all 128 bytes of the previous block's signature when calculating/validating the next block's "minter" signature (itself the first 64 bytes of a block signature).
Prior to the trigger, the current behaviour is to use only the first 64 bytes of the previous block's signature, which doesn't encompass the transactions signature.
The new block sig code should help reduce forking and improve transactional security.
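A minimal sketch of the old versus new signature reference, assuming an illustrative trigger height and the 64/128-byte sizes mentioned above; the names here are not the actual Qortal methods:

```java
import java.util.Arrays;

public class MinterSignatureSketch {

    // Illustrative values; the real feature-trigger height comes from blockchain.json
    private static final int NEW_BLOCK_SIG_HEIGHT = 999999;
    private static final int MINTER_SIG_LENGTH = 64;

    // Returns the portion of the previous block's 128-byte signature that the
    // next block's minter signature is calculated/validated over.
    public static byte[] previousSignatureReference(byte[] previousBlockSignature, int blockHeight) {
        if (blockHeight >= NEW_BLOCK_SIG_HEIGHT)
            // Post-trigger: use the full 128 bytes (minter + transactions signatures)
            return previousBlockSignature;

        // Pre-trigger: only the first 64 bytes (minter signature) are used,
        // which does not cover the transactions signature
        return Arrays.copyOf(previousBlockSignature, MINTER_SIG_LENGTH);
    }
}
```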
Added "newBlockSigHeight" to blockchain.json but initially set to block 999999
pending decision on when to merge, auto-update, go-live, etc.
This table was previously defined using the TEMPORARY keyword, as the rows were used as a cache/index to speed up AT trimming SQL statements.
However, the table definition was lacking the "ON COMMIT PRESERVE ROWS" clause, and possibly the "GLOBAL" keyword, which caused the table contents to be emptied immediately after being filled, when repository.saveChanges() was called (i.e. "COMMIT").
This caused latest AT states to be trimmed in error.
AtRepositoryTests.testGetLatestATStatePostTrimming also fixed so that it fails with the previous, broken code.
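For reference, a minimal sketch of the corrected table definition executed via JDBC; the table and column names are placeholders rather than the exact HSQLDB repository definitions:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TemporaryTableFixSketch {

    // Creates the helper table so that its rows survive COMMIT.
    public static void createLatestAtStatesTable(Connection connection) throws SQLException {
        try (Statement statement = connection.createStatement()) {
            // Broken form: rows vanish on COMMIT because ON COMMIT DELETE ROWS is the default
            // statement.execute("CREATE TEMPORARY TABLE LatestATStates (AT_address VARCHAR(90), height INT)");

            // Fixed form: GLOBAL TEMPORARY plus ON COMMIT PRESERVE ROWS keeps the cached
            // rows across repository.saveChanges() / COMMIT
            statement.execute("CREATE GLOBAL TEMPORARY TABLE LatestATStates "
                    + "(AT_address VARCHAR(90), height INT) ON COMMIT PRESERVE ROWS");
        }
    }
}
```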
Add caching of transactions fetched via ElectrumX to reduce network load and speed up API responses (see the cache sketch below).
Fix handling of ElectrumX servers that won't supply verbose transaction JSON.
Hide lots of data in BitcoinyTransaction that isn't needed by current API users.
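As an illustration of the transaction caching, here is a minimal sketch of a bounded, access-ordered cache keyed by transaction hash; the class name, size limit, and eviction policy are assumptions rather than the actual implementation:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class TransactionCacheSketch<T> {

    private static final int MAX_ENTRIES = 1000; // illustrative size limit

    // Access-ordered map that evicts the least-recently-used entry once the limit is reached
    private final Map<String, T> cache = Collections.synchronizedMap(
            new LinkedHashMap<String, T>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, T> eldest) {
                    return size() > MAX_ENTRIES;
                }
            });

    // Returns a previously fetched transaction, or null if we need to ask ElectrumX again
    public T get(String txHash) {
        return this.cache.get(txHash);
    }

    public void put(String txHash, T transaction) {
        this.cache.put(txHash, transaction);
    }
}
```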