It's now capable of syncing chunks as well as complete files. This isn't production ready, as it currently requests and receives the same file from multiple peers at once, which slows down the sync and wastes a lot of bandwidth. Ideally we would find an appropriate peer first and then sync the file from that peer only.
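A minimal sketch of that ideal flow, assuming hypothetical Peer.hasData() and Peer.requestFile() helpers (neither exists in the current code):

```java
import java.util.List;
import java.util.Optional;

public class FileSyncSketch {

    /** Hypothetical peer abstraction; these method names are assumptions, not the real API. */
    interface Peer {
        boolean hasData(String hash58);     // does this peer claim to hold the file?
        byte[] requestFile(String hash58);  // fetch the file (or its chunks) from this peer
    }

    /**
     * Instead of requesting the same file from every connected peer at once,
     * pick the first peer that reports having it and sync from that peer only.
     */
    static Optional<byte[]> syncFromSinglePeer(List<Peer> peers, String hash58) {
        return peers.stream()
                .filter(peer -> peer.hasData(hash58))
                .findFirst()
                .map(peer -> peer.requestFile(hash58));
    }
}
```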
This introduces the hash58 property, which stores the base58 hash of the file passed in at initialization. It leaves digest() and digest58() for when we need to compute a new hash from the file itself.
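A rough sketch of that split, with hash58 kept as a cheap stored property and digest()/digest58() recomputing from disk (the digest algorithm and the Base58 helper shown here are assumptions):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class DataFileSketch {

    private final Path filePath;
    private final String hash58; // base58 hash supplied at initialization, never recomputed

    public DataFileSketch(Path filePath, String hash58) {
        this.filePath = filePath;
        this.hash58 = hash58;
    }

    /** Cheap accessor: returns the hash we were given, without touching the file. */
    public String getHash58() {
        return this.hash58;
    }

    /** Recomputes a digest from the file contents on disk (SHA-256 used here for illustration). */
    public byte[] digest() throws Exception {
        byte[] fileBytes = Files.readAllBytes(this.filePath);
        return MessageDigest.getInstance("SHA-256").digest(fileBytes);
    }

    /** Recomputes the digest and base58-encodes it (a Base58.encode helper is assumed to exist). */
    public String digest58() throws Exception {
        return Base58.encode(digest());
    }
}
```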
Until now it wasn't possible to set up a chain with zero transaction fees, due to a hardcoded zero check in Payment.isValid() and a divide-by-zero error in Transaction.hasMinimumFeePerByte().
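A sketch of the kind of guards involved (parameter names and the exact calculation are assumptions, for illustration only):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class FeeCheckSketch {

    /** Payment-side: on a zero-fee chain the fee only needs to be non-negative, not strictly positive. */
    static boolean isFeeValid(BigDecimal fee) {
        return fee.signum() >= 0;
    }

    /** Transaction-side: return early when the configured unit fee is zero, before dividing by it. */
    static boolean hasMinimumFeePerByte(BigDecimal transactionFee, int transactionSizeBytes,
                                        BigDecimal unitFee, long maxBytesPerUnitFee) {
        // Zero-fee chain: nothing to enforce, and the division below would throw ArithmeticException
        if (unitFee.signum() == 0)
            return true;

        // Maximum transaction size this fee entitles it to
        BigDecimal maxSize = transactionFee
                .divide(unitFee, 8, RoundingMode.DOWN)
                .multiply(BigDecimal.valueOf(maxBytesPerUnitFee));

        return BigDecimal.valueOf(transactionSizeBytes).compareTo(maxSize) <= 0;
    }
}
```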
- Adds support for files up to 500MiB per transaction, at 2MiB chunk sizes (see the sketch below this list). Previously, the max data size was 4000 bytes.
- Adds a nonce, giving us the option to remove the transaction fees altogether on the data chain.
These features are enabled from version 5 of arbitrary transactions onwards.
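For reference, 500MiB at 2MiB per chunk works out to 250 chunks per transaction. A minimal sketch of the chunking step (the constants match the numbers above; everything else is illustrative, not the real implementation):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class ChunkingSketch {

    private static final int CHUNK_SIZE = 2 * 1024 * 1024;          // 2MiB per chunk
    private static final long MAX_FILE_SIZE = 500L * 1024 * 1024;   // 500MiB per transaction

    /** Splits a file into CHUNK_SIZE pieces; only the final chunk may be smaller. */
    static List<byte[]> splitIntoChunks(Path file) throws IOException {
        if (Files.size(file) > MAX_FILE_SIZE)
            throw new IllegalArgumentException("File exceeds the 500MiB per-transaction limit");

        List<byte[]> chunks = new ArrayList<>();
        try (InputStream in = Files.newInputStream(file)) {
            byte[] chunk;
            // readNBytes() keeps reading until it has CHUNK_SIZE bytes or hits end of file
            while ((chunk = in.readNBytes(CHUNK_SIZE)).length > 0)
                chunks.add(chunk);
        }
        return chunks;
    }
}
```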
This is probably our number one reliability issue at the moment, and has been a problem for a very long time.
The existing CHECKPOINT_LOCK would prevent new connections from being created when we are checkpointing or about to checkpoint. However, in many cases we obtain the database connection early on and then don't perform any queries until later. An example is synchronization, where the connection is obtained at the start of the process and then retained throughout the sync round. My suspicion is that we were encountering this sequence of events:
1. Open connection to database
2. Call maybeCheckpoint() and confirm there are no active transactions
3. An existing connection starts a new transaction
4. Checkpointing is performed, but deadlocks due to the in-progress transaction
This potential fix wraps preparedStatement.execute() in the CHECKPOINT_LOCK, to block any new transactions from being started while we are locked for checkpointing. It is fairly high risk, so we need to build some confidence in it before releasing.
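A minimal sketch of the locking pattern being described, assuming CHECKPOINT_LOCK is a read/write lock (the surrounding class and method shapes are illustrative):

```java
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CheckpointLockSketch {

    // Statement execution takes the read lock; the checkpoint takes the write lock.
    private static final ReentrantReadWriteLock CHECKPOINT_LOCK = new ReentrantReadWriteLock(true);

    /** New statements (and therefore new transactions) block while a checkpoint holds the write lock. */
    static boolean execute(PreparedStatement preparedStatement) throws SQLException {
        CHECKPOINT_LOCK.readLock().lock();
        try {
            return preparedStatement.execute();
        } finally {
            CHECKPOINT_LOCK.readLock().unlock();
        }
    }

    /** Checkpointing waits for in-flight statements to finish, then excludes new ones until it completes. */
    static void maybeCheckpoint(Runnable doCheckpoint) {
        CHECKPOINT_LOCK.writeLock().lock();
        try {
            doCheckpoint.run();
        } finally {
            CHECKPOINT_LOCK.writeLock().unlock();
        }
    }
}
```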
This is probably the most efficient way to process the data on the fly, but it's still not very scalable. A better approach would be to pre-process the HTML when building the file structure, and then serve the files completely statically (i.e. using a standard webserver rather than via application memory). But it makes sense to keep it this way for development and perhaps early beta testing.
Rename to zh_SC to better distinguish between zh_SC (Simplified Chinese) and zh_TC (Traditional Chinese).
Rephrase some of the wording to make it easier to understand.
This can be used to preview a site before signing a transaction and announcing it to the network. The response will need reworking to return JSON (along with most of the other new APIs).