Commit Graph

1733 Commits

CalDescent
ba06225b01 Merge branch 'master' into block-archive 2021-09-12 10:17:11 +01:00
CalDescent
ce60ab8e00 Updated naming unit tests
- Use the "{\"age\":30}" data to make the tests more similar to some real world data.
- Added tests to ensure that registering and orphaning works as expected.
2021-09-12 10:16:07 +01:00
CalDescent
14f6fd19ef Added unit tests for trimming, pruning, and archiving. 2021-09-12 10:13:52 +01:00
CalDescent
1d8351f921 Added importFromArchive() feature
This allows archived blocks to be imported back into HSQLDB in order to make them SQL-compatible again.
2021-09-12 10:10:25 +01:00
CalDescent
6a55b052f5 Fixed some bugs found in unit testing. 2021-09-12 09:57:12 +01:00
CalDescent
2a36b83dea Removed BLOCK_LIMIT_REACHED result from the block archive writer.
This wasn't needed, and is now instead caught by the NOT_ENOUGH_BLOCKS result.
2021-09-12 09:55:49 +01:00
CalDescent
14acc4feb9 Removed maxDuplicatedBlocksWhenArchiving setting as it's no longer needed. 2021-09-12 09:52:28 +01:00
CalDescent
0657ca2969 atStatesMaxLifetime increased to 5 days
For now, we need some headroom to allow for orphaning in the event of a problem. Orphaning currently fails if there is no ATStatesData available (which is the case for trimmed blocks). This could ultimately be solved by retaining older unique states.
2021-09-09 17:46:19 +01:00
CalDescent
e90c3a78d1 Updated default "data" field text in the API documentation, to match the value the UI uses. 2021-09-09 15:12:28 +01:00
CalDescent
63c9bc5c1c Revert "Workaround for block 535658 problem"
This reverts commit 278201e87c.
2021-09-09 12:55:21 +01:00
CalDescent
a6bbc81962 Revert "Merge pull request #58 from QuickMythril/536140-fix"
This reverts commit 6d1f7b36a7, reversing
changes made to 6b74ef77e6.

# Conflicts:
#	src/main/java/org/qortal/block/Block536140.java
2021-09-09 12:55:08 +01:00
CalDescent
b800fb5846 Treat a REGISTER_NAME transaction as an UPDATE_NAME if the creator matches.
Whilst not ideal, this is necessary to prevent the chain from getting stuck on future blocks due to duplicate name registrations. See Block535658.java for full details on this problem - this is simply a "catch-all" implementation of that class in order to future-proof this fix.

There is still a database inconsistency to be solved, as some nodes are failing to add a registered name to their Names table the first time around, but this will take some time. Once fixed, this commit could potentially be reverted.

Also added unit tests for both scenarios (same and different creator).

TL;DR: this allows all past and future blocks previously rejected with NAME_ALREADY_REGISTERED (where the registration came from the same creator) to be treated as valid.
2021-09-09 12:54:01 +01:00
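
A minimal, self-contained sketch of the rule this commit describes; the class, the map-based name store, and the result strings are purely illustrative stand-ins for the actual Qortal Names table and transaction validation code:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: a duplicate registration from the same creator is
// treated as an update rather than rejected. Not the actual Qortal code.
public class NameRegistrationSketch {

    // name -> owner address (stand-in for the Names table)
    private final Map<String, String> names = new HashMap<>();

    public String register(String name, String creator) {
        String existingOwner = names.get(name);

        if (existingOwner == null) {
            names.put(name, creator);        // normal REGISTER_NAME
            return "REGISTERED";
        }

        if (existingOwner.equals(creator)) {
            names.put(name, creator);        // same creator: process as an UPDATE_NAME
            return "TREATED_AS_UPDATE";
        }

        return "NAME_ALREADY_REGISTERED";    // different creator: still invalid
    }
}
```
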
CalDescent
172a629da3 Added comments 2021-09-05 23:32:11 +01:00
CalDescent
6d1f7b36a7
Merge pull request #58 from QuickMythril/536140-fix
Block 536140 fix (same situation as block 535658)
2021-09-05 23:16:08 +01:00
673ee4aeed
Update Block.java 2021-09-05 18:07:11 -04:00
25b787f6f2
Add files via upload 2021-09-05 18:06:32 -04:00
CalDescent
6b74ef77e6 Increased log level of invalid transaction message. 2021-09-05 21:25:38 +01:00
CalDescent
278201e87c Workaround for block 535658 problem 2021-09-05 21:24:02 +01:00
CalDescent
703cdfe174 Added block archive mode
This takes all trimmed blocks (which should now be all but the last 1450 or so) and moves them into flat files. Each file contains the serialized bytes of as many blocks as can fit within the 100MiB file size target.

As a result, the HSQLDB size drops to less than 1GB, making it much faster and easier to maintain. It also significantly reduces the total size of each full node, because the data is stored in a highly optimized way.

HSQLDB then works similarly to the way it does in pruning mode - it holds all transactions, the latest state of every AT, as well as the full AT states data and hashes for the past 1450 blocks.

Each archive file contains headers and indexes in order to quickly locate blocks. When a peer requests a block that is within the archive, the serialized bytes are sent directly without the need to go via a BlockData object. Now that no slow queries or data serialization processes are needed, this should greatly speed up block serving.

The /block API endpoints have been modified in such a way that they will also check and retrieve blocks from the archive when needed.

A lightweight "BlockArchive" table is needed in HSQLDB to map block heights to signatures minters and timestamps. It made more sense to keep SQL support for these basic attributes of each block. These are located in a separate table from the full blocks, in order to create a clear distinction between HSQLDB blocks and archived blocks, and also to speed up query times in the Blocks table, which is the one we are using 99% of the time.

There is currently a restriction on the /admin/orphan API endpoint to prevent orphaning beyond the threshold of the block archive.
2021-09-04 19:40:51 +01:00
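
A rough sketch of the file-packing idea described above, assuming hypothetical class, method, and file names; real archive files also carry the headers and indexes mentioned in the commit:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Iterator;

// Illustrative sketch only: append serialized blocks to a flat file until it
// approaches the 100MiB target, then start a new file.
public class ArchiveFileSketch {

    private static final long FILE_SIZE_TARGET = 100L * 1024 * 1024; // 100MiB

    public static void writeArchive(Iterator<byte[]> serializedBlocks, Path outputDir) throws IOException {
        Files.createDirectories(outputDir);

        int fileIndex = 0;
        long writtenToCurrentFile = 0;
        OutputStream out = Files.newOutputStream(outputDir.resolve("archive-" + fileIndex + ".dat"));

        while (serializedBlocks.hasNext()) {
            byte[] blockBytes = serializedBlocks.next();

            // Roll over to a new file once the current one would exceed the size target
            if (writtenToCurrentFile > 0 && writtenToCurrentFile + blockBytes.length > FILE_SIZE_TARGET) {
                out.close();
                fileIndex++;
                writtenToCurrentFile = 0;
                out = Files.newOutputStream(outputDir.resolve("archive-" + fileIndex + ".dat"));
            }

            out.write(blockBytes);
            writtenToCurrentFile += blockBytes.length;
        }

        out.close();
    }
}
```
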
CalDescent
02988989ad Reduced online account signatures min and max lifetimes
onlineAccountSignaturesMinLifetime reduced from 720 hours to 12 hours
onlineAccountSignaturesMaxLifetime reduced from 888 hours to 24 hours

These were using up too much space in the database and so it makes sense to trim them more aggressively (assuming testing goes well). We will now stop validating online account signatures after 12 hours, which should be more than enough confirmations, and we will discard them after 24 hours.

Note: this will create some complexity once some of the network is running this code. It could cause out-of-sync nodes on old versions to start treating blocks from updated peers as invalid. It's likely not worth the complexity of a hard fork though, given that almost all nodes will be synced to the chain tip and will therefore be unaffected. And even with a hard fork, we'd still face this problem on out-of-date nodes.
2021-09-03 10:11:02 +01:00
CalDescent
25c17d3704 atStatesMaxLifetime reduced from 14 days to 24 hours 2021-09-03 10:04:04 +01:00
CalDescent
9b4d832d17 Default minPeerVersion set to 0.1.0. TODO: revert this if ever merged into the main repo. 2021-09-01 09:11:50 +01:00
CalDescent
52ab19dec6 Added method and name to the /site/upload endpoint params. 2021-09-01 09:11:03 +01:00
CalDescent
9973fe4326 Synchronized LatestATStates, to make rebuildLatestAtStates() thread safe. 2021-08-28 11:00:49 +01:00
CalDescent
2479f2d65d Moved trimming and pruning classes into a single package (org.qortal.controller.repository) 2021-08-27 09:45:56 +01:00
CalDescent
9056cb7026 Increased atStatesPruneBatchSize from 10 to 25. 2021-08-27 09:45:56 +01:00
CalDescent
cd9d9b31ef Prune ATStatesData as well as the ATStates when switching to pruning mode. 2021-08-27 09:45:56 +01:00
CalDescent
ff841c28e3 Updated tests to use the renamed method. 2021-08-27 09:45:56 +01:00
CalDescent
ca1379d9f8 Unified the code to build the LatestATStates table, as it's now used by more than one class.
Note - rebuildLatestAtStates() must never be called by two different classes at the same time, or AT states could be incorrectly deleted. It is okay at the moment as we don't run the AT states trimmer and pruner in the same app session. However, we should probably synchronize this method so that we don't accidentally call it from two places in the future.
2021-08-27 09:45:56 +01:00
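
The synchronization suggested here (and added in the later "Synchronized LatestATStates" commit) amounts to something like the following pattern, with hypothetical class and lock names:

```java
// Illustrative only: guard the rebuild with a shared lock so the trimmer and
// pruner cannot rebuild (and therefore delete) AT states concurrently.
public class LatestATStatesSketch {

    private static final Object REBUILD_LOCK = new Object();

    public void rebuildLatestAtStates() {
        synchronized (REBUILD_LOCK) {
            // rebuild the LatestATStates helper table here; only one caller at a time
        }
    }
}
```
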
CalDescent
5127f94423 Added bulk pruning phase on node startup the first time that pruning mode is enabled.
When switching from a full node to a pruning node, we need to delete most of the database contents. If we do this entirely as a background process, it is very slow and can interfere with syncing. However, if we take the approach of transferring only the necessary rows to a new table and then deleting the original table, this makes the process much faster. It was taking several days to delete the AT states in the background, but only a couple of minutes to copy them to a new table.

The trade-off is that we have to go through a form of "reshape" when starting the app for the first time after enabling pruning mode. But given that this is an opt-in mode, I don't think it will be a problem.

Once the pruning is complete, it automatically performs a CHECKPOINT DEFRAG in order to shrink the database file size down to a fraction of what it was before.

From this point, the original background process will run, but can be dialled right down so as not to interfere with syncing.
2021-08-27 09:45:56 +01:00
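
A rough sketch of the copy-then-drop approach described above, using JDBC against HSQLDB; the table and column names are placeholders rather than the real schema, though CHECKPOINT DEFRAG is the actual HSQLDB statement for reclaiming file space:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative sketch only: copy the rows we keep into a new table, drop the
// huge original, swap the new table in, then defragment the database file.
public class BulkPruneSketch {

    public static void bulkPrune(Connection connection, int pruneHeight) throws SQLException {
        try (Statement stmt = connection.createStatement()) {
            // Copy only the rows we want to keep into a fresh table...
            stmt.execute("CREATE TABLE ATStatesNew AS "
                    + "(SELECT * FROM ATStates WHERE height >= " + pruneHeight + ") WITH DATA");

            // ...drop the original, then swap the new table into place.
            stmt.execute("DROP TABLE ATStates");
            stmt.execute("ALTER TABLE ATStatesNew RENAME TO ATStates");

            // Shrink the database file back down to size.
            stmt.execute("CHECKPOINT DEFRAG");
        }
    }
}
```
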
CalDescent
f5910ab950 Break out of the AT pruning inner loops if we're stopping the app. 2021-08-27 09:45:56 +01:00
CalDescent
22efaccd4a Fixed NPE introduced in earlier commit. 2021-08-27 09:45:56 +01:00
CalDescent
c8466a2e7a Updated AT states pruner as it previously relied on blocks being present in the db to make decisions. As a side effect, this now prunes ATs up to the pruneBlockLimit too, rather than keeping the last 35 days or so. Will review this later but I don't think we will need the missing ones. 2021-08-27 09:45:56 +01:00
CalDescent
209a9fa8c3 Rework of Blockchain.validate() to account for pruning mode. 2021-08-27 09:45:56 +01:00
CalDescent
bc1af12655 Prune all blocks up until the blockPruneLimit
By default, this leaves only the last 1450 blocks in the database. Only applies when pruning mode is enabled.
2021-08-27 09:45:55 +01:00
CalDescent
e7e4cb7579 Started work on pruning mode (top-only-sync)
Initially just deleting old and unused AT states, to get this table under control. I have had to delete them individually as the table can't handle complex queries due to its size.

Nodes in pruning mode will be unable to serve older blocks to peers.
2021-08-27 09:45:55 +01:00
CalDescent
1b39db664c Added missing ATStatesHeightIndex to the reshape code.
This was accidentally missed out of the original code. Some pre-updated nodes on the network will be missing this index, but we can use the upcoming "auto-bootstrap" feature to get those back.
2021-08-27 08:54:46 +01:00
CalDescent
7397b9fa87 Added more detail to exception message. 2021-08-21 08:35:12 +01:00
CalDescent
5bed5fb8fd Removed unnecessary code in ArbitraryResource.uploadFileAtPath()
This is now handled by ArbitraryDataWriter instead
2021-08-21 08:34:46 +01:00
CalDescent
fd795b4361 Don't attempt to cleanup the filesystem if a build is in progress.
This isn't essential but it helps to reduce unnecessary load and processing which would be better spent on building.
2021-08-20 20:19:51 +01:00
CalDescent
b2c0915a71 Removed accidentally duplicated code
This was causing two instances of the build manager to run.
2021-08-20 20:17:02 +01:00
CalDescent
095083bcfb Use lowercase directory names for consistency 2021-08-20 19:54:09 +01:00
CalDescent
4ba72f7eeb Regularly clean up old and unused files/folders in the temp directory
Also added code to purge built resource caches, but it is currently disabled. This will become more useful once we implement local storage limits.
2021-08-20 19:27:42 +01:00
CalDescent
6cb39795a9 Removed requirement to have connected peers in order to cleanup directories. 2021-08-20 17:00:53 +01:00
CalDescent
00ba16f536 Fixed incorrect ArbitraryTransactionTransformer layout. 2021-08-20 16:59:05 +01:00
CalDescent
988a839623 Improved response value of ArbitraryDataFile.deleteAllChunks() as it was inaccurate. 2021-08-20 16:58:43 +01:00
CalDescent
8fa61e628c Delete files related to transactions that have a more recent PUT
A PUT creates a new base layer, meaning anything before that point is no longer needed. These files are now deleted automatically by the cleanup manager. This involved relocating a lot of the cleanup manager methods into a shared utility, so that they could be used by the arbitrary data manager. Without this, the files would be fetched from the network again as soon as they were deleted.
2021-08-20 16:56:49 +01:00
CalDescent
8f3620e07b Fixed bug introduced in commit 51b1256 2021-08-20 13:44:30 +01:00
CalDescent
190f70f332 Removed unused, buggy code in HSQLDBArbitraryRepository.save()
It's safer to throw an exception and point the user towards ArbitraryDataWriter, rather than maintaining unused code.
2021-08-20 13:01:55 +01:00
CalDescent
6730683919 Added arbitrary data cleanup manager
This deletes redundant copies of data, and also converts complete files to chunks where needed. The idea is that nodes only hold chunks, since they are currently much more likely to serve a chunk to another peer than a complete file.

It doesn't yet clean up files that are unassociated with transactions, nor does it delete anything from the _temp folder.
2021-08-20 12:55:42 +01:00