Compare commits

...

162 Commits

Author SHA1 Message Date
CalDescent
651372cd64 Bump version to 2.0.0-beta.7 2021-10-12 18:56:58 +01:00
CalDescent
581fe17b58 Added message to check the internet connection if the download cannot start. 2021-10-12 08:08:48 +01:00
CalDescent
af8608f302 Show full stack trace when bootstrapping fails for any reason. 2021-10-12 08:08:05 +01:00
CalDescent
290a19b6c6 Log the URL when downloading a bootstrap, to help with problem solving. 2021-10-12 08:01:47 +01:00
CalDescent
73eaa93be8 Added missing space in log entry. 2021-10-11 23:00:59 +01:00
CalDescent
7ab17383a6 Fix for NPE when serialized block bytes are unavailable. 2021-10-10 13:38:10 +01:00
CalDescent
b103c5b13f Bump version to 2.0.0-beta.6 2021-10-09 17:46:20 +01:00
CalDescent
b7d8a83017 Log "Downloading bootstrap..." as well as showing it in the splash screen. 2021-10-09 17:46:08 +01:00
CalDescent
e7bf4f455d Added missing repository.saveChanges() when reimporting data after creating a bootstrap. 2021-10-09 16:57:53 +01:00
CalDescent
a7f212c4f2 Create a .sha256 file to accompany each bootstrap
This can ultimately be validated after download, and can also be used to help coordinate updates on the various bootstrap hosts.
2021-10-09 16:57:19 +01:00
CalDescent
eb991c6026 Fixed issue causing bootstrap validation to be ignored before creation. 2021-10-09 16:29:40 +01:00
CalDescent
a78af8f248 Added SHA-256 file digest utility methods.
These read the file in small chunks, to reduce memory usage.
2021-10-09 16:22:21 +01:00
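
A minimal sketch of the chunked digest approach from this commit, paired with the .sha256 companion file from the commit above (class and method names here are illustrative, not Qortal's actual utilities):

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;

    public class Sha256FileDigestSketch {
        // Stream the file through the digest in 8 KiB chunks, so memory use
        // stays constant regardless of the size of the bootstrap archive.
        public static byte[] digest(Path file) throws Exception {
            MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
            try (InputStream in = Files.newInputStream(file)) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1)
                    sha256.update(buffer, 0, read);
            }
            return sha256.digest();
        }

        // Write a hex digest alongside the file, e.g. bootstrap.7z.sha256,
        // which downloaders can use to validate the bootstrap.
        public static void writeCompanionFile(Path file) throws Exception {
            StringBuilder hex = new StringBuilder();
            for (byte b : digest(file))
                hex.append(String.format("%02x", b));
            Files.writeString(file.resolveSibling(file.getFileName() + ".sha256"), hex.toString());
        }
    }
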
CalDescent
f34bdf0f58 Fixed issue causing minting accounts to be lost in some cases when auto bootstrapping. 2021-10-09 14:31:13 +01:00
CalDescent
ba272253a5 Bump version to 2.0.0-beta.5 2021-10-09 13:03:58 +01:00
CalDescent
9f488b7b77 Sleep for 5s before cleaning up temp path, in case this improves reliability on Windows. 2021-10-09 13:03:32 +01:00
CalDescent
3fb7df18a0 Delete temp directories at the beginning of the bootstrap process too, as Windows doesn't like deleting them at the end of the process. 2021-10-09 13:02:47 +01:00
CalDescent
00401080e0 Simplified cleanup process. Individual deletions aren't needed as they are all inside the main temp directory. 2021-10-09 13:02:00 +01:00
CalDescent
b265dc3bfb Don't log the complete stack trace for exceptions generated by bootstrap.checkRepositoryState(). The error message is enough in these cases. 2021-10-09 11:47:49 +01:00
CalDescent
63cabbe960 Log the full exception details and stack trace when creating bootstraps. 2021-10-09 11:39:08 +01:00
CalDescent
f6c1a7e6db Disregard exceptions in the bootstrap creation cleanup process because these don't affect the created bootstrap - instead just log the exception and full stack trace. 2021-10-09 11:38:13 +01:00
CalDescent
a3dcacade9 Now showing errors directly in the POST /bootstrap/create API response.
This avoids needing to check the log file each time.
2021-10-09 11:02:21 +01:00
CalDescent
17e65e422c Bump version to 2.0.0-beta.4 2021-10-08 19:11:25 +01:00
CalDescent
f53e2ffa47 Add initial peers on node startup if we don't have any in the repository.
This will be needed for future bootstraps, which don't contain any peers. It is also useful for those who have used the DELETE /peers/known API.
2021-10-08 19:10:02 +01:00
CalDescent
a1e4047695 Rework of bootstrap finalization process. 2021-10-08 18:06:41 +01:00
CalDescent
47ce884bbe Delete all known peers when creating a bootstrap 2021-10-08 15:24:10 +01:00
CalDescent
1b17c2613d Show "full node" or "top-only" in the "Downloading bootstrap" message. 2021-10-08 13:12:47 +01:00
CalDescent
dedc8d89c7 Handle case when attempting to load a block from the archive by reference, but the referenced block is in the main block repository, not the archive. This is the case with the genesis block.
Should fix issue where no block summaries were returned when syncing from block 1
2021-10-08 12:51:02 +01:00
CalDescent
d00fce86d2 Treat the genesis block as unpruned, as we leave this in the HSQLDB repository. 2021-10-08 12:42:23 +01:00
CalDescent
abab2d1cde Fixed issue preventing blocks from being served from the archive.
Now prefixing the byte buffer with the block height to mimic a cached block message.
2021-10-08 12:22:21 +01:00
CalDescent
33b715eb4e Merge branch 'networking' into v2.0-beta
# Conflicts:
#	src/main/java/org/qortal/settings/Settings.java
2021-10-07 18:53:49 +01:00
CalDescent
f6effbb6bb Removed unnecessary repository parameter from PruneManager.isBlockPruned() 2021-10-07 18:51:52 +01:00
CalDescent
dff9ec0704 Don't attempt to cache blocks from the archive, as they will never be recent 2021-10-07 18:50:59 +01:00
CalDescent
bfaf4c58e4 Make sure to check the archive when serving block summaries and signatures 2021-10-07 18:50:25 +01:00
CalDescent
ab7d24b637 Updated status wording 2021-10-07 09:02:28 +01:00
CalDescent
c256dae736 Ensure that the temp directory is always in the parent directory of the db folder. 2021-10-07 09:02:13 +01:00
CalDescent
5a55ef64c4 Bump version to 2.0.0-beta.3 2021-10-06 19:51:33 +01:00
CalDescent
045026431b Create a cleaner base directory path, without the "/./" 2021-10-06 19:50:32 +01:00
CalDescent
4dff91a0e5 Initial bootstrap import retry interval reduced from 5 minutes to 1 minute 2021-10-06 19:45:18 +01:00
CalDescent
7105872a37 Improved exception message 2021-10-06 19:44:30 +01:00
CalDescent
179bd8e018 Moved repository reopen to the finally {} block, so that we're never left without a repository instance. Should fix occasional "No repository available" error seen when retrying. 2021-10-06 19:44:04 +01:00
CalDescent
c82293342f Show full exception stack trace when a bootstrap import fails 2021-10-06 19:32:49 +01:00
CalDescent
81bf79e9d3 Bump version to 2.0.0-beta.2 2021-10-06 18:23:51 +01:00
CalDescent
8d6dffb3ff Added test for bootstrap random host selection. 2021-10-06 18:23:17 +01:00
CalDescent
2f6a8f793b Invert the colours in the splash screen 2021-10-06 18:22:52 +01:00
CalDescent
9bcd0bbfac Reduce log spam 2021-10-06 18:22:38 +01:00
CalDescent
cd359de7eb Scheduled maintenance now enabled by default, but uses a min and a max, to reduce the chances of multiple nodes running maintenance at the same time. Defaults to min: 7 days, max: 30 days. 2021-10-06 18:22:31 +01:00
CalDescent
b7e9af100a Added scheduled repository maintenance feature. Currently disabled by default. 2021-10-06 08:52:27 +01:00
CalDescent
0d6409098f Added another bootstrap host 2021-10-05 22:08:18 +01:00
CalDescent
e07238ded8 Fixed variable name 2021-10-04 22:52:47 +01:00
CalDescent
27903f278d Add tmp folder to gitignore 2021-10-04 22:45:05 +01:00
CalDescent
ddf966d08c Show progress status when extracting files 2021-10-04 22:44:51 +01:00
CalDescent
65dca36ae1 Show progress status when downloading a bootstrap 2021-10-04 22:38:58 +01:00
CalDescent
289dae0780 Fixed issue causing the local repository data backup to be overwritten with an empty list. 2021-10-04 09:28:16 +01:00
CalDescent
71f802ef35 Exponentially backoff when bootstrapping fails, to reduce bandwidth
The retry interval starts at 5 minutes and doubles with each failure.
2021-10-04 09:25:23 +01:00
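
A sketch of that retry policy; attemptBootstrap() is a hypothetical stand-in for the real import call:

    // Start at 5 minutes and double the interval after each failure,
    // so repeated failures don't keep re-downloading a large bootstrap.
    void bootstrapWithBackoff() throws InterruptedException {
        long retryInterval = 5 * 60 * 1000L; // 5 minutes
        while (!attemptBootstrap()) {
            Thread.sleep(retryInterval);
            retryInterval *= 2;
        }
    }
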
CalDescent
0135f25b9d Delete existing repository before extracting bootstrap
This limits the amount of additional space needed to the size of the compressed bootstrap (currently just under 4GB for full nodes, or 200MB for top-only nodes).
2021-10-04 09:15:54 +01:00
CalDescent
de3ebf664f Fixed issue with format specifier 2021-10-04 09:11:11 +01:00
CalDescent
850d879726 Use a "tmp" folder in the Qortal directory rather than a system-generated temp folder.
This avoids the need to move files between partitions, and we also can't assume that the system partition has enough space to do the extraction.
2021-10-04 09:10:56 +01:00
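
Roughly the following idea, with illustrative paths (not the exact Qortal code):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    // Create the working area inside the node's own directory, on the same
    // partition as the repository, so later moves are cheap renames rather
    // than cross-partition copies, and so we never rely on free space on
    // the system temp partition.
    static Path createLocalTempDir() throws IOException {
        Path tempDir = Paths.get(System.getProperty("user.dir")).resolve("tmp");
        Files.createDirectories(tempDir);
        return tempDir;
    }
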
CalDescent
5397e6c723 Bump version to 2.0.0-beta.1 2021-10-03 22:59:11 +01:00
CalDescent
889f6fc5fc Add a "testnet-" prefix in filenames when creating or importing bootstraps on testnet, so that the two databases can be kept separate. 2021-10-03 22:57:38 +01:00
CalDescent
41c2ed7c67 Fixed out of memory errors when copying AT states. 2021-10-03 22:51:15 +01:00
CalDescent
cdf47d4719 Reduce log spam. 2021-10-03 22:33:36 +01:00
CalDescent
210368bea0 Bump version to 2.0.0-beta.0 2021-10-03 19:43:28 +01:00
CalDescent
4f48751d0b Fixed issue caused when trying to update the splash frame status in a headless environment. 2021-10-03 19:43:10 +01:00
CalDescent
b6d3e82304 Update status when performing repository maintenance 2021-10-03 19:31:05 +01:00
CalDescent
3bb3528aa5 Merge branch 'master' into bootstrap 2021-10-03 18:44:13 +01:00
CalDescent
4f892835b8 Show maximum time estimations in archiving and pruning statuses 2021-10-03 18:41:47 +01:00
CalDescent
ac49221639 Show warning status on startup if the database is missing the AtStatesHeightIndex. 2021-10-03 18:34:21 +01:00
CalDescent
75ed5db3e4 Test multiple files when bulk archiving. 2021-10-03 17:33:02 +01:00
CalDescent
59c8e4e6a2 Fixed bug in earlier commit 2021-10-03 16:09:52 +01:00
CalDescent
52b322b756 Take a backup of local data before overwriting with a bootstrap.
Also moved the import phase to after the validation phase, so that the data is restored after the bootstrap completes.
2021-10-03 16:09:40 +01:00
CalDescent
dc876d9c96 Force a bootstrap if the block archive isn't intact on launch
This allows the topOnly setting to be disabled, with the node automatically bootstrapping to the archive version. A rebuild isn't attempted if bootstrapping is disabled, in order to reduce risk.
2021-10-03 16:00:11 +01:00
CalDescent
5b028428c4 Checkpoint immediately after starting/upgrading the repository
This should fix a longstanding issue where quitting the core before the first checkpoint (1-2 hours after first launch) causes the database to become corrupt.
2021-10-03 15:47:10 +01:00
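
HSQLDB exposes checkpointing as a plain SQL statement, so the fix amounts to something like this sketch (connection handling assumed):

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Issue an explicit checkpoint right after startup/upgrade instead of
    // waiting 1-2 hours for the first periodic one, so quitting early can
    // no longer leave the journal unflushed and the database corrupt.
    static void checkpointNow(Connection connection) throws SQLException {
        try (Statement stmt = connection.createStatement()) {
            stmt.execute("CHECKPOINT");
        }
    }
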
CalDescent
f67a0469fc SplashFrame styling improvements 2021-10-03 15:33:53 +01:00
CalDescent
494cd0efff Added white background to splash frame - I think it looks nicer this way, and it may solve the X2Go issues too. 2021-10-03 15:27:19 +01:00
CalDescent
fc8e38e862 Show version number in the "Starting Qortal Core" status. 2021-10-03 15:22:21 +01:00
CalDescent
f09fb5a209 Run the bulk block archiver any time that it isn't reported to be up to date. This should allow it to re-run if the user quits the core before it completes. 2021-10-03 15:17:31 +01:00
CalDescent
b00c1c1575 Update splash frame statuses when reshaping, pruning, or building the block archive 2021-10-03 15:16:36 +01:00
CalDescent
7e5dd62a92 Show bootstrap statuses on splash frame. 2021-10-03 15:01:50 +01:00
CalDescent
35718f6215 Fixed issue with bootstrap retries. 2021-10-03 15:00:40 +01:00
CalDescent
a6d3891a95 Fixed bugs when importing bootstrap. 2021-10-03 09:48:14 +01:00
CalDescent
9591c4eb58 Added support for creating top-only bootstraps 2021-10-02 19:13:18 +01:00
CalDescent
8aaf720b0b Force archiveEnabled to false if we're in top only mode.
Most of the code handles this case anyway, but it's an easy place for bugs to be created. So it's safest to enforce it at the settings level.
2021-10-02 19:11:47 +01:00
CalDescent
63a35c97bc Fixed bug in path returned to POST /bootstrap/create API. 2021-10-02 19:10:23 +01:00
CalDescent
8eddaa3fac Small refactor to update wording. 2021-10-02 15:48:31 +01:00
CalDescent
1b3f37eb78 Delete the "archive" folder when in top-only mode
This allows a block archive node to be switched into top-only mode.
2021-10-02 15:37:06 +01:00
CalDescent
1f8fbfaa24 Missed a few topOnly references from the last commit. 2021-10-02 15:34:55 +01:00
CalDescent
ea92ccb4c1 "pruningEnabled" setting renamed to "topOnly"
Pruning is still a concept in the code, but since it relates to both archive mode and topOnly mode, it is cleaner to use "topOnly" to refer to the pruned db with no archive.
2021-10-02 15:13:23 +01:00
CalDescent
d25a77b633 Ignore bootstraps and other 7z archives in git. 2021-10-02 15:04:00 +01:00
CalDescent
51bb776e56 Select a random host when importing a bootstrap, and started adding support for multiple bootstrap types. 2021-10-02 15:03:31 +01:00
CalDescent
47b1b6daba Retry the entire bootstrap import process on failure, rather than just the download. 2021-10-02 14:43:26 +01:00
CalDescent
adeb654248 Rework of repository maintenance and backups
It will now attempt to wait until there are no other active transactions before starting, to avoid deadlocks. A timeout for this process is specified - generally 60 seconds - so that callers can give up or retry if something is holding a transaction open for too long. Right now we will give up in all places except for bootstrap creation, where it will keep retrying until successful.
2021-10-02 13:23:26 +01:00
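
The caller-side shape of the new contract, as it appears in the AdminResource diff further down:

    // Timeout if the database isn't ready to start after 60 seconds
    long timeout = 60 * 1000L;
    try {
        repository.performPeriodicMaintenance(timeout);
    } catch (TimeoutException e) {
        // Something held a transaction open for too long; the caller can
        // give up (most places) or retry (bootstrap creation).
    }
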
CalDescent
c4d7335fdd Fixed more issues that could cause transactions to be held open. 2021-10-02 13:05:16 +01:00
CalDescent
ca7f42c409 Reduced unnecessary database queries in the block archiver. 2021-10-02 11:52:20 +01:00
CalDescent
ca02cd72ae Fixed issue in block archiver, which caused it to hold a transaction open for a very long time. This caused deadlocks when trying to create bootstraps or perform repository maintenance. 2021-10-02 11:51:53 +01:00
CalDescent
1ba542eb50 Simplified minting code in block archive tests. 2021-10-01 14:51:45 +01:00
CalDescent
53cd967541 Added defrag (repository maintenance) tests. 2021-10-01 14:51:28 +01:00
CalDescent
49749a0bc7 Added more precise checking of database states to the bulk pruning test. This highlighted a major bug in the bulk prune process whereby the recent AT states weren't being retained. 2021-10-01 11:03:56 +01:00
CalDescent
446f924380 Added bulk pruning test, which highlighted some bugs in both bulk and regular pruning. 2021-10-01 09:32:35 +01:00
CalDescent
5b231170cd Fixed small issues with block archive tests. 2021-10-01 08:10:28 +01:00
CalDescent
7375357b11 Added bootstrap tests
This involved adding a feature to the test suite to include the option of using a repository located on disk rather than in memory. Also moved the bootstrap compression/extraction working directories to temporary folders.
2021-10-01 07:44:33 +01:00
CalDescent
347d799d85 Reduce log spam. 2021-09-28 20:30:06 +01:00
CalDescent
0d17f02191 Pass a repository instance into the bulk archiving and pruning methods.
This is a better approach than opening a new session for each, and it makes it easier to write unit tests.
2021-09-28 20:29:53 +01:00
CalDescent
ce5bc80347 Increased threshold of BlockArchiveWriter.isArchiverUpToDate() from 90 to 95%.
In practice, the figure for a correctly archived chain with 550k blocks is currently around 99.5%, but it will be lower if starting with a chain that isn't fully synced.
2021-09-28 20:21:19 +01:00
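
In other words (names assumed), the check is a simple ratio against the chain tip:

    // The archiver is considered up to date once the archive covers at
    // least 95% of the current chain height (previously 90%).
    static boolean isArchiverUpToDate(int archiveHeight, int chainHeight) {
        return (double) archiveHeight / (double) chainHeight >= 0.95;
    }
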
CalDescent
0a4479fe9e Initial implementation of automatic bootstrapping
Currently supports block archive (i.e. full) bootstraps only. Still need to add support for top-only bootstraps.
2021-09-28 20:17:19 +01:00
CalDescent
de8e96cd75 Added Blockchain.validateAllBlocks() to check every block back to genesis.
This is extremely slow and shouldn't be needed in normal use cases. It currently checks that each block references the one before, but can ultimately be expanded to check more information about each block and its derived data.
2021-09-28 09:28:47 +01:00
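
A sketch of the walk it performs, using the repository accessors that appear elsewhere in this diff (error handling simplified; requires java.util.Arrays):

    // Walk from the chain tip back to genesis, checking that each block's
    // reference equals the signature of the block one height below it.
    static void validateAllBlocks(Repository repository) throws DataException {
        BlockData child = repository.getBlockRepository().getLastBlock();
        for (int height = child.getHeight() - 1; height >= 1; height--) {
            BlockData parent = repository.getBlockRepository().fromHeight(height);
            if (parent == null || !Arrays.equals(child.getReference(), parent.getSignature()))
                throw new DataException("Chain is broken between heights " + height + " and " + (height + 1));
            child = parent;
        }
    }
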
CalDescent
e2a62f88a6 Modified repository backup and recovery to allow a custom filename to be specified. 2021-09-28 09:26:18 +01:00
CalDescent
8926d2a73c Rework of import/export process.
- Adds support for minting accounts as well as trade bot states
- Includes automatic import of both types on node startup, and automatic export on node shutdown
- Retains legacy trade bot states in a separate "TradeBotStatesArchive.json" file, whilst keeping the current active ones in "TradeBotStates.json". This prevents states being re-imported after they have been removed, but still keeps a copy of the data in case a key is ever needed.
- Uses indentation in the JSON files for easier readability.
2021-09-28 09:24:21 +01:00
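
Illustrative only (the JSON library here is an assumption): the indentation and the active/archive split look roughly like this:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import org.json.JSONArray;

    // Active states remain importable on startup; removed states are parked
    // in the archive file so keys are never lost but are not re-imported.
    static void exportTradeBotStates(JSONArray active, JSONArray archived) throws Exception {
        Files.writeString(Paths.get("TradeBotStates.json"), active.toString(2));        // indented JSON
        Files.writeString(Paths.get("TradeBotStatesArchive.json"), archived.toString(2));
    }
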
CalDescent
114833cf8e Fixed NPE when attempting to lookup a block signature that doesn't exist. 2021-09-27 09:13:19 +01:00
CalDescent
32227436e0 Updated AdvancedInstaller project for v1.7.0 2021-09-27 08:43:44 +01:00
CalDescent
656896d16f Fixed issue causing base block prune/trim heights to not be updated on the final pass. 2021-09-24 21:36:05 +01:00
CalDescent
19bf8afece Fixed bug in pruning phase on node startup
This was causing very recent AT states to be deleted accidentally, because we weren't rebuilding the LatestATStates table before running the query. We should add unit tests to cover this process in case there are any other undiscovered problems.
2021-09-24 20:52:45 +01:00
CalDescent
841b6c4ddf Fixed another issue causing ATStatesHeightIndex to go missing after pruning. 2021-09-24 15:58:09 +01:00
CalDescent
4c171df848 Disable archiving and pruning if the AtStatesHeightIndex is missing, and log it so that the user knows they should bootstrap or resync. 2021-09-24 11:14:18 +01:00
CalDescent
1f79d88840 Fixed errors found in unit tests. 2021-09-24 09:35:36 +01:00
CalDescent
6ee7e9d731 Merge branch 'block-archive' of github.com:Qortal/qortal into block-archive
# Conflicts:
#	src/main/java/org/qortal/controller/Controller.java
2021-09-24 08:50:35 +01:00
CalDescent
4856223838 Fixed error in rebase. 2021-09-24 08:50:00 +01:00
CalDescent
74ea2a847d Added unit tests for trimming, pruning, and archiving. 2021-09-24 08:37:36 +01:00
CalDescent
9813dde3d9 Added importFromArchive() feature
This allows archived blocks to be imported back into HSQLDB in order to make them SQL-compatible again.
2021-09-24 08:37:36 +01:00
CalDescent
fea7b62b9c Fixed some bugs found in unit testing. 2021-09-24 08:37:36 +01:00
CalDescent
37e03bf2bb Removed BLOCK_LIMIT_REACHED result from the block archive writer.
This wasn't needed, and is now instead caught by the NOT_ENOUGH_BLOCKS result.
2021-09-24 08:37:36 +01:00
CalDescent
5656de79a2 Removed maxDuplicatedBlocksWhenArchiving setting as it's no longer needed. 2021-09-24 08:37:36 +01:00
CalDescent
70c6048cc1 Added block archive mode
This takes all trimmed blocks (which should now be all but the last 1450 or so) and moves them into flat files. Each file contains the serialized bytes of as many blocks as can fit within the file size target of 100MiB.

As a result, the HSQLDB size drops to less than 1GB, making it much faster and easier to maintain. It also significantly reduces the total size of each full node, because the data is stored in a highly optimized way.

HSQLDB then works similarly to the way it does in pruning mode - it holds all transactions, the latest state of every AT, as well as the full AT states data and hashes for the past 1450 blocks.

Each archive file contains headers and indexes in order to quickly locate blocks. When a peer requests a block that is within the archive, the serialized bytes are sent directly without the need to go via a BlockData object. Now that there are no slow queries or data serialization processes needed, this should greatly speed up block serving.

The /block API endpoints have been modified in such a way that they will also check and retrieve blocks from the archive when needed.

A lightweight "BlockArchive" table is needed in HSQLDB to map block heights to signatures, minters, and timestamps. It made more sense to keep SQL support for these basic attributes of each block. These are located in a separate table from the full blocks, in order to create a clear distinction between HSQLDB blocks and archived blocks, and also to speed up query times in the Blocks table, which is the one we are using 99% of the time.

There is currently a restriction on the /admin/orphan API endpoint to prevent orphaning beyond the threshold of the block archive.
2021-09-24 08:37:36 +01:00
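
A much-simplified sketch of the flat-file idea (the real header/index format is not shown in this compare, and these names are assumptions): append serialized blocks until the file nears the 100MiB target, keeping a height-to-offset index so blocks can be located without scanning:

    import java.io.DataOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ArchiveFileSketch {
        private static final long FILE_SIZE_TARGET = 100L * 1024 * 1024; // 100MiB

        // Returns a height -> byte offset index for the blocks written;
        // a real implementation would persist this alongside the file and
        // record height/signature/minter/timestamp rows in BlockArchive.
        public static Map<Integer, Long> writeArchiveFile(String filename,
                Map<Integer, byte[]> serializedBlocksByHeight) throws IOException {
            Map<Integer, Long> index = new LinkedHashMap<>();
            long offset = 0;
            try (DataOutputStream out = new DataOutputStream(new FileOutputStream(filename))) {
                for (Map.Entry<Integer, byte[]> entry : serializedBlocksByHeight.entrySet()) {
                    byte[] blockBytes = entry.getValue();
                    if (offset + 4 + blockBytes.length > FILE_SIZE_TARGET)
                        break; // continue in the next archive file
                    index.put(entry.getKey(), offset);
                    out.writeInt(blockBytes.length); // simple per-block length header
                    out.write(blockBytes);
                    offset += 4 + blockBytes.length;
                }
            }
            return index;
        }
    }
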
CalDescent
87595fd704 Synchronized LatestATStates, to make rebuildLatestAtStates() thread safe. 2021-09-24 08:37:15 +01:00
CalDescent
dc030a42bb Moved trimming and pruning classes into a single package (org.qortal.controller.repository) 2021-09-24 08:37:15 +01:00
CalDescent
89283ed179 Increased atStatesPruneBatchSize from 10 to 25. 2021-09-24 08:36:52 +01:00
CalDescent
64e8a05a9f Prune ATStatesData as well as the ATStates when switching to pruning mode. 2021-09-24 08:36:52 +01:00
CalDescent
676320586a Updated tests to use the renamed method. 2021-09-24 08:36:52 +01:00
CalDescent
734fa51806 Unified the code to build the LatestATStates table, as it's now used by more than one class.
Note - the rebuildLatestAtStates() must never be used by two different classes at the same time, or AT states could be incorrectly deleted. It is okay at the moment as we don't run the AT states trimmer and pruner in the same app session. However we should probably synchronize this method so that we don't accidentally call it from two places in the future.
2021-09-24 08:36:52 +01:00
CalDescent
f056ecc8d8 Added bulk pruning phase on node startup the first time that pruning mode is enabled.
When switching from a full node to a pruning node, we need to delete most of the database contents. If we do this entirely as a background process, it is very slow and can interfere with syncing. However, if we take the approach of transferring only the necessary rows to a new table and then deleting the original table, this makes the process much faster. It was taking several days to delete the AT states in the background, but only a couple of minutes to copy them to a new table.

The trade-off is that we have to go through a form of "reshape" when starting the app for the first time after enabling pruning mode. But given that this is an opt-in mode, I don't think it will be a problem.

Once the pruning is complete, it automatically performs a CHECKPOINT DEFRAG in order to shrink the database file size down to a fraction of what it was before.

From this point, the original background process will run, but can be dialled right down so as not to interfere with syncing.
2021-09-24 08:36:52 +01:00
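
The copy-then-drop trick maps onto plain SQL; a sketch via JDBC, with table and column names assumed:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Copy only the recent rows we keep, drop the huge original table,
    // swap names, then defrag to reclaim the freed disk space.
    static void bulkPruneAtStates(Connection connection, int pruneHeight) throws SQLException {
        try (Statement stmt = connection.createStatement()) {
            stmt.execute("CREATE TABLE ATStatesKept AS (SELECT * FROM ATStates WHERE height >= " + pruneHeight + ") WITH DATA");
            stmt.execute("DROP TABLE ATStates");
            stmt.execute("ALTER TABLE ATStatesKept RENAME TO ATStates");
            stmt.execute("CHECKPOINT DEFRAG"); // shrink the database file to its new contents
        }
    }
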
CalDescent
1a722c1517 Break out of the AT pruning inner loops if we're stopping the app. 2021-09-24 08:36:51 +01:00
CalDescent
44607ba6a4 Fixed NPE introduced in earlier commit. 2021-09-24 08:36:51 +01:00
CalDescent
01d66212da Updated AT states pruner as it previously relied on blocks being present in the db to make decisions. As a side effect, this now prunes ATs up to the pruneBlockLimit too, rather than keeping the last 35 days or so. Will review this later but I don't think we will need the missing ones. 2021-09-24 08:36:51 +01:00
CalDescent
925e10b19b Rework of Blockchain.validate() to account for pruning mode. 2021-09-24 08:36:51 +01:00
CalDescent
1b4c75a76e Prune all blocks up until the blockPruneLimit
By default, this leaves only the last 1450 blocks in the database. Only applies when pruning mode is enabled.
2021-09-24 08:36:51 +01:00
CalDescent
3400e36ac4 Started work on pruning mode (top-only-sync)
Initially just deleting old and unused AT states, to get this table under control. I have had to delete them individually as the table can't handle complex queries due to its size.

Nodes in pruning mode will be unable to serve older blocks to peers.
2021-09-24 08:36:51 +01:00
CalDescent
ba06225b01 Merge branch 'master' into block-archive 2021-09-12 10:17:11 +01:00
CalDescent
14f6fd19ef Added unit tests for trimming, pruning, and archiving. 2021-09-12 10:13:52 +01:00
CalDescent
1d8351f921 Added importFromArchive() feature
This allows archived blocks to be imported back into HSQLDB in order to make them SQL-compatible again.
2021-09-12 10:10:25 +01:00
CalDescent
6a55b052f5 Fixed some bugs found in unit testing. 2021-09-12 09:57:12 +01:00
CalDescent
2a36b83dea Removed BLOCK_LIMIT_REACHED result from the block archive writer.
This wasn't needed, and is now instead caught by the NOT_ENOUGH_BLOCKS result.
2021-09-12 09:55:49 +01:00
CalDescent
14acc4feb9 Removed maxDuplicatedBlocksWhenArchiving setting as it's no longer needed. 2021-09-12 09:52:28 +01:00
CalDescent
0657ca2969 atStatesMaxLifetime increased to 5 days
For now, we need some headroom to allow for orphaning in the event of a problem. Orphaning currently fails if there is no ATStatesData available (which is the case for trimmed blocks). This could ultimately be solved by retaining older unique states.
2021-09-09 17:46:19 +01:00
CalDescent
703cdfe174 Added block archive mode
This takes all trimmed blocks (which should now be all but the last 1450 or so) and moves them into flat files. Each file contains the serialized bytes of as many blocks as can fit within the file size target of 100MiB.

As a result, the HSQLDB size drops to less than 1GB, making it much faster and easier to maintain. It also significantly reduces the total size of each full node, because the data is stored in a highly optimized way.

HSQLDB then works similarly to the way it does in pruning mode - it holds all transactions, the latest state of every AT, as well as the full AT states data and hashes for the past 1450 blocks.

Each archive file contains headers and indexes in order to quickly locate blocks. When a peer requests a block that is within the archive, the serialized bytes are sent directly without the need to go via a BlockData object. Now that there are no slow queries or data serialization processes needed, this should greatly speed up block serving.

The /block API endpoints have been modified in such a way that they will also check and retrieve blocks from the archive when needed.

A lightweight "BlockArchive" table is needed in HSQLDB to map block heights to signatures, minters, and timestamps. It made more sense to keep SQL support for these basic attributes of each block. These are located in a separate table from the full blocks, in order to create a clear distinction between HSQLDB blocks and archived blocks, and also to speed up query times in the Blocks table, which is the one we are using 99% of the time.

There is currently a restriction on the /admin/orphan API endpoint to prevent orphaning beyond the threshold of the block archive.
2021-09-04 19:40:51 +01:00
CalDescent
02988989ad Reduced online account signatures min and max lifetimes
onlineAccountSignaturesMinLifetime reduced from 720 hours to 12 hours
onlineAccountSignaturesMaxLifetime reduced from 888 hours to 24 hours

These were using up too much space in the database and so it makes sense to trim them more aggressively (assuming testing goes well). We will now stop validating online account signatures after 12 hours, which should be more than enough confirmations, and we will discard them after 24 hours.

Note: this will create some complexity once some of the network is running this code. It could cause out-of-sync nodes on old versions to start treating blocks from updated peers as invalid. It's likely not worth the complexity of a hard fork though, given that almost all nodes will be synced to the chain tip and will therefore be unaffected. And even with a hard fork, we'd still face this problem on out-of-date nodes.
2021-09-03 10:11:02 +01:00
CalDescent
25c17d3704 atStatesMaxLifetime reduced from 14 days to 24 hours 2021-09-03 10:04:04 +01:00
CalDescent
9973fe4326 Synchronized LatestATStates, to make rebuildLatestAtStates() thread safe. 2021-08-28 11:00:49 +01:00
CalDescent
2479f2d65d Moved trimming and pruning classes into a single package (org.qortal.controller.repository) 2021-08-27 09:45:56 +01:00
CalDescent
9056cb7026 Increased atStatesPruneBatchSize from 10 to 25. 2021-08-27 09:45:56 +01:00
CalDescent
cd9d9b31ef Prune ATStatesData as well as the ATStates when switching to pruning mode. 2021-08-27 09:45:56 +01:00
CalDescent
ff841c28e3 Updated tests to use the renamed method. 2021-08-27 09:45:56 +01:00
CalDescent
ca1379d9f8 Unified the code to build the LatestATStates table, as it's now used by more than one class.
Note - the rebuildLatestAtStates() must never be used by two different classes at the same time, or AT states could be incorrectly deleted. It is okay at the moment as we don't run the AT states trimmer and pruner in the same app session. However we should probably synchronize this method so that we don't accidentally call it from two places in the future.
2021-08-27 09:45:56 +01:00
CalDescent
5127f94423 Added bulk pruning phase on node startup the first time that pruning mode is enabled.
When switching from a full node to a pruning node, we need to delete most of the database contents. If we do this entirely as a background process, it is very slow and can interfere with syncing. However, if we take the approach of transferring only the necessary rows to a new table and then deleting the original table, this makes the process much faster. It was taking several days to delete the AT states in the background, but only a couple of minutes to copy them to a new table.

The trade-off is that we have to go through a form of "reshape" when starting the app for the first time after enabling pruning mode. But given that this is an opt-in mode, I don't think it will be a problem.

Once the pruning is complete, it automatically performs a CHECKPOINT DEFRAG in order to shrink the database file size down to a fraction of what it was before.

From this point, the original background process will run, but can be dialled right down so as not to interfere with syncing.
2021-08-27 09:45:56 +01:00
CalDescent
f5910ab950 Break out of the AT pruning inner loops if we're stopping the app. 2021-08-27 09:45:56 +01:00
CalDescent
22efaccd4a Fixed NPE introduced in earlier commit. 2021-08-27 09:45:56 +01:00
CalDescent
c8466a2e7a Updated AT states pruner as it previously relied on blocks being present in the db to make decisions. As a side effect, this now prunes ATs up to the pruneBlockLimit too, rather than keeping the last 35 days or so. Will review this later but I don't think we will need the missing ones. 2021-08-27 09:45:56 +01:00
CalDescent
209a9fa8c3 Rework of Blockchain.validate() to account for pruning mode. 2021-08-27 09:45:56 +01:00
CalDescent
bc1af12655 Prune all blocks up until the blockPruneLimit
By default, this leaves only the last 1450 blocks in the database. Only applies when pruning mode is enabled.
2021-08-27 09:45:55 +01:00
CalDescent
e7e4cb7579 Started work on pruning mode (top-only-sync)
Initially just deleting old and unused AT states, to get this table under control. I have had to delete them individually as the table can't handle complex queries due to its size.

Nodes in pruning mode will be unable to serve older blocks to peers.
2021-08-27 09:45:55 +01:00
CalDescent
c3ff9e49e8 Merge pull request #40 from szisti/fixedNetwork
Support for configuration based fixed network
2021-05-29 09:51:56 +01:00
Istvan Szabo
d52875aa8f Added logs to intentional disconnects 2021-05-28 16:06:27 +01:00
Istvan Szabo
9027cd290c Filter out on demand connections when using fixed network 2021-05-28 14:47:30 +01:00
Istvan Szabo
58a7203ede Support for configuration based fixed network 2021-05-28 14:47:30 +01:00
73 changed files with 6240 additions and 514 deletions

.gitignore (2 changes)
View File

@@ -26,3 +26,5 @@
/run.pid
/run.log
/WindowsInstaller/Install Files/qortal.jar
/*.7z
/tmp

View File

@@ -17,10 +17,10 @@
<ROW Property="Manufacturer" Value="Qortal"/>
<ROW Property="MsiLogging" MultiBuildValue="DefaultBuild:vp"/>
<ROW Property="NTP_GOOD" Value="false"/>
<ROW Property="ProductCode" Value="1033:{482E9390-1005-42FD-9F3F-E160E0E6FB19} 1049:{8FE09AC2-814B-42FC-9FCE-53D45A396529} 2052:{4FABD326-8345-438B-82B8-66C2DC3676E6} 2057:{7ECFFF43-DEC7-4B7F-BC88-260A10AF132A} " Type="16"/>
<ROW Property="ProductCode" Value="1033:{7B3C1C09-F01F-4E39-91D5-A8CF83FF5A54} 1049:{F7FB426D-E66F-4C45-ACE2-189D9DD9252F} 2052:{D0A1DC46-0E82-46A1-A882-05E3A66D1198} 2057:{4C08BD8D-ADB4-4E4B-853E-426926C32D6D} " Type="16"/>
<ROW Property="ProductLanguage" Value="2057"/>
<ROW Property="ProductName" Value="Qortal"/>
<ROW Property="ProductVersion" Value="1.6.0" Type="32"/>
<ROW Property="ProductVersion" Value="1.7.0" Type="32"/>
<ROW Property="RECONFIG_NTP" Value="true"/>
<ROW Property="REMOVE_BLOCKCHAIN" Value="YES" Type="4"/>
<ROW Property="REPAIR_BLOCKCHAIN" Value="YES" Type="4"/>
@@ -212,7 +212,7 @@
<ROW Component="ADDITIONAL_LICENSE_INFO_71" ComponentId="{12A3ADBE-BB7A-496C-8869-410681E6232F}" Directory_="jdk.zipfs_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_71" Type="0"/>
<ROW Component="ADDITIONAL_LICENSE_INFO_8" ComponentId="{D53AD95E-CF96-4999-80FC-5812277A7456}" Directory_="java.naming_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_8" Type="0"/>
<ROW Component="ADDITIONAL_LICENSE_INFO_9" ComponentId="{6B7EA9B0-5D17-47A8-B78C-FACE86D15E01}" Directory_="java.net.http_Dir" Attributes="0" KeyPath="ADDITIONAL_LICENSE_INFO_9" Type="0"/>
<ROW Component="AI_CustomARPName" ComponentId="{7941AD6C-7C09-48E7-93ED-0340E0F52EC0}" Directory_="APPDIR" Attributes="260" KeyPath="DisplayName" Options="1"/>
<ROW Component="AI_CustomARPName" ComponentId="{45C3C526-BCC9-4E9A-9260-A5EEA10A03C8}" Directory_="APPDIR" Attributes="260" KeyPath="DisplayName" Options="1"/>
<ROW Component="AI_ExePath" ComponentId="{3644948D-AE0B-41BB-9FAF-A79E70490A08}" Directory_="APPDIR" Attributes="260" KeyPath="AI_ExePath"/>
<ROW Component="APPDIR" ComponentId="{680DFDDE-3FB4-47A5-8FF5-934F576C6F91}" Directory_="APPDIR" Attributes="0"/>
<ROW Component="AccessBridgeCallbacks.h" ComponentId="{288055D1-1062-47A3-AA44-5601B4E38AED}" Directory_="bridge_Dir" Attributes="0" KeyPath="AccessBridgeCallbacks.h" Type="0"/>

pom.xml (20 changes)
View File

@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.qortal</groupId>
<artifactId>qortal</artifactId>
<version>1.7.0</version>
<version>2.0.0-beta.7</version>
<packaging>jar</packaging>
<properties>
<skipTests>true</skipTests>
@@ -14,6 +14,9 @@
<ciyam-at.version>1.3.8</ciyam-at.version>
<commons-net.version>3.6</commons-net.version>
<commons-text.version>1.8</commons-text.version>
<commons-io.version>2.6</commons-io.version>
<commons-compress.version>1.21</commons-compress.version>
<xz.version>1.9</xz.version>
<dagger.version>1.2.2</dagger.version>
<guava.version>28.1-jre</guava.version>
<hsqldb.version>2.5.1</hsqldb.version>
@@ -449,6 +452,21 @@
<artifactId>commons-text</artifactId>
<version>${commons-text.version}</version>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>${commons-io.version}</version>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-compress</artifactId>
<version>${commons-compress.version}</version>
</dependency>
<dependency>
<groupId>org.tukaani</groupId>
<artifactId>xz</artifactId>
<version>${xz.version}</version>
</dependency>
<!-- For bitset/bitmap compression -->
<dependency>
<groupId>io.druid</groupId>

View File

@@ -1,6 +1,7 @@
package org.qortal;
import java.security.Security;
import java.util.concurrent.TimeoutException;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -57,10 +58,10 @@ public class RepositoryMaintenance {
LOGGER.info("Starting repository periodic maintenance. This can take a while...");
try (final Repository repository = RepositoryManager.getRepository()) {
repository.performPeriodicMaintenance();
repository.performPeriodicMaintenance(null);
LOGGER.info("Repository periodic maintenance completed");
} catch (DataException e) {
} catch (DataException | TimeoutException e) {
LOGGER.error("Repository periodic maintenance failed", e);
}

View File

@@ -16,4 +16,8 @@ public enum ApiExceptionFactory {
return createException(request, apiError, null);
}
public ApiException createCustomException(HttpServletRequest request, ApiError apiError, String message) {
return new ApiException(apiError.getStatus(), apiError.getCode(), message, null);
}
}

View File

@@ -22,6 +22,7 @@ import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.locks.ReentrantLock;
import java.util.stream.Collectors;
@@ -35,6 +36,7 @@ import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.appender.RollingFileAppender;
import org.qortal.account.Account;
@@ -67,6 +69,8 @@ import com.google.common.collect.Lists;
@Tag(name = "Admin")
public class AdminResource {
private static final Logger LOGGER = LogManager.getLogger(AdminResource.class);
private static final int MAX_LOG_LINES = 500;
@Context
@@ -459,6 +463,23 @@ public class AdminResource {
if (targetHeight <= 0 || targetHeight > Controller.getInstance().getChainHeight())
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_HEIGHT);
// Make sure we're not orphaning as far back as the archived blocks
// FUTURE: we could support this by first importing earlier blocks from the archive
if (Settings.getInstance().isTopOnly() ||
Settings.getInstance().isArchiveEnabled()) {
try (final Repository repository = RepositoryManager.getRepository()) {
// Find the first unarchived block
int oldestBlock = repository.getBlockArchiveRepository().getBlockArchiveHeight();
// Add some extra blocks just in case we're currently archiving/pruning
oldestBlock += 100;
if (targetHeight <= oldestBlock) {
LOGGER.info("Unable to orphan beyond block {} because it is archived", oldestBlock);
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_HEIGHT);
}
}
}
if (BlockChain.orphan(targetHeight))
return "true";
else
@@ -589,6 +610,10 @@ public class AdminResource {
repository.saveChanges();
return "true";
} catch (IOException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA, e);
} finally {
blockchainLock.unlock();
}
@@ -644,14 +669,16 @@ public class AdminResource {
blockchainLock.lockInterruptibly();
try {
repository.backup(true);
// Timeout if the database isn't ready for backing up after 60 seconds
long timeout = 60 * 1000L;
repository.backup(true, "backup", timeout);
repository.saveChanges();
return "true";
} finally {
blockchainLock.unlock();
}
} catch (InterruptedException e) {
} catch (InterruptedException | TimeoutException e) {
// We couldn't lock blockchain to perform backup
return "false";
} catch (DataException e) {
@@ -676,13 +703,15 @@ public class AdminResource {
blockchainLock.lockInterruptibly();
try {
repository.performPeriodicMaintenance();
// Timeout if the database isn't ready to start after 60 seconds
long timeout = 60 * 1000L;
repository.performPeriodicMaintenance(timeout);
} finally {
blockchainLock.unlock();
}
} catch (InterruptedException e) {
// No big deal
} catch (DataException e) {
} catch (DataException | TimeoutException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}

View File

@@ -15,6 +15,8 @@ import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.RoundingMode;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import javax.servlet.http.HttpServletRequest;
@@ -33,11 +35,13 @@ import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.model.BlockMintingInfo;
import org.qortal.api.model.BlockSignerSummary;
import org.qortal.block.Block;
import org.qortal.controller.Controller;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.AccountData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.BlockArchiveReader;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
@@ -81,11 +85,19 @@ public class BlocksResource {
}
try (final Repository repository = RepositoryManager.getRepository()) {
// Check the database first
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
if (blockData != null) {
return blockData;
}
return blockData;
// Not found, so try the block archive
blockData = repository.getBlockArchiveRepository().fromSignature(signature);
if (blockData != null) {
return blockData;
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -116,16 +128,24 @@ public class BlocksResource {
}
try (final Repository repository = RepositoryManager.getRepository()) {
// Check the database first
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
if (blockData != null) {
Block block = new Block(repository, blockData);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bytes.write(Ints.toByteArray(block.getBlockData().getHeight()));
bytes.write(BlockTransformer.toBytes(block));
return Base58.encode(bytes.toByteArray());
}
Block block = new Block(repository, blockData);
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
bytes.write(Ints.toByteArray(block.getBlockData().getHeight()));
bytes.write(BlockTransformer.toBytes(block));
return Base58.encode(bytes.toByteArray());
// Not found, so try the block archive
byte[] bytes = BlockArchiveReader.getInstance().fetchSerializedBlockBytesForSignature(signature, false, repository);
if (bytes != null) {
return Base58.encode(bytes);
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
} catch (TransformationException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA, e);
} catch (DataException | IOException e) {
@@ -170,8 +190,12 @@ public class BlocksResource {
}
try (final Repository repository = RepositoryManager.getRepository()) {
if (repository.getBlockRepository().getHeightFromSignature(signature) == 0)
// Check if the block exists in either the database or archive
if (repository.getBlockRepository().getHeightFromSignature(signature) == 0 &&
repository.getBlockArchiveRepository().getHeightFromSignature(signature) == 0) {
// Not found in either the database or archive
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
return repository.getBlockRepository().getTransactionsFromSignature(signature, limit, offset, reverse);
} catch (DataException e) {
@@ -200,7 +224,19 @@ public class BlocksResource {
})
public BlockData getFirstBlock() {
try (final Repository repository = RepositoryManager.getRepository()) {
return repository.getBlockRepository().fromHeight(1);
// Check the database first
BlockData blockData = repository.getBlockRepository().fromHeight(1);
if (blockData != null) {
return blockData;
}
// Try the archive
blockData = repository.getBlockArchiveRepository().fromHeight(1);
if (blockData != null) {
return blockData;
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -262,17 +298,28 @@ public class BlocksResource {
}
try (final Repository repository = RepositoryManager.getRepository()) {
BlockData childBlockData = null;
// Check if block exists in database
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData != null) {
return repository.getBlockRepository().fromReference(signature);
}
// Check block exists
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
BlockData childBlockData = repository.getBlockRepository().fromReference(signature);
// Not found, so try the archive
// This also checks that the parent block exists
// It will return null if either the parent or child don't exist
childBlockData = repository.getBlockArchiveRepository().fromReference(signature);
// Check child block exists
if (childBlockData == null)
if (childBlockData == null) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
// Check child block's reference matches the supplied signature
if (!Arrays.equals(childBlockData.getReference(), signature)) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
return childBlockData;
} catch (DataException e) {
@@ -338,13 +385,20 @@ public class BlocksResource {
}
try (final Repository repository = RepositoryManager.getRepository()) {
// Firstly check the database
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData != null) {
return blockData.getHeight();
}
// Check block exists
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
// Not found, so try the archive
blockData = repository.getBlockArchiveRepository().fromSignature(signature);
if (blockData != null) {
return blockData.getHeight();
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
return blockData.getHeight();
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -371,11 +425,20 @@ public class BlocksResource {
})
public BlockData getByHeight(@PathParam("height") int height) {
try (final Repository repository = RepositoryManager.getRepository()) {
// Firstly check the database
BlockData blockData = repository.getBlockRepository().fromHeight(height);
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
if (blockData != null) {
return blockData;
}
// Not found, so try the archive
blockData = repository.getBlockArchiveRepository().fromHeight(height);
if (blockData != null) {
return blockData;
}
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
return blockData;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -402,12 +465,31 @@ public class BlocksResource {
})
public BlockMintingInfo getBlockMintingInfoByHeight(@PathParam("height") int height) {
try (final Repository repository = RepositoryManager.getRepository()) {
// Try the database
BlockData blockData = repository.getBlockRepository().fromHeight(height);
if (blockData == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
if (blockData == null) {
// Not found, so try the archive
blockData = repository.getBlockArchiveRepository().fromHeight(height);
if (blockData == null) {
// Still not found
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
}
Block block = new Block(repository, blockData);
BlockData parentBlockData = repository.getBlockRepository().fromSignature(blockData.getReference());
if (parentBlockData == null) {
// Parent block not found - try the archive
parentBlockData = repository.getBlockArchiveRepository().fromSignature(blockData.getReference());
if (parentBlockData == null) {
// Still not found
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
}
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, blockData.getMinterPublicKey());
if (minterLevel == 0)
// This may be unavailable when requesting a trimmed block
@@ -454,13 +536,26 @@ public class BlocksResource {
})
public BlockData getByTimestamp(@PathParam("timestamp") long timestamp) {
try (final Repository repository = RepositoryManager.getRepository()) {
int height = repository.getBlockRepository().getHeightFromTimestamp(timestamp);
if (height == 0)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
BlockData blockData = null;
BlockData blockData = repository.getBlockRepository().fromHeight(height);
if (blockData == null)
// Try the Blocks table
int height = repository.getBlockRepository().getHeightFromTimestamp(timestamp);
if (height > 0) {
// Found match in Blocks table
return repository.getBlockRepository().fromHeight(height);
}
// Not found in Blocks table, so try the archive
height = repository.getBlockArchiveRepository().getHeightFromTimestamp(timestamp);
if (height > 0) {
// Found match in archive
blockData = repository.getBlockArchiveRepository().fromHeight(height);
}
// Ensure block exists
if (blockData == null) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
return blockData;
} catch (DataException e) {
@@ -497,9 +592,14 @@ public class BlocksResource {
for (/* count already set */; count > 0; --count, ++height) {
BlockData blockData = repository.getBlockRepository().fromHeight(height);
if (blockData == null)
// Run out of blocks!
break;
if (blockData == null) {
// Not found - try the archive
blockData = repository.getBlockArchiveRepository().fromHeight(height);
if (blockData == null) {
// Run out of blocks!
break;
}
}
blocks.add(blockData);
}
@@ -544,7 +644,29 @@ public class BlocksResource {
if (accountData == null || accountData.getPublicKey() == null)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.PUBLIC_KEY_NOT_FOUND);
return repository.getBlockRepository().getBlockSummariesBySigner(accountData.getPublicKey(), limit, offset, reverse);
List<BlockSummaryData> summaries = repository.getBlockRepository()
.getBlockSummariesBySigner(accountData.getPublicKey(), limit, offset, reverse);
// Add any from the archive
List<BlockSummaryData> archivedSummaries = repository.getBlockArchiveRepository()
.getBlockSummariesBySigner(accountData.getPublicKey(), limit, offset, reverse);
if (archivedSummaries != null && !archivedSummaries.isEmpty()) {
summaries.addAll(archivedSummaries);
}
else {
summaries = archivedSummaries;
}
// Sort the results (because they may have been obtained from two places)
if (reverse != null && reverse) {
summaries.sort((s1, s2) -> Integer.valueOf(s2.getHeight()).compareTo(Integer.valueOf(s1.getHeight())));
}
else {
summaries.sort(Comparator.comparing(s -> Integer.valueOf(s.getHeight())));
}
return summaries;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -580,7 +702,8 @@ public class BlocksResource {
if (!Crypto.isValidAddress(address))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
return repository.getBlockRepository().getBlockSigners(addresses, limit, offset, reverse);
// This method pulls data from both Blocks and BlockArchive, so no need to query separately
return repository.getBlockArchiveRepository().getBlockSigners(addresses, limit, offset, reverse);
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
@@ -620,7 +743,76 @@ public class BlocksResource {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
try (final Repository repository = RepositoryManager.getRepository()) {
return repository.getBlockRepository().getBlockSummaries(startHeight, endHeight, count);
/*
* start end count result
* 10 40 null blocks 10 to 39 (excludes end block, ignore count)
*
* null null null blocks 1 to 50 (assume count=50, maybe start=1)
* 30 null null blocks 30 to 79 (assume count=50)
* 30 null 10 blocks 30 to 39
*
* null null 50 last 50 blocks? so if max(blocks.height) is 200, then blocks 151 to 200
* null 200 null blocks 150 to 199 (excludes end block, assume count=50)
* null 200 10 blocks 190 to 199 (excludes end block)
*/
List<BlockSummaryData> blockSummaries = new ArrayList<>();
// Use the latest X blocks if only a count is specified
if (startHeight == null && endHeight == null && count != null) {
BlockData chainTip = repository.getBlockRepository().getLastBlock();
startHeight = chainTip.getHeight() - count;
endHeight = chainTip.getHeight();
}
// ... otherwise default the start height to 1
if (startHeight == null && endHeight == null) {
startHeight = 1;
}
// Default the count to 50
if (count == null) {
count = 50;
}
// If both a start and end height exist, ignore the count
if (startHeight != null && endHeight != null) {
if (startHeight > 0 && endHeight > 0) {
count = Integer.MAX_VALUE;
}
}
// Derive start height from end height if missing
if (startHeight == null || startHeight == 0) {
if (endHeight != null && endHeight > 0) {
if (count != null) {
startHeight = endHeight - count;
}
}
}
for (/* count already set */; count > 0; --count, ++startHeight) {
if (endHeight != null && startHeight >= endHeight) {
break;
}
BlockData blockData = repository.getBlockRepository().fromHeight(startHeight);
if (blockData == null) {
// Not found - try the archive
blockData = repository.getBlockArchiveRepository().fromHeight(startHeight);
if (blockData == null) {
// Run out of blocks!
break;
}
}
if (blockData != null) {
BlockSummaryData blockSummaryData = new BlockSummaryData(blockData);
blockSummaries.add(blockSummaryData);
}
}
return blockSummaries;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}

View File

@@ -0,0 +1,92 @@
package org.qortal.api.resource;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.media.Content;
import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.api.ApiError;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.repository.Bootstrap;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.*;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import java.io.IOException;
@Path("/bootstrap")
@Tag(name = "Bootstrap")
public class BootstrapResource {
private static final Logger LOGGER = LogManager.getLogger(BootstrapResource.class);
@Context
HttpServletRequest request;
@POST
@Path("/create")
@Operation(
summary = "Create bootstrap",
description = "Builds a bootstrap file for distribution",
responses = {
@ApiResponse(
description = "path to file on success, an exception on failure",
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "string"))
)
}
)
public String createBootstrap() {
Security.checkApiCallAllowed(request);
try (final Repository repository = RepositoryManager.getRepository()) {
Bootstrap bootstrap = new Bootstrap(repository);
try {
bootstrap.checkRepositoryState();
} catch (DataException e) {
LOGGER.info("Not ready to create bootstrap: {}", e.getMessage());
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.REPOSITORY_ISSUE, e.getMessage());
}
bootstrap.validateBlockchain();
return bootstrap.create();
} catch (DataException | InterruptedException | IOException e) {
LOGGER.info("Unable to create bootstrap", e);
throw ApiExceptionFactory.INSTANCE.createCustomException(request, ApiError.REPOSITORY_ISSUE, e.getMessage());
}
}
@GET
@Path("/validate")
@Operation(
summary = "Validate blockchain",
description = "Useful to check database integrity prior to creating or after installing a bootstrap. " +
"This process is intensive and can take over an hour to run.",
responses = {
@ApiResponse(
description = "true if valid, false if invalid",
content = @Content(mediaType = MediaType.TEXT_PLAIN, schema = @Schema(type = "boolean"))
)
}
)
public boolean validateBootstrap() {
Security.checkApiCallAllowed(request);
try (final Repository repository = RepositoryManager.getRepository()) {
Bootstrap bootstrap = new Bootstrap(repository);
return bootstrap.validateCompleteBlockchain();
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE);
}
}
}

View File

@@ -4,10 +4,7 @@ import java.io.File;
import java.io.FileNotFoundException;
import java.io.InputStream;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.*;
import java.util.concurrent.locks.ReentrantLock;
import javax.xml.bind.JAXBContext;
@@ -27,11 +24,9 @@ import org.eclipse.persistence.jaxb.UnmarshallerProperties;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.network.Network;
import org.qortal.repository.BlockRepository;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.repository.*;
import org.qortal.settings.Settings;
import org.qortal.utils.Base58;
import org.qortal.utils.StringLongMapXmlAdapter;
/**
@@ -506,29 +501,105 @@ public class BlockChain {
* @throws SQLException
*/
public static void validate() throws DataException {
// Check first block is Genesis Block
if (!isGenesisBlockValid())
rebuildBlockchain();
boolean isTopOnly = Settings.getInstance().isTopOnly();
boolean archiveEnabled = Settings.getInstance().isArchiveEnabled();
boolean canBootstrap = Settings.getInstance().getBootstrap();
boolean needsArchiveRebuild = false;
BlockData chainTip;
try (final Repository repository = RepositoryManager.getRepository()) {
chainTip = repository.getBlockRepository().getLastBlock();
// Ensure archive is (at least partially) intact, and force a bootstrap if it isn't
if (!isTopOnly && archiveEnabled && canBootstrap) {
needsArchiveRebuild = (repository.getBlockArchiveRepository().fromHeight(2) == null);
if (needsArchiveRebuild) {
LOGGER.info("Couldn't retrieve block 2 from archive. Bootstrapping...");
// If there are minting accounts, make sure to back them up
// Don't backup if there are no minting accounts, as this can cause problems
if (!repository.getAccountRepository().getMintingAccounts().isEmpty()) {
Controller.getInstance().exportRepositoryData();
}
}
}
}
boolean hasBlocks = (chainTip != null && chainTip.getHeight() > 1);
if (isTopOnly && hasBlocks) {
// Top-only mode is enabled and we have blocks, so it's possible that the genesis block has been pruned
// It's best not to validate it, and there's no real need to
} else {
// Check first block is Genesis Block
if (!isGenesisBlockValid() || needsArchiveRebuild) {
try {
rebuildBlockchain();
} catch (InterruptedException e) {
throw new DataException(String.format("Interrupted when trying to rebuild blockchain: %s", e.getMessage()));
}
}
}
// We need to create a new connection, as the previous repository and its connections may have been
// closed by rebuildBlockchain() if a bootstrap was applied
try (final Repository repository = RepositoryManager.getRepository()) {
repository.checkConsistency();
int startHeight = Math.max(repository.getBlockRepository().getBlockchainHeight() - 1440, 1);
// Set the number of blocks to validate based on the pruned state of the chain
// If pruned, subtract an extra 10 to allow room for error
int blocksToValidate = (isTopOnly || archiveEnabled) ? Settings.getInstance().getPruneBlockLimit() - 10 : 1440;
int startHeight = Math.max(repository.getBlockRepository().getBlockchainHeight() - blocksToValidate, 1);
BlockData detachedBlockData = repository.getBlockRepository().getDetachedBlockSignature(startHeight);
if (detachedBlockData != null) {
LOGGER.error(String.format("Block %d's reference does not match any block's signature", detachedBlockData.getHeight()));
LOGGER.error(String.format("Block %d's reference does not match any block's signature",
detachedBlockData.getHeight()));
LOGGER.error(String.format("Your chain may be invalid and you should consider bootstrapping" +
" or re-syncing from genesis."));
}
}
}
// Wait for blockchain lock (whereas orphan() only tries to get lock)
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
blockchainLock.lock();
try {
LOGGER.info(String.format("Orphaning back to block %d", detachedBlockData.getHeight() - 1));
orphan(detachedBlockData.getHeight() - 1);
} finally {
blockchainLock.unlock();
/**
* More thorough blockchain validation method. Useful for validating bootstraps.
* A DataException is thrown if anything is invalid.
*
* @throws DataException
*/
public static void validateAllBlocks() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
BlockData chainTip = repository.getBlockRepository().getLastBlock();
final int chainTipHeight = chainTip.getHeight();
final int oldestBlock = 1; // TODO: increase if in pruning mode
byte[] lastReference = null;
for (int height = chainTipHeight; height > oldestBlock; height--) {
BlockData blockData = repository.getBlockRepository().fromHeight(height);
if (blockData == null) {
blockData = repository.getBlockArchiveRepository().fromHeight(height);
}
if (blockData == null) {
String error = String.format("Missing block at height %d", height);
LOGGER.error(error);
throw new DataException(error);
}
if (height != chainTipHeight) {
// Check reference
if (!Arrays.equals(blockData.getSignature(), lastReference)) {
String error = String.format("Invalid reference for block at height %d: %s (should be %s)",
height, Base58.encode(blockData.getReference()), Base58.encode(lastReference));
LOGGER.error(error);
throw new DataException(error);
}
}
lastReference = blockData.getReference();
}
}
}
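
The loop above enforces the chain-link invariant: each block's reference must equal the signature of the block directly below it. A minimal standalone sketch of that invariant, using made-up byte values (real signatures are much longer):

import java.util.Arrays;

public class ChainLinkExample {
    public static void main(String[] args) {
        // Hypothetical signatures for three consecutive blocks
        byte[] sigBlock1 = {1}, sigBlock2 = {2}, sigBlock3 = {3};

        byte[][] signatures = { sigBlock1, sigBlock2, sigBlock3 };
        // Each block's reference must equal the previous block's signature
        byte[][] references = { null, sigBlock1, sigBlock2 };

        // Walk downwards from the tip, as validateAllBlocks() does
        for (int i = signatures.length - 1; i > 0; i--) {
            boolean linked = Arrays.equals(references[i], signatures[i - 1]);
            System.out.println("Block " + (i + 1) + " correctly linked: " + linked);
        }
    }
}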
@@ -551,7 +622,15 @@ public class BlockChain {
}
}
private static void rebuildBlockchain() throws DataException {
private static void rebuildBlockchain() throws DataException, InterruptedException {
boolean shouldBootstrap = Settings.getInstance().getBootstrap();
if (shouldBootstrap) {
// Settings indicate that we should apply a bootstrap rather than rebuilding and syncing from genesis
Bootstrap bootstrap = new Bootstrap();
bootstrap.startImport();
return;
}
// (Re)build repository
if (!RepositoryManager.wasPristineAtOpen())
RepositoryManager.rebuild();

View File

@@ -14,6 +14,7 @@ import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeoutException;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -215,8 +216,17 @@ public class AutoUpdate extends Thread {
}
// Give repository a chance to backup in case things go badly wrong (if enabled)
if (Settings.getInstance().getRepositoryBackupInterval() > 0)
RepositoryManager.backup(true);
if (Settings.getInstance().getRepositoryBackupInterval() > 0) {
try {
// Timeout if the database isn't ready for backing up after 60 seconds
long timeout = 60 * 1000L;
RepositoryManager.backup(true, "backup", timeout);
} catch (TimeoutException e) {
LOGGER.info("Attempt to backup repository failed due to timeout: {}", e.getMessage());
// Continue with the auto update anyway...
}
}
// Call ApplyUpdate to end this process (unlocking current JAR so it can be replaced)
String javaHome = System.getProperty("java.home");

View File

@@ -442,7 +442,8 @@ public class BlockMinter extends Thread {
// Add to blockchain
newBlock.process();
LOGGER.info(String.format("Minted new test block: %d", newBlock.getBlockData().getHeight()));
LOGGER.info(String.format("Minted new test block: %d sig: %.8s",
newBlock.getBlockData().getHeight(), Base58.encode(newBlock.getBlockData().getSignature())));
repository.saveChanges();

View File

@@ -2,8 +2,11 @@ package org.qortal.controller;
import java.awt.TrayIcon.MessageType;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.SecureRandom;
import java.security.Security;
import java.time.LocalDateTime;
@@ -24,7 +27,7 @@ import java.util.Properties;
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;
@@ -46,6 +49,7 @@ import org.qortal.block.Block;
import org.qortal.block.BlockChain;
import org.qortal.block.BlockChain.BlockTimingByHeight;
import org.qortal.controller.Synchronizer.SynchronizationResult;
import org.qortal.controller.repository.PruneManager;
import org.qortal.controller.repository.NamesDatabaseIntegrityCheck;
import org.qortal.controller.tradebot.TradeBot;
import org.qortal.crypto.Crypto;
@@ -84,21 +88,14 @@ import org.qortal.network.message.OnlineAccountsMessage;
import org.qortal.network.message.SignaturesMessage;
import org.qortal.network.message.TransactionMessage;
import org.qortal.network.message.TransactionSignaturesMessage;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryFactory;
import org.qortal.repository.RepositoryManager;
import org.qortal.repository.*;
import org.qortal.repository.hsqldb.HSQLDBRepositoryFactory;
import org.qortal.settings.Settings;
import org.qortal.transaction.ArbitraryTransaction;
import org.qortal.transaction.Transaction;
import org.qortal.transaction.Transaction.TransactionType;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.utils.Base58;
import org.qortal.utils.ByteArray;
import org.qortal.utils.DaemonThreadFactory;
import org.qortal.utils.NTP;
import org.qortal.utils.Triple;
import org.qortal.utils.*;
import com.google.common.primitives.Longs;
@@ -158,6 +155,7 @@ public class Controller extends Thread {
};
private long repositoryBackupTimestamp = startTime; // ms
private long repositoryMaintenanceTimestamp = startTime; // ms
private long repositoryCheckpointTimestamp = startTime; // ms
private long ntpCheckTimestamp = startTime; // ms
private long deleteExpiredTimestamp = startTime + DELETE_EXPIRED_INTERVAL; // ms
@@ -315,6 +313,10 @@ public class Controller extends Thread {
return this.buildVersion;
}
public String getVersionStringWithoutPrefix() {
return this.buildVersion.replaceFirst(VERSION_PREFIX, "");
}
/** Returns current blockchain height, or 0 if it's not available. */
public int getChainHeight() {
synchronized (this.latestBlocks) {
@@ -358,7 +360,7 @@ public class Controller extends Thread {
return this.savedArgs;
}
/* package */ static boolean isStopping() {
public static boolean isStopping() {
return isStopping;
}
@@ -416,6 +418,12 @@ public class Controller extends Thread {
try {
RepositoryFactory repositoryFactory = new HSQLDBRepositoryFactory(getRepositoryUrl());
RepositoryManager.setRepositoryFactory(repositoryFactory);
RepositoryManager.setRequestedCheckpoint(Boolean.TRUE);
try (final Repository repository = RepositoryManager.getRepository()) {
RepositoryManager.archive(repository);
RepositoryManager.prune(repository);
}
} catch (DataException e) {
// If exception has no cause then repository is in use by some other process.
if (e.getCause() == null) {
@@ -446,6 +454,12 @@ public class Controller extends Thread {
return; // Not System.exit() so that GUI can display error
}
// Import current trade bot states and minting accounts if they exist
Controller.importRepositoryData();
// Add the initial peers to the repository if we don't have any
Controller.installInitialPeers();
LOGGER.info("Starting controller");
Controller.getInstance().start();
@@ -515,10 +529,10 @@ public class Controller extends Thread {
final long repositoryBackupInterval = Settings.getInstance().getRepositoryBackupInterval();
final long repositoryCheckpointInterval = Settings.getInstance().getRepositoryCheckpointInterval();
long repositoryMaintenanceInterval = getRandomRepositoryMaintenanceInterval();
ExecutorService trimExecutor = Executors.newCachedThreadPool(new DaemonThreadFactory());
trimExecutor.execute(new AtStatesTrimmer());
trimExecutor.execute(new OnlineAccountsSignaturesTrimmer());
// Start executor service for trimming or pruning
PruneManager.getInstance().start();
try {
while (!isStopping) {
@@ -576,7 +590,39 @@ public class Controller extends Thread {
Translator.INSTANCE.translate("SysTray", "CREATING_BACKUP_OF_DB_FILES"),
MessageType.INFO);
RepositoryManager.backup(true);
try {
// Timeout if the database isn't ready for backing up after 60 seconds
long timeout = 60 * 1000L;
RepositoryManager.backup(true, "backup", timeout);
} catch (TimeoutException e) {
LOGGER.info("Attempt to backup repository failed due to timeout: {}", e.getMessage());
}
}
// Give repository a chance to perform maintenance (if enabled)
if (repositoryMaintenanceInterval > 0 && now >= repositoryMaintenanceTimestamp + repositoryMaintenanceInterval) {
repositoryMaintenanceTimestamp = now + repositoryMaintenanceInterval;
if (Settings.getInstance().getShowMaintenanceNotification())
SysTray.getInstance().showMessage(Translator.INSTANCE.translate("SysTray", "DB_MAINTENANCE"),
Translator.INSTANCE.translate("SysTray", "PERFORMING_DB_MAINTENANCE"),
MessageType.INFO);
LOGGER.info("Starting scheduled repository maintenance. This can take a while...");
try (final Repository repository = RepositoryManager.getRepository()) {
// Timeout if the database isn't ready for maintenance after 60 seconds
long timeout = 60 * 1000L;
repository.performPeriodicMaintenance(timeout);
LOGGER.info("Scheduled repository maintenance completed");
} catch (DataException | TimeoutException e) {
LOGGER.error("Scheduled repository maintenance failed", e);
}
// Get a new random interval
repositoryMaintenanceInterval = getRandomRepositoryMaintenanceInterval();
}
// Prune stuck/slow/old peers
@@ -603,13 +649,68 @@ public class Controller extends Thread {
Thread.interrupted();
// Fall-through to exit
} finally {
trimExecutor.shutdownNow();
PruneManager.getInstance().stop();
}
}
/**
* Import current trade bot states and minting accounts.
* This is needed because the user may have bootstrapped, or there could be a database inconsistency
* if the core crashed when computing the nonce during the start of the trade process.
*/
private static void importRepositoryData() {
try (final Repository repository = RepositoryManager.getRepository()) {
String exportPath = Settings.getInstance().getExportPath();
try {
Path importPath = Paths.get(exportPath, "TradeBotStates.json");
repository.importDataFromFile(importPath.toString());
} catch (FileNotFoundException e) {
// Do nothing, as the files will only exist in certain cases
}
try {
trimExecutor.awaitTermination(2L, TimeUnit.SECONDS);
} catch (InterruptedException e) {
// We tried...
}
try {
Path importPath = Paths.get(exportPath, "MintingAccounts.json");
repository.importDataFromFile(importPath.toString());
} catch (FileNotFoundException e) {
// Do nothing, as the files will only exist in certain cases
}
repository.saveChanges();
}
catch (DataException | IOException e) {
LOGGER.info("Unable to import data into repository: {}", e.getMessage());
}
}
private static void installInitialPeers() {
try (final Repository repository = RepositoryManager.getRepository()) {
if (repository.getNetworkRepository().getAllPeers().isEmpty()) {
Network.installInitialPeers(repository);
}
} catch (DataException e) {
// Fail silently as this is an optional step
}
}
private long getRandomRepositoryMaintenanceInterval() {
final long minInterval = Settings.getInstance().getRepositoryMaintenanceMinInterval();
final long maxInterval = Settings.getInstance().getRepositoryMaintenanceMaxInterval();
if (maxInterval == 0) {
return 0;
}
// Mask the sign bit so the random offset is non-negative and the result stays within [min, max)
return ((new Random().nextLong() & Long.MAX_VALUE) % (maxInterval - minInterval)) + minInterval;
}
/**
* Export current trade bot states and minting accounts.
*/
public void exportRepositoryData() {
try (final Repository repository = RepositoryManager.getRepository()) {
repository.exportNodeLocalData();
} catch (DataException e) {
// Fail silently as this is an optional step
}
}
@@ -964,6 +1065,10 @@ public class Controller extends Thread {
}
}
// Export local data
LOGGER.info("Backing up local data");
this.exportRepositoryData();
LOGGER.info("Shutting down networking");
Network.getInstance().shutdown();
@@ -1292,6 +1397,34 @@ public class Controller extends Thread {
try (final Repository repository = RepositoryManager.getRepository()) {
BlockData blockData = repository.getBlockRepository().fromSignature(signature);
if (blockData != null) {
if (PruneManager.getInstance().isBlockPruned(blockData.getHeight())) {
// If this is a pruned block, we likely only have partial data, so best not to send it
blockData = null;
}
}
// If we have no block data, we should check the archive in case it's there
if (blockData == null) {
if (Settings.getInstance().isArchiveEnabled()) {
byte[] bytes = BlockArchiveReader.getInstance().fetchSerializedBlockBytesForSignature(signature, true, repository);
if (bytes != null) {
CachedBlockMessage blockMessage = new CachedBlockMessage(bytes);
blockMessage.setId(message.getId());
// This call also causes the other needed data to be pulled in from repository
if (!peer.sendMessage(blockMessage)) {
peer.disconnect("failed to send block");
// Don't fall-through to caching because failure to send might be from failure to build message
return;
}
// Sent successfully from archive, so nothing more to do
return;
}
}
}
if (blockData == null) {
// We don't have this block
this.stats.getBlockMessageStats.unknownBlocks.getAndIncrement();
@@ -1412,12 +1545,29 @@ public class Controller extends Thread {
int numberRequested = Math.min(Network.MAX_BLOCK_SUMMARIES_PER_REPLY, getBlockSummariesMessage.getNumberRequested());
BlockData blockData = repository.getBlockRepository().fromReference(parentSignature);
if (blockData == null) {
// Try the archive
blockData = repository.getBlockArchiveRepository().fromReference(parentSignature);
}
if (blockData != null) {
if (PruneManager.getInstance().isBlockPruned(blockData.getHeight())) {
// If this request contains a pruned block, we likely only have partial data, so best not to send anything
// We always prune from the oldest first, so it's fine to just check the first block requested
blockData = null;
}
}
while (blockData != null && blockSummaries.size() < numberRequested) {
BlockSummaryData blockSummary = new BlockSummaryData(blockData);
blockSummaries.add(blockSummary);
blockData = repository.getBlockRepository().fromReference(blockData.getSignature());
byte[] previousSignature = blockData.getSignature();
blockData = repository.getBlockRepository().fromReference(previousSignature);
if (blockData == null) {
// Try the archive
blockData = repository.getBlockArchiveRepository().fromReference(previousSignature);
}
}
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while sending block summaries after %s to peer %s", Base58.encode(parentSignature), peer), e);
@@ -1466,11 +1616,20 @@ public class Controller extends Thread {
try (final Repository repository = RepositoryManager.getRepository()) {
int numberRequested = getSignaturesMessage.getNumberRequested();
BlockData blockData = repository.getBlockRepository().fromReference(parentSignature);
if (blockData == null) {
// Try the archive
blockData = repository.getBlockArchiveRepository().fromReference(parentSignature);
}
while (blockData != null && signatures.size() < numberRequested) {
signatures.add(blockData.getSignature());
blockData = repository.getBlockRepository().fromReference(blockData.getSignature());
byte[] previousSignature = blockData.getSignature();
blockData = repository.getBlockRepository().fromReference(previousSignature);
if (blockData == null) {
// Try the archive
blockData = repository.getBlockArchiveRepository().fromReference(previousSignature);
}
}
} catch (DataException e) {
LOGGER.error(String.format("Repository issue while sending V2 signatures after %s to peer %s", Base58.encode(parentSignature), peer), e);

View File

@@ -0,0 +1,109 @@
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.utils.NTP;
public class AtStatesPruner implements Runnable {
private static final Logger LOGGER = LogManager.getLogger(AtStatesPruner.class);
@Override
public void run() {
Thread.currentThread().setName("AT States pruner");
boolean archiveMode = false;
if (!Settings.getInstance().isTopOnly()) {
// Top-only mode isn't enabled, but we might want to prune for the purposes of archiving
if (!Settings.getInstance().isArchiveEnabled()) {
// No pruning or archiving, so we must not prune anything
return;
}
else {
// We're allowed to prune blocks that have already been archived
archiveMode = true;
}
}
try (final Repository repository = RepositoryManager.getRepository()) {
int pruneStartHeight = repository.getATRepository().getAtPruneHeight();
repository.discardChanges();
repository.getATRepository().rebuildLatestAtStates();
while (!Controller.isStopping()) {
repository.discardChanges();
Thread.sleep(Settings.getInstance().getAtStatesPruneInterval());
BlockData chainTip = Controller.getInstance().getChainTip();
if (chainTip == null || NTP.getTime() == null)
continue;
// Don't even attempt if we're mid-sync as our repository requests will be delayed for ages
if (Controller.getInstance().isSynchronizing())
continue;
// Prune AT states for all blocks up until our latest minus pruneBlockLimit
final int ourLatestHeight = chainTip.getHeight();
int upperPrunableHeight = ourLatestHeight - Settings.getInstance().getPruneBlockLimit();
// In archive mode we are only allowed to trim blocks that have already been archived
if (archiveMode) {
upperPrunableHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight() - 1;
// TODO: validate that the actual archived data exists before pruning it?
}
int upperBatchHeight = pruneStartHeight + Settings.getInstance().getAtStatesPruneBatchSize();
int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
if (pruneStartHeight >= upperPruneHeight)
continue;
LOGGER.debug(String.format("Pruning AT states between blocks %d and %d...", pruneStartHeight, upperPruneHeight));
int numAtStatesPruned = repository.getATRepository().pruneAtStates(pruneStartHeight, upperPruneHeight);
repository.saveChanges();
int numAtStateDataRowsTrimmed = repository.getATRepository().trimAtStates(
pruneStartHeight, upperPruneHeight, Settings.getInstance().getAtStatesTrimLimit());
repository.saveChanges();
if (numAtStatesPruned > 0 || numAtStateDataRowsTrimmed > 0) {
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.debug(() -> String.format("Pruned %d AT state%s between blocks %d and %d",
numAtStatesPruned, (numAtStatesPruned != 1 ? "s" : ""),
finalPruneStartHeight, upperPruneHeight));
} else {
// Can we move onto next batch?
if (upperPrunableHeight > upperBatchHeight) {
pruneStartHeight = upperBatchHeight;
repository.getATRepository().setAtPruneHeight(pruneStartHeight);
repository.getATRepository().rebuildLatestAtStates();
repository.saveChanges();
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.debug(() -> String.format("Bumping AT state base prune height to %d", finalPruneStartHeight));
}
else {
// We've pruned up to the upper prunable height
// Back off for a while to save CPU for syncing
repository.discardChanges();
Thread.sleep(5*60*1000L);
}
}
}
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue trying to prune AT states: %s", e.getMessage()));
} catch (InterruptedException e) {
// Time to exit
}
}
}
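
The prune window above is plain min/max arithmetic. A standalone sketch with hypothetical values (chain tip, retention limit, and batch size are all made up) shows how a single pass selects its range:

public class PruneWindowExample {
    public static void main(String[] args) {
        // All values hypothetical
        int ourLatestHeight = 500_000;  // chain tip
        int pruneBlockLimit = 1_450;    // recent blocks to keep
        int pruneStartHeight = 10_000;  // where the last pass stopped
        int batchSize = 250;            // blocks per pass

        int upperPrunableHeight = ourLatestHeight - pruneBlockLimit;            // 498,550
        int upperBatchHeight = pruneStartHeight + batchSize;                    // 10,250
        int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight); // 10,250

        // The pruner only acts when the window is non-empty
        if (pruneStartHeight < upperPruneHeight) {
            System.out.printf("Prune between blocks %d and %d%n", pruneStartHeight, upperPruneHeight);
        }
    }
}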

View File

@@ -1,7 +1,8 @@
package org.qortal.controller;
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
@@ -20,8 +21,8 @@ public class AtStatesTrimmer implements Runnable {
try (final Repository repository = RepositoryManager.getRepository()) {
int trimStartHeight = repository.getATRepository().getAtTrimHeight();
repository.getATRepository().prepareForAtStateTrimming();
repository.saveChanges();
repository.discardChanges();
repository.getATRepository().rebuildLatestAtStates();
while (!Controller.isStopping()) {
repository.discardChanges();
@@ -62,7 +63,7 @@ public class AtStatesTrimmer implements Runnable {
if (upperTrimmableHeight > upperBatchHeight) {
trimStartHeight = upperBatchHeight;
repository.getATRepository().setAtTrimHeight(trimStartHeight);
repository.getATRepository().prepareForAtStateTrimming();
repository.getATRepository().rebuildLatestAtStates();
repository.saveChanges();
final int finalTrimStartHeight = trimStartHeight;

View File

@@ -0,0 +1,113 @@
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.*;
import org.qortal.settings.Settings;
import org.qortal.transform.TransformationException;
import org.qortal.utils.NTP;
import java.io.IOException;
public class BlockArchiver implements Runnable {
private static final Logger LOGGER = LogManager.getLogger(BlockArchiver.class);
private static final long INITIAL_SLEEP_PERIOD = 0L; // TODO: 5 * 60 * 1000L + 1234L; // ms
public void run() {
Thread.currentThread().setName("Block archiver");
if (!Settings.getInstance().isArchiveEnabled()) {
return;
}
try (final Repository repository = RepositoryManager.getRepository()) {
// Don't even start building until initial rush has ended
Thread.sleep(INITIAL_SLEEP_PERIOD);
int startHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight();
// Don't attempt to archive if we have no ATStatesHeightIndex, as it will be too slow
boolean hasAtStatesHeightIndex = repository.getATRepository().hasAtStatesHeightIndex();
if (!hasAtStatesHeightIndex) {
LOGGER.info("Unable to start block archiver due to missing ATStatesHeightIndex. Bootstrapping is recommended.");
repository.discardChanges();
return;
}
LOGGER.info("Starting block archiver...");
while (!Controller.isStopping()) {
repository.discardChanges();
Thread.sleep(Settings.getInstance().getArchiveInterval());
BlockData chainTip = Controller.getInstance().getChainTip();
if (chainTip == null || NTP.getTime() == null) {
continue;
}
// Don't even attempt if we're mid-sync as our repository requests will be delayed for ages
if (Controller.getInstance().isSynchronizing()) {
continue;
}
// Don't attempt to archive if we're not synced yet
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null || chainTip.getTimestamp() < minLatestBlockTimestamp) {
continue;
}
// Build cache of blocks
try {
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
BlockArchiveWriter writer = new BlockArchiveWriter(startHeight, maximumArchiveHeight, repository);
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
switch (result) {
case OK:
// Increment block archive height
startHeight += writer.getWrittenCount();
repository.getBlockArchiveRepository().setBlockArchiveHeight(startHeight);
repository.saveChanges();
break;
case STOPPING:
return;
case NOT_ENOUGH_BLOCKS:
// We've reached the limit of the blocks we can archive
// We didn't reach our file size target, so that must mean that we don't have enough blocks
// yet or something went wrong. Sleep for a while to allow more to become available, then try again.
repository.discardChanges();
Thread.sleep(60 * 60 * 1000L); // 1 hour
break;
case BLOCK_NOT_FOUND:
// We tried to archive a block that didn't exist. This is a major failure and likely means
// that a bootstrap or re-sync is needed. Try again every minute until then.
LOGGER.info("Error: block not found when building archive. If this error persists, " +
"a bootstrap or re-sync may be needed.");
repository.discardChanges();
Thread.sleep(60 * 1000L); // 1 minute
break;
}
} catch (IOException | TransformationException e) {
LOGGER.info("Caught exception when creating block cache", e);
}
}
} catch (DataException e) {
LOGGER.info("Caught exception when creating block cache", e);
} catch (InterruptedException e) {
// Do nothing
}
}
}

View File

@@ -0,0 +1,114 @@
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.utils.NTP;
public class BlockPruner implements Runnable {
private static final Logger LOGGER = LogManager.getLogger(BlockPruner.class);
@Override
public void run() {
Thread.currentThread().setName("Block pruner");
boolean archiveMode = false;
if (!Settings.getInstance().isTopOnly()) {
// Top-only mode isn't enabled, but we might want to prune for the purposes of archiving
if (!Settings.getInstance().isArchiveEnabled()) {
// No pruning or archiving, so we must not prune anything
return;
}
else {
// We're allowed to prune blocks that have already been archived
archiveMode = true;
}
}
try (final Repository repository = RepositoryManager.getRepository()) {
int pruneStartHeight = repository.getBlockRepository().getBlockPruneHeight();
// Don't attempt to prune if we have no ATStatesHeightIndex, as it will be too slow
boolean hasAtStatesHeightIndex = repository.getATRepository().hasAtStatesHeightIndex();
if (!hasAtStatesHeightIndex) {
LOGGER.info("Unable to start block pruner due to missing ATStatesHeightIndex. Bootstrapping is recommended.");
return;
}
while (!Controller.isStopping()) {
repository.discardChanges();
Thread.sleep(Settings.getInstance().getBlockPruneInterval());
BlockData chainTip = Controller.getInstance().getChainTip();
if (chainTip == null || NTP.getTime() == null)
continue;
// Don't even attempt if we're mid-sync as our repository requests will be delayed for ages
if (Controller.getInstance().isSynchronizing()) {
continue;
}
// Don't attempt to prune if we're not synced yet
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null || chainTip.getTimestamp() < minLatestBlockTimestamp) {
continue;
}
// Prune all blocks up until our latest minus pruneBlockLimit
final int ourLatestHeight = chainTip.getHeight();
int upperPrunableHeight = ourLatestHeight - Settings.getInstance().getPruneBlockLimit();
// In archive mode we are only allowed to trim blocks that have already been archived
if (archiveMode) {
upperPrunableHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight() - 1;
}
int upperBatchHeight = pruneStartHeight + Settings.getInstance().getBlockPruneBatchSize();
int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
if (pruneStartHeight >= upperPruneHeight) {
continue;
}
LOGGER.debug(String.format("Pruning blocks between %d and %d...", pruneStartHeight, upperPruneHeight));
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(pruneStartHeight, upperPruneHeight);
repository.saveChanges();
if (numBlocksPruned > 0) {
LOGGER.debug(String.format("Pruned %d block%s between %d and %d",
numBlocksPruned, (numBlocksPruned != 1 ? "s" : ""),
pruneStartHeight, upperPruneHeight));
} else {
final int nextPruneHeight = upperPruneHeight + 1;
repository.getBlockRepository().setBlockPruneHeight(nextPruneHeight);
repository.saveChanges();
LOGGER.debug(String.format("Bumping block base prune height to %d", pruneStartHeight));
// Can we move onto next batch?
if (upperPrunableHeight > nextPruneHeight) {
pruneStartHeight = nextPruneHeight;
}
else {
// We've pruned up to the upper prunable height
// Back off for a while to save CPU for syncing
repository.discardChanges();
Thread.sleep(10*60*1000L);
}
}
}
} catch (DataException e) {
LOGGER.warn(String.format("Repository issue trying to prune blocks: %s", e.getMessage()));
} catch (InterruptedException e) {
// Time to exit
}
}
}

View File

@@ -1,8 +1,9 @@
package org.qortal.controller;
package org.qortal.controller.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.block.BlockChain;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;

View File

@@ -0,0 +1,160 @@
package org.qortal.controller.repository;
import org.apache.commons.io.FileUtils;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.settings.Settings;
import org.qortal.utils.DaemonThreadFactory;
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
public class PruneManager {
private static final Logger LOGGER = LogManager.getLogger(PruneManager.class);
private static PruneManager instance;
private boolean isTopOnly = Settings.getInstance().isTopOnly();
private int pruneBlockLimit = Settings.getInstance().getPruneBlockLimit();
private ExecutorService executorService;
private PruneManager() {
}
public static synchronized PruneManager getInstance() {
if (instance == null)
instance = new PruneManager();
return instance;
}
public void start() {
this.executorService = Executors.newCachedThreadPool(new DaemonThreadFactory());
if (Settings.getInstance().isTopOnly()) {
// Top-only-sync
this.startTopOnlySyncMode();
}
else if (Settings.getInstance().isArchiveEnabled()) {
// Full node with block archive
this.startFullNodeWithBlockArchive();
}
else {
// Full node with full SQL support
this.startFullSQLNode();
}
}
/**
* Top-only-sync
* In this mode, we delete (prune) all blocks except
* a small number of recent ones. There is no need for
* trimming or archiving, because all relevant blocks
* are deleted.
*/
private void startTopOnlySyncMode() {
this.startPruning();
// We don't need the block archive in top-only mode
this.deleteArchive();
}
/**
* Full node with block archive
* In this mode we archive trimmed blocks, and then
* prune archived blocks to keep the database small
*/
private void startFullNodeWithBlockArchive() {
this.startTrimming();
this.startArchiving();
this.startPruning();
}
/**
* Full node with full SQL support
* In this mode we trim the database but don't prune
* or archive any data, because we want to maintain
* full SQL support of old blocks. This mode will not
* be actively maintained but can be used by those who
* need to perform SQL analysis on older blocks.
*/
private void startFullSQLNode() {
this.startTrimming();
}
private void startPruning() {
this.executorService.execute(new AtStatesPruner());
this.executorService.execute(new BlockPruner());
}
private void startTrimming() {
this.executorService.execute(new AtStatesTrimmer());
this.executorService.execute(new OnlineAccountsSignaturesTrimmer());
}
private void startArchiving() {
this.executorService.execute(new BlockArchiver());
}
private void deleteArchive() {
if (!Settings.getInstance().isTopOnly()) {
LOGGER.error("Refusing to delete archive when not in top-only mode");
return;
}
try {
Path archivePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive");
if (archivePath.toFile().exists()) {
LOGGER.info("Deleting block archive because we are in top-only mode...");
FileUtils.deleteDirectory(archivePath.toFile());
}
} catch (IOException e) {
LOGGER.info("Couldn't delete archive: {}", e.getMessage());
}
}
public void stop() {
this.executorService.shutdownNow();
try {
this.executorService.awaitTermination(2L, TimeUnit.SECONDS);
} catch (InterruptedException e) {
// We tried...
}
}
public boolean isBlockPruned(int height) throws DataException {
if (!this.isTopOnly) {
return false;
}
BlockData chainTip = Controller.getInstance().getChainTip();
if (chainTip == null) {
throw new DataException("Unable to determine chain tip when checking if a block is pruned");
}
if (height == 1) {
// We don't prune the genesis block
return false;
}
final int ourLatestHeight = chainTip.getHeight();
final int latestUnprunedHeight = ourLatestHeight - this.pruneBlockLimit;
return (height < latestUnprunedHeight);
}
}
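
isBlockPruned() reduces to height arithmetic. With hypothetical numbers:

public class IsBlockPrunedExample {
    public static void main(String[] args) {
        // All values hypothetical
        int ourLatestHeight = 500_000;  // chain tip
        int pruneBlockLimit = 1_450;    // top-only retention window

        int latestUnprunedHeight = ourLatestHeight - pruneBlockLimit; // 498,550

        System.out.println(498_000 < latestUnprunedHeight); // true: block 498,000 is pruned
        System.out.println(499_000 < latestUnprunedHeight); // false: block 499,000 is still held
    }
}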

View File

@@ -245,17 +245,17 @@ public class TradeBot implements Listener {
}
}
/*package*/ static byte[] generateTradePrivateKey() {
public static byte[] generateTradePrivateKey() {
// The private key is used for both Curve25519 and secp256k1 so needs to be valid for both.
// Curve25519 accepts any seed, so generate a valid secp256k1 key and use that.
return new ECKey().getPrivKeyBytes();
}
/*package*/ static byte[] deriveTradeNativePublicKey(byte[] privateKey) {
public static byte[] deriveTradeNativePublicKey(byte[] privateKey) {
return PrivateKeyAccount.toPublicKey(privateKey);
}
/*package*/ static byte[] deriveTradeForeignPublicKey(byte[] privateKey) {
public static byte[] deriveTradeForeignPublicKey(byte[] privateKey) {
return ECKey.fromPrivate(privateKey).getPubKey();
}

View File

@@ -1,5 +1,8 @@
package org.qortal.crypto;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
@@ -75,12 +78,74 @@ public abstract class Crypto {
return digest(digest(input));
}
/**
* Returns 32-byte SHA-256 digest of the file passed in, reading in 8 KiB chunks.
*
* @param file
* the file to digest
* @return byte[32] digest
*
* @throws IOException if the file cannot be read
*/
public static byte[] digest(File file) throws IOException {
return Crypto.digest(file, 8192);
}
/**
* Returns the SHA-256 digest of the file passed in, as a hexadecimal string.
*
* @param file
* the file to digest
* @param bufferSize
* the number of bytes to read into memory at a time
* @return the digest as a lowercase hexadecimal string
*
* @throws IOException if the file cannot be read
*/
public static String digestHexString(File file, int bufferSize) throws IOException {
byte[] digest = Crypto.digest(file, bufferSize);
// Convert to hex
StringBuilder stringBuilder = new StringBuilder();
for (byte b : digest) {
stringBuilder.append(String.format("%02x", b));
}
return stringBuilder.toString();
}
/**
* Returns 32-byte SHA-256 digest of the file passed in.
*
* @param file
* the file to digest
* @param bufferSize
* the number of bytes to read into memory at a time
* @return byte[32] digest
*
* @throws IOException if the file cannot be read
*/
public static byte[] digest(File file, int bufferSize) throws IOException {
try {
MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
// try-with-resources ensures the stream is closed even if a read fails part-way
try (FileInputStream fileInputStream = new FileInputStream(file)) {
byte[] bytes = new byte[bufferSize];
int count;
while ((count = fileInputStream.read(bytes)) != -1) {
sha256.update(bytes, 0, count);
}
}
return sha256.digest();
} catch (NoSuchAlgorithmException e) {
throw new RuntimeException("SHA-256 message digest not available");
}
}
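
For illustration, computing a hex digest with the chunked reader above might look like this (a sketch; the file path is hypothetical):

import java.io.File;
import java.io.IOException;
import org.qortal.crypto.Crypto;

public class FileDigestExample {
    public static void main(String[] args) throws IOException {
        File file = new File("bootstrap.7z"); // hypothetical file
        // An 8 KiB buffer keeps memory usage flat regardless of file size
        String sha256Hex = Crypto.digestHexString(file, 8192);
        System.out.println(sha256Hex);
    }
}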
/**
* Returns 64-byte duplicated digest of message passed in input.
* <p>
* Effectively <tt>Bytes.concat(digest(input), digest(input)).
*
* @param addressVersion
*
* @param input
*/
public static byte[] dupDigest(byte[] input) {

View File

@@ -4,10 +4,12 @@ import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlTransient;
import org.json.JSONObject;
import org.qortal.crypto.Crypto;
import io.swagger.v3.oas.annotations.media.Schema;
import io.swagger.v3.oas.annotations.media.Schema.AccessMode;
import org.qortal.utils.Base58;
// All properties to be converted to JSON via JAXB
@XmlAccessorType(XmlAccessType.FIELD)
@@ -61,4 +63,21 @@ public class MintingAccountData {
return this.publicKey;
}
// JSON
public JSONObject toJson() {
JSONObject jsonObject = new JSONObject();
jsonObject.put("privateKey", Base58.encode(this.getPrivateKey()));
jsonObject.put("publicKey", Base58.encode(this.getPublicKey()));
return jsonObject;
}
public static MintingAccountData fromJson(JSONObject json) {
return new MintingAccountData(
json.isNull("privateKey") ? null : Base58.decode(json.getString("privateKey")),
json.isNull("publicKey") ? null : Base58.decode(json.getString("publicKey"))
);
}
}
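
A round trip through the two helpers above might look like this (a sketch; the key bytes are made up, and the constructor signature and package name are inferred from fromJson() rather than shown in this diff):

import org.json.JSONObject;
import org.qortal.data.account.MintingAccountData;

public class MintingAccountJsonExample {
    public static void main(String[] args) {
        // Made-up 32-byte keys, purely for illustration
        byte[] privateKey = new byte[32];
        byte[] publicKey = new byte[32];

        MintingAccountData original = new MintingAccountData(privateKey, publicKey);

        // Keys are stored as Base58 strings in the JSON
        JSONObject json = original.toJson();

        // ...written to MintingAccounts.json on export, read back on import...
        MintingAccountData restored = MintingAccountData.fromJson(json);
    }
}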

View File

@@ -0,0 +1,47 @@
package org.qortal.data.block;
import org.qortal.block.Block;
public class BlockArchiveData {
// Properties
private byte[] signature;
private Integer height;
private Long timestamp;
private byte[] minterPublicKey;
// Constructors
public BlockArchiveData(byte[] signature, Integer height, long timestamp, byte[] minterPublicKey) {
this.signature = signature;
this.height = height;
this.timestamp = timestamp;
this.minterPublicKey = minterPublicKey;
}
public BlockArchiveData(BlockData blockData) {
this.signature = blockData.getSignature();
this.height = blockData.getHeight();
this.timestamp = blockData.getTimestamp();
this.minterPublicKey = blockData.getMinterPublicKey();
}
// Getters/setters
public byte[] getSignature() {
return this.signature;
}
public Integer getHeight() {
return this.height;
}
public Long getTimestamp() {
return this.timestamp;
}
public byte[] getMinterPublicKey() {
return this.minterPublicKey;
}
}

View File

@@ -6,9 +6,11 @@ import java.util.List;
import java.awt.image.BufferedImage;
import javax.swing.*;
import javax.swing.border.EmptyBorder;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
public class SplashFrame {
@@ -16,6 +18,7 @@ public class SplashFrame {
private static SplashFrame instance;
private JFrame splashDialog;
private SplashPanel splashPanel;
@SuppressWarnings("serial")
public static class SplashPanel extends JPanel {
@@ -23,26 +26,53 @@ public class SplashFrame {
private String defaultSplash = "Qlogo_512.png";
private JLabel statusLabel;
public SplashPanel() {
image = Gui.loadImage(defaultSplash);
setOpaque(false);
setLayout(new GridBagLayout());
}
setOpaque(true);
setLayout(new BoxLayout(this, BoxLayout.Y_AXIS));
setBorder(new EmptyBorder(10, 10, 10, 10));
setBackground(Color.BLACK);
@Override
protected void paintComponent(Graphics g) {
super.paintComponent(g);
g.drawImage(image, 0, 0, getWidth(), getHeight(), this);
// Add logo
JLabel imageLabel = new JLabel(new ImageIcon(image));
imageLabel.setSize(new Dimension(300, 300));
add(imageLabel);
// Add spacing
add(Box.createRigidArea(new Dimension(0, 16)));
// Add status label
String text = String.format("Starting Qortal Core v%s...", Controller.getInstance().getVersionStringWithoutPrefix());
statusLabel = new JLabel(text, JLabel.CENTER);
statusLabel.setMaximumSize(new Dimension(500, 50));
statusLabel.setFont(new Font("Verdana", Font.PLAIN, 20));
statusLabel.setBackground(Color.BLACK);
statusLabel.setForeground(new Color(255, 255, 255, 255));
statusLabel.setOpaque(true);
statusLabel.setBorder(null);
add(statusLabel);
}
@Override
public Dimension getPreferredSize() {
return new Dimension(500, 500);
return new Dimension(500, 580);
}
public void updateStatus(String text) {
if (statusLabel != null) {
statusLabel.setText(text);
}
}
}
private SplashFrame() {
if (GraphicsEnvironment.isHeadless()) {
return;
}
this.splashDialog = new JFrame();
List<Image> icons = new ArrayList<>();
@@ -55,12 +85,13 @@ public class SplashFrame {
icons.add(Gui.loadImage("icons/Qlogo_128.png"));
this.splashDialog.setIconImages(icons);
this.splashDialog.getContentPane().add(new SplashPanel());
this.splashPanel = new SplashPanel();
this.splashDialog.getContentPane().add(this.splashPanel);
this.splashDialog.setDefaultCloseOperation(JFrame.DO_NOTHING_ON_CLOSE);
this.splashDialog.setUndecorated(true);
this.splashDialog.pack();
this.splashDialog.setLocationRelativeTo(null);
this.splashDialog.setBackground(new Color(0,0,0,0));
this.splashDialog.setBackground(Color.BLACK);
this.splashDialog.setVisible(true);
}
@@ -79,4 +110,10 @@ public class SplashFrame {
this.splashDialog.dispose();
}
public void updateStatus(String text) {
if (this.splashPanel != null) {
this.splashPanel.updateStatus(text);
}
}
}

View File

@@ -155,9 +155,23 @@ public class Network {
}
// Load all known peers from repository
try (Repository repository = RepositoryManager.getRepository()) {
synchronized (this.allKnownPeers) {
this.allKnownPeers.addAll(repository.getNetworkRepository().getAllPeers());
synchronized (this.allKnownPeers) {
List<String> fixedNetwork = Settings.getInstance().getFixedNetwork();
if (fixedNetwork != null && !fixedNetwork.isEmpty()) {
Long addedWhen = NTP.getTime();
String addedBy = "fixedNetwork";
List<PeerAddress> peerAddresses = new ArrayList<>();
for (String address : fixedNetwork) {
PeerAddress peerAddress = PeerAddress.fromString(address);
peerAddresses.add(peerAddress);
}
List<PeerData> peers = peerAddresses.stream()
.map(peerAddress -> new PeerData(peerAddress, addedWhen, addedBy))
.collect(Collectors.toList());
this.allKnownPeers.addAll(peers);
} else {
try (Repository repository = RepositoryManager.getRepository()) {
this.allKnownPeers.addAll(repository.getNetworkRepository().getAllPeers());
}
}
}
@@ -513,14 +527,24 @@ public class Network {
if (socketChannel == null) {
return;
}
PeerAddress address = PeerAddress.fromSocket(socketChannel.socket());
List<String> fixedNetwork = Settings.getInstance().getFixedNetwork();
if (fixedNetwork != null && !fixedNetwork.isEmpty() && ipNotInFixedList(address, fixedNetwork)) {
try {
LOGGER.debug("Connection discarded from peer {} as not in the fixed network list", address);
socketChannel.close();
} catch (IOException e) {
// IGNORE
}
return;
}
final Long now = NTP.getTime();
Peer newPeer;
try {
if (now == null) {
LOGGER.debug("Connection discarded from peer {} due to lack of NTP sync",
PeerAddress.fromSocket(socketChannel.socket()));
LOGGER.debug("Connection discarded from peer {} due to lack of NTP sync", address);
socketChannel.close();
return;
}
@@ -528,12 +552,12 @@ public class Network {
synchronized (this.connectedPeers) {
if (connectedPeers.size() >= maxPeers) {
// We have enough peers
LOGGER.debug("Connection discarded from peer {}", PeerAddress.fromSocket(socketChannel.socket()));
LOGGER.debug("Connection discarded from peer {} because the server is full", address);
socketChannel.close();
return;
}
LOGGER.debug("Connection accepted from peer {}", PeerAddress.fromSocket(socketChannel.socket()));
LOGGER.debug("Connection accepted from peer {}", address);
newPeer = new Peer(socketChannel, channelSelector);
this.connectedPeers.add(newPeer);
@@ -541,6 +565,7 @@ public class Network {
} catch (IOException e) {
if (socketChannel.isOpen()) {
try {
LOGGER.debug("Connection failed from peer {} while connecting/closing", address);
socketChannel.close();
} catch (IOException ce) {
// Couldn't close?
@@ -552,6 +577,16 @@ public class Network {
this.onPeerReady(newPeer);
}
private boolean ipNotInFixedList(PeerAddress address, List<String> fixedNetwork) {
for (String ipAddress : fixedNetwork) {
String[] bits = ipAddress.split(":");
if (bits.length >= 1 && bits.length <= 2 && address.getHost().equals(bits[0])) {
return false;
}
}
return true;
}
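
The match is host-only: any port in a fixedNetwork entry is ignored. A self-contained sketch of the same rule (class name and addresses are hypothetical):

import java.util.List;

public class FixedNetworkMatchExample {
    // Mirrors ipNotInFixedList(): true when the peer's host is absent from the list
    static boolean hostNotInList(String host, List<String> fixedNetwork) {
        for (String entry : fixedNetwork) {
            String[] bits = entry.split(":");
            if (bits.length >= 1 && bits.length <= 2 && host.equals(bits[0])) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> fixedNetwork = List.of("203.0.113.5:12392", "198.51.100.7");
        System.out.println(hostNotInList("203.0.113.5", fixedNetwork)); // false: connection allowed
        System.out.println(hostNotInList("192.0.2.1", fixedNetwork));   // true: connection discarded
    }
}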
private Peer getConnectablePeer(final Long now) throws InterruptedException {
// We can't block here so use tryRepository(). We don't NEED to connect a new peer.
try (Repository repository = RepositoryManager.tryRepository()) {
@@ -653,7 +688,7 @@ public class Network {
if (peersToDisconnect != null && peersToDisconnect.size() > 0) {
for (Peer peer : peersToDisconnect) {
LOGGER.info("Forcing disconnection of peer {} because connection age ({} ms) " +
LOGGER.debug("Forcing disconnection of peer {} because connection age ({} ms) " +
"has reached the maximum ({} ms)", peer, peer.getConnectionAge(), peer.getMaxConnectionAge());
peer.disconnect("Connection age too old");
}
@@ -1145,6 +1180,10 @@ public class Network {
private boolean mergePeers(Repository repository, String addedBy, long addedWhen, List<PeerAddress> peerAddresses)
throws DataException {
List<String> fixedNetwork = Settings.getInstance().getFixedNetwork();
if (fixedNetwork != null && !fixedNetwork.isEmpty()) {
return false;
}
List<PeerData> newPeers;
synchronized (this.allKnownPeers) {
for (PeerData knownPeerData : this.allKnownPeers) {

View File

@@ -23,7 +23,7 @@ public class CachedBlockMessage extends Message {
this.block = block;
}
private CachedBlockMessage(byte[] cachedBytes) {
public CachedBlockMessage(byte[] cachedBytes) {
super(MessageType.BLOCK);
this.block = null;

View File

@@ -1,5 +1,7 @@
package org.qortal.repository;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import java.util.Set;
@@ -112,6 +114,14 @@ public interface ATRepository {
*/
public List<ATStateData> getBlockATStatesAtHeight(int height) throws DataException;
/** Rebuild the latest AT states cache, necessary for AT state trimming/pruning.
* <p>
* NOTE: performs implicit <tt>repository.saveChanges()</tt>.
*/
public void rebuildLatestAtStates() throws DataException;
/** Returns height of first trimmable AT state. */
public int getAtTrimHeight() throws DataException;
@@ -121,12 +131,27 @@ public interface ATRepository {
*/
public void setAtTrimHeight(int trimHeight) throws DataException;
/** Hook to allow repository to prepare/cache info for AT state trimming. */
public void prepareForAtStateTrimming() throws DataException;
/** Trims full AT state data between passed heights. Returns number of trimmed rows. */
public int trimAtStates(int minHeight, int maxHeight, int limit) throws DataException;
/** Returns height of first prunable AT state. */
public int getAtPruneHeight() throws DataException;
/** Sets new base height for AT state pruning.
* <p>
* NOTE: performs implicit <tt>repository.saveChanges()</tt>.
*/
public void setAtPruneHeight(int pruneHeight) throws DataException;
/** Prunes full AT state data between passed heights. Returns number of pruned rows. */
public int pruneAtStates(int minHeight, int maxHeight) throws DataException;
/** Checks for the presence of the ATStatesHeightIndex in repository */
public boolean hasAtStatesHeightIndex() throws DataException;
/**
* Save ATStateData into repository.
* <p>

View File

@@ -191,6 +191,8 @@ public interface AccountRepository {
public List<MintingAccountData> getMintingAccounts() throws DataException;
public MintingAccountData getMintingAccount(byte[] mintingAccountKey) throws DataException;
public void save(MintingAccountData mintingAccountData) throws DataException;
/** Delete minting account info, used by BlockMinter, from repository using passed public or private key. */

View File

@@ -0,0 +1,284 @@
package org.qortal.repository;
import com.google.common.primitives.Ints;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.data.at.ATStateData;
import org.qortal.data.block.BlockArchiveData;
import org.qortal.data.block.BlockData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.settings.Settings;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformer;
import org.qortal.utils.Triple;
import static org.qortal.transform.Transformer.INT_LENGTH;
import java.io.*;
import java.nio.ByteBuffer;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.*;
public class BlockArchiveReader {
private static BlockArchiveReader instance;
private Map<String, Triple<Integer, Integer, Integer>> fileListCache = Collections.synchronizedMap(new HashMap<>());
private static final Logger LOGGER = LogManager.getLogger(BlockArchiveReader.class);
public BlockArchiveReader() {
}
public static synchronized BlockArchiveReader getInstance() {
if (instance == null) {
instance = new BlockArchiveReader();
}
return instance;
}
private void fetchFileList() {
Path archivePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive").toAbsolutePath();
File archiveDirFile = archivePath.toFile();
String[] files = archiveDirFile.list();
Map<String, Triple<Integer, Integer, Integer>> map = new HashMap<>();
if (files != null) {
for (String file : files) {
Path filePath = Paths.get(file);
String filename = filePath.getFileName().toString();
// Parse the filename
if (filename == null || !filename.contains("-") || !filename.contains(".")) {
// Not a usable file
continue;
}
// Remove the extension and split into two parts
String[] parts = filename.substring(0, filename.lastIndexOf('.')).split("-");
Integer startHeight = Integer.parseInt(parts[0]);
Integer endHeight = Integer.parseInt(parts[1]);
Integer range = endHeight - startHeight;
map.put(filename, new Triple<>(startHeight, endHeight, range));
}
}
this.fileListCache = map;
}
public Triple<BlockData, List<TransactionData>, List<ATStateData>> fetchBlockAtHeight(int height) {
if (this.fileListCache.isEmpty()) {
this.fetchFileList();
}
byte[] serializedBytes = this.fetchSerializedBlockBytesForHeight(height);
if (serializedBytes == null) {
return null;
}
ByteBuffer byteBuffer = ByteBuffer.wrap(serializedBytes);
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = null;
try {
blockInfo = BlockTransformer.fromByteBuffer(byteBuffer);
if (blockInfo != null && blockInfo.getA() != null) {
// Block height is stored outside of the main serialized bytes, so it
// won't be set automatically.
blockInfo.getA().setHeight(height);
}
} catch (TransformationException e) {
return null;
}
return blockInfo;
}
public Triple<BlockData, List<TransactionData>, List<ATStateData>> fetchBlockWithSignature(
byte[] signature, Repository repository) {
if (this.fileListCache.isEmpty()) {
this.fetchFileList();
}
Integer height = this.fetchHeightForSignature(signature, repository);
if (height != null) {
return this.fetchBlockAtHeight(height);
}
return null;
}
public List<Triple<BlockData, List<TransactionData>, List<ATStateData>>> fetchBlocksFromRange(
int startHeight, int endHeight) {
List<Triple<BlockData, List<TransactionData>, List<ATStateData>>> blockInfoList = new ArrayList<>();
for (int height = startHeight; height <= endHeight; height++) {
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = this.fetchBlockAtHeight(height);
if (blockInfo == null) {
return blockInfoList;
}
blockInfoList.add(blockInfo);
}
return blockInfoList;
}
public Integer fetchHeightForSignature(byte[] signature, Repository repository) {
// Lookup the height for the requested signature
try {
BlockArchiveData archivedBlock = repository.getBlockArchiveRepository().getBlockArchiveDataForSignature(signature);
if (archivedBlock == null) {
return null;
}
return archivedBlock.getHeight();
} catch (DataException e) {
return null;
}
}
public int fetchHeightForTimestamp(long timestamp, Repository repository) {
// Lookup the height for the requested timestamp
try {
return repository.getBlockArchiveRepository().getHeightFromTimestamp(timestamp);
} catch (DataException e) {
return 0;
}
}
private String getFilenameForHeight(int height) {
Iterator<Map.Entry<String, Triple<Integer, Integer, Integer>>> it = this.fileListCache.entrySet().iterator();
while (it.hasNext()) {
Map.Entry<String, Triple<Integer, Integer, Integer>> pair = it.next();
// Use || here: with &&, a null entry would throw a NullPointerException before the check could skip it
if (pair == null || pair.getKey() == null || pair.getValue() == null) {
continue;
}
Triple<Integer, Integer, Integer> heightInfo = pair.getValue();
Integer startHeight = heightInfo.getA();
Integer endHeight = heightInfo.getB();
if (height >= startHeight && height <= endHeight) {
// Found the correct file
String filename = pair.getKey();
return filename;
}
}
return null;
}
public byte[] fetchSerializedBlockBytesForSignature(byte[] signature, boolean includeHeightPrefix, Repository repository) {
if (this.fileListCache.isEmpty()) {
this.fetchFileList();
}
Integer height = this.fetchHeightForSignature(signature, repository);
if (height != null) {
byte[] blockBytes = this.fetchSerializedBlockBytesForHeight(height);
if (blockBytes == null) {
return null;
}
// When responding to a peer with a BLOCK message, we must prefix the byte array with the block height
// This mimics the toData() method in BlockMessage and CachedBlockMessage
if (includeHeightPrefix) {
ByteArrayOutputStream bytes = new ByteArrayOutputStream(blockBytes.length + INT_LENGTH);
try {
bytes.write(Ints.toByteArray(height));
bytes.write(blockBytes);
return bytes.toByteArray();
} catch (IOException e) {
return null;
}
}
return blockBytes;
}
return null;
}
public byte[] fetchSerializedBlockBytesForHeight(int height) {
String filename = this.getFilenameForHeight(height);
if (filename == null) {
// We don't have this block in the archive
// Invalidate the file list cache in case it is out of date
this.invalidateFileListCache();
return null;
}
Path filePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive", filename).toAbsolutePath();
RandomAccessFile file = null;
try {
file = new RandomAccessFile(filePath.toString(), "r");
// Get info about this file (the "fixed length header")
final int version = file.readInt(); // Do not remove or comment out, as it advances the file pointer
final int startHeight = file.readInt(); // Do not remove or comment out, as it advances the file pointer
final int endHeight = file.readInt(); // Do not remove or comment out, as it advances the file pointer
file.readInt(); // Block count (unused), read only to advance the file pointer
final int variableHeaderLength = file.readInt(); // Do not remove or comment out, as it advances the file pointer
final int fixedHeaderLength = (int)file.getFilePointer();
// End of fixed length header
// Make sure the version is one we recognize
if (version != 1) {
LOGGER.info("Error: unknown version in file {}: {}", filename, version);
return null;
}
// Verify that the block is within the reported range
if (height < startHeight || height > endHeight) {
LOGGER.info("Error: requested height {} but the range of file {} is {}-{}",
height, filename, startHeight, endHeight);
return null;
}
// Seek to the location of the block index in the variable length header
final int locationOfBlockIndexInVariableHeaderSegment = (height - startHeight) * INT_LENGTH;
file.seek(fixedHeaderLength + locationOfBlockIndexInVariableHeaderSegment);
// Read the value to obtain the index of this block in the data segment
int locationOfBlockInDataSegment = file.readInt();
// Now seek to the block data itself
int dataSegmentStartIndex = fixedHeaderLength + variableHeaderLength + INT_LENGTH; // extra INT_LENGTH skips the data segment length field
file.seek(dataSegmentStartIndex + locationOfBlockInDataSegment);
// Read the block metadata
int blockHeight = file.readInt(); // Do not remove or comment out, as it advances the file pointer
int blockLength = file.readInt(); // Do not remove or comment out, as it advances the file pointer
// Ensure the block height matches the one requested
if (blockHeight != height) {
LOGGER.info("Error: height {} does not match requested: {}", blockHeight, height);
return null;
}
// Now retrieve the block's serialized bytes
byte[] blockBytes = new byte[blockLength];
file.readFully(blockBytes); // readFully() guarantees the whole block is read; read() may return short
return blockBytes;
} catch (FileNotFoundException e) {
LOGGER.info("File {} not found: {}", filename, e.getMessage());
return null;
} catch (IOException e) {
LOGGER.info("Unable to read block {} from archive: {}", height, e.getMessage());
return null;
}
finally {
// Close the file
if (file != null) {
try {
file.close();
} catch (IOException e) {
// Failed to close, but no need to handle this
}
}
}
}
public void invalidateFileListCache() {
this.fileListCache.clear();
}
}
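
For orientation, a minimal usage sketch of the reader above, assuming an open Repository and the singleton accessor used elsewhere in this changeset (BlockArchiveReader.getInstance()); the Triple layout matches fetchBlocksFromRange():

// Hedged sketch: fetch a block from the archive by height, and raw bytes by signature.
BlockArchiveReader reader = BlockArchiveReader.getInstance();

// Fully deserialized block (null if the height isn't in any archive file)
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = reader.fetchBlockAtHeight(height);

// Raw serialized bytes, height-prefixed as required for a BLOCK network message
byte[] messageBytes = reader.fetchSerializedBlockBytesForSignature(signature, true, repository);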

View File

@@ -0,0 +1,130 @@
package org.qortal.repository;
import org.qortal.api.model.BlockSignerSummary;
import org.qortal.data.block.BlockArchiveData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import java.util.List;
public interface BlockArchiveRepository {
/**
* Returns BlockData from archive using block signature.
*
* @param signature
* @return block data, or null if not found in archive.
* @throws DataException
*/
public BlockData fromSignature(byte[] signature) throws DataException;
/**
* Return height of block in archive using block's signature.
*
* @param signature
* @return height, or 0 if not found in the archive.
* @throws DataException
*/
public int getHeightFromSignature(byte[] signature) throws DataException;
/**
* Returns BlockData from archive using block height.
*
* @param height
* @return block data, or null if not found in the archive.
* @throws DataException
*/
public BlockData fromHeight(int height) throws DataException;
/**
* Returns a list of BlockData objects from archive using
* a block height range.
*
* @param startHeight
* @param endHeight
* @return a list of BlockData objects, or an empty list if
* not found in the archive. It is not guaranteed that all
* requested blocks will be returned.
* @throws DataException
*/
public List<BlockData> fromRange(int startHeight, int endHeight) throws DataException;
/**
* Returns BlockData from archive using block reference.
* Currently relies on a child block being the one block
* higher than its parent. This limitation can be removed
* by storing the reference in the BlockArchive table, but
* this has been avoided to reduce space.
*
* @param reference
* @return block data, or null if either parent or child
* not found in the archive.
* @throws DataException
*/
public BlockData fromReference(byte[] reference) throws DataException;
/**
* Return height of block with timestamp just before passed timestamp.
*
* @param timestamp
* @return height, or 0 if not found in blockchain.
* @throws DataException
*/
public int getHeightFromTimestamp(long timestamp) throws DataException;
/**
* Returns block summaries for blocks signed by passed public key, or reward-share with minter with passed public key.
*/
public List<BlockSummaryData> getBlockSummariesBySigner(byte[] signerPublicKey, Integer limit, Integer offset, Boolean reverse) throws DataException;
/**
* Returns summaries of block signers, optionally limited to passed addresses.
* This combines both the BlockArchive and the Blocks data into a single result set.
*/
public List<BlockSignerSummary> getBlockSigners(List<String> addresses, Integer limit, Integer offset, Boolean reverse) throws DataException;
/** Returns height of first unarchived block. */
public int getBlockArchiveHeight() throws DataException;
/** Sets new height for block archiving.
* <p>
* NOTE: performs implicit <tt>repository.saveChanges()</tt>.
*/
public void setBlockArchiveHeight(int archiveHeight) throws DataException;
/**
* Returns the block archive data for a given signature, from the block archive.
* <p>
* This method will return null if the requested signature is not present
* in the block archive index. In those cases, the height (and other data) can be
* looked up using the Blocks table. This allows a block to be located in
* the archive when we only know its signature.
* <p>
*
* @param signature
* @throws DataException
*/
public BlockArchiveData getBlockArchiveDataForSignature(byte[] signature) throws DataException;
/**
* Saves a block archive entry into the repository.
* <p>
* This can be used to find the height of a block by its signature, without
* having access to the block data itself.
* <p>
*
* @param blockArchiveData
* @throws DataException
*/
public void save(BlockArchiveData blockArchiveData) throws DataException;
/**
* Deletes a block archive entry from the repository.
*
* @param blockArchiveData
* @throws DataException
*/
public void delete(BlockArchiveData blockArchiveData) throws DataException;
}
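
As the getBlockArchiveDataForSignature() notes above suggest, callers typically try the archive first and fall back to the full Blocks table. A hedged sketch of that lookup pattern (the repository accessors are the ones used elsewhere in this changeset):

// Sketch: locate a block by signature, preferring the archive over the Blocks table.
BlockData blockData = repository.getBlockArchiveRepository().fromSignature(signature);
if (blockData == null) {
    // Not present in the archive, so try the full block repository instead
    blockData = repository.getBlockRepository().fromSignature(signature);
}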

View File

@@ -0,0 +1,201 @@
package org.qortal.repository;
import com.google.common.primitives.Ints;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.block.Block;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockArchiveData;
import org.qortal.data.block.BlockData;
import org.qortal.settings.Settings;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformer;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
public class BlockArchiveWriter {
public enum BlockArchiveWriteResult {
OK,
STOPPING,
NOT_ENOUGH_BLOCKS,
BLOCK_NOT_FOUND
}
private static final Logger LOGGER = LogManager.getLogger(BlockArchiveWriter.class);
public static final long DEFAULT_FILE_SIZE_TARGET = 100 * 1024 * 1024; // 100MiB
private int startHeight;
private final int endHeight;
private final Repository repository;
private long fileSizeTarget = DEFAULT_FILE_SIZE_TARGET;
private boolean shouldEnforceFileSizeTarget = true;
private int writtenCount;
private int lastWrittenHeight;
private Path outputPath;
public BlockArchiveWriter(int startHeight, int endHeight, Repository repository) {
this.startHeight = startHeight;
this.endHeight = endHeight;
this.repository = repository;
}
public static int getMaxArchiveHeight(Repository repository) throws DataException {
// We must only archive trimmed blocks, or the archive will grow far too large
final int accountSignaturesTrimStartHeight = repository.getBlockRepository().getOnlineAccountsSignaturesTrimHeight();
final int atTrimStartHeight = repository.getATRepository().getAtTrimHeight();
final int trimStartHeight = Math.min(accountSignaturesTrimStartHeight, atTrimStartHeight);
return trimStartHeight - 1; // subtract 1 because these values represent the first _untrimmed_ block
}
public static boolean isArchiverUpToDate(Repository repository) throws DataException {
final int maxArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
final int actualArchiveHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight();
final float progress = (float)actualArchiveHeight / (float) maxArchiveHeight;
LOGGER.debug(String.format("maxArchiveHeight: %d, actualArchiveHeight: %d, progress: %f",
maxArchiveHeight, actualArchiveHeight, progress));
// If archiver is within 95% of the maximum, treat it as up to date
// We need several percent as an allowance because the archiver will only
// save files when they reach the target size
return (progress >= 0.95);
}
public BlockArchiveWriteResult write() throws DataException, IOException, TransformationException, InterruptedException {
// Create the archive folder if it doesn't exist
// This is a subfolder of the db directory, to make bootstrapping easier
Path archivePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive").toAbsolutePath();
try {
Files.createDirectories(archivePath);
} catch (IOException e) {
LOGGER.info("Unable to create archive folder");
throw new DataException("Unable to create archive folder");
}
// Determine start height of blocks to fetch
if (startHeight <= 2) {
// Skip genesis block, as it's not designed to be transmitted, and we can build that from blockchain.json
// TODO: include genesis block if we can
startHeight = 2;
}
// Header bytes will store the block indexes
ByteArrayOutputStream headerBytes = new ByteArrayOutputStream();
// Bytes will store the actual block data
ByteArrayOutputStream bytes = new ByteArrayOutputStream();
LOGGER.info(String.format("Fetching blocks from height %d...", startHeight));
int i = 0;
while (headerBytes.size() + bytes.size() < this.fileSizeTarget
|| !this.shouldEnforceFileSizeTarget) {
if (Controller.isStopping()) {
return BlockArchiveWriteResult.STOPPING;
}
if (Controller.getInstance().isSynchronizing()) {
continue;
}
int currentHeight = startHeight + i;
if (currentHeight > endHeight) {
break;
}
//LOGGER.info("Fetching block {}...", currentHeight);
BlockData blockData = repository.getBlockRepository().fromHeight(currentHeight);
if (blockData == null) {
return BlockArchiveWriteResult.BLOCK_NOT_FOUND;
}
// Write the signature and height into the BlockArchive table
BlockArchiveData blockArchiveData = new BlockArchiveData(blockData);
repository.getBlockArchiveRepository().save(blockArchiveData);
repository.saveChanges();
// Write the block data to some byte buffers
Block block = new Block(repository, blockData);
int blockIndex = bytes.size();
// Write block index to header
headerBytes.write(Ints.toByteArray(blockIndex));
// Write block height
bytes.write(Ints.toByteArray(block.getBlockData().getHeight()));
byte[] blockBytes = BlockTransformer.toBytes(block);
// Write block length
bytes.write(Ints.toByteArray(blockBytes.length));
// Write block bytes
bytes.write(blockBytes);
i++;
}
int totalLength = headerBytes.size() + bytes.size();
LOGGER.info(String.format("Total length of %d blocks is %d bytes", i, totalLength));
// Validate file size, in case something went wrong
if (totalLength < fileSizeTarget && this.shouldEnforceFileSizeTarget) {
return BlockArchiveWriteResult.NOT_ENOUGH_BLOCKS;
}
// We have enough blocks to create a new file
int endHeight = startHeight + i - 1; // Note: this local deliberately shadows the endHeight field with the last height actually written
int version = 1;
String filePath = String.format("%s/%d-%d.dat", archivePath.toString(), startHeight, endHeight);
FileOutputStream fileOutputStream = new FileOutputStream(filePath);
// Write version number
fileOutputStream.write(Ints.toByteArray(version));
// Write start height
fileOutputStream.write(Ints.toByteArray(startHeight));
// Write end height
fileOutputStream.write(Ints.toByteArray(endHeight));
// Write total count
fileOutputStream.write(Ints.toByteArray(i));
// Write dynamic header (block indexes) segment length
fileOutputStream.write(Ints.toByteArray(headerBytes.size()));
// Write dynamic header (block indexes) data
headerBytes.writeTo(fileOutputStream);
// Write data segment (block data) length
fileOutputStream.write(Ints.toByteArray(bytes.size()));
// Write data
bytes.writeTo(fileOutputStream);
// Close the file
fileOutputStream.close();
// Invalidate cache so that the rest of the app picks up the new file
BlockArchiveReader.getInstance().invalidateFileListCache();
this.writtenCount = i;
this.lastWrittenHeight = endHeight;
this.outputPath = Paths.get(filePath);
return BlockArchiveWriteResult.OK;
}
public int getWrittenCount() {
return this.writtenCount;
}
public int getLastWrittenHeight() {
return this.lastWrittenHeight;
}
public Path getOutputPath() {
return this.outputPath;
}
public void setFileSizeTarget(long fileSizeTarget) {
this.fileSizeTarget = fileSizeTarget;
}
// For testing, to avoid having to pre-calculate file sizes
public void setShouldEnforceFileSizeTarget(boolean shouldEnforceFileSizeTarget) {
this.shouldEnforceFileSizeTarget = shouldEnforceFileSizeTarget;
}
}
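
Taken together with BlockArchiveReader, the version 1 archive file layout written above is: a fixed header of five big-endian ints (version, start height, end height, block count, variable header length), a variable header holding one int offset per block, an int giving the data segment length, then per-block records of height, length and serialized bytes. A standalone hedged sketch that dumps the fixed header (the file name is illustrative):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Sketch: print the fixed-length header of a v1 block archive file.
// DataInputStream.readInt() is big-endian, matching Ints.toByteArray() used by the writer.
public class ArchiveHeaderDump {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream("2-1000.dat"))) {
            System.out.println("version: " + in.readInt());
            System.out.println("startHeight: " + in.readInt());
            System.out.println("endHeight: " + in.readInt());
            System.out.println("blockCount: " + in.readInt());
            System.out.println("variableHeaderLength: " + in.readInt());
        }
    }
}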

View File

@@ -137,11 +137,6 @@ public interface BlockRepository {
*/
public List<BlockSummaryData> getBlockSummaries(int firstBlockHeight, int lastBlockHeight) throws DataException;
/**
* Returns block summaries for the passed height range, for API use.
*/
public List<BlockSummaryData> getBlockSummaries(Integer startHeight, Integer endHeight, Integer count) throws DataException;
/** Returns height of first trimmable online accounts signatures. */
public int getOnlineAccountsSignaturesTrimHeight() throws DataException;
@@ -166,6 +161,20 @@ public interface BlockRepository {
*/
public BlockData getDetachedBlockSignature(int startHeight) throws DataException;
/** Returns height of first prunable block. */
public int getBlockPruneHeight() throws DataException;
/** Sets new base height for block pruning.
* <p>
* NOTE: performs implicit <tt>repository.saveChanges()</tt>.
*/
public void setBlockPruneHeight(int pruneHeight) throws DataException;
/** Prunes full block data between passed heights. Returns number of pruned rows. */
public int pruneBlocks(int minHeight, int maxHeight) throws DataException;
/**
* Saves block into repository.
*

View File

@@ -0,0 +1,509 @@
package org.qortal.repository;
import org.apache.commons.io.FileUtils;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.block.BlockChain;
import org.qortal.controller.Controller;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.MintingAccountData;
import org.qortal.data.block.BlockData;
import org.qortal.data.crosschain.TradeBotData;
import org.qortal.gui.SplashFrame;
import org.qortal.repository.hsqldb.HSQLDBImportExport;
import org.qortal.repository.hsqldb.HSQLDBRepositoryFactory;
import org.qortal.settings.Settings;
import org.qortal.utils.NTP;
import org.qortal.utils.SevenZ;
import java.io.BufferedInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import java.nio.file.*;
import java.security.SecureRandom;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.locks.ReentrantLock;
import static java.nio.file.StandardCopyOption.REPLACE_EXISTING;
public class Bootstrap {
private Repository repository;
private int retryMinutes = 1;
private static final Logger LOGGER = LogManager.getLogger(Bootstrap.class);
/** The maximum number of untrimmed blocks allowed to be included in a bootstrap, beyond the trim threshold */
private static final int MAXIMUM_UNTRIMMED_BLOCKS = 100;
/** The maximum number of unpruned blocks allowed to be included in a bootstrap, beyond the prune threshold */
private static final int MAXIMUM_UNPRUNED_BLOCKS = 100;
public Bootstrap() {
}
public Bootstrap(Repository repository) {
this.repository = repository;
}
/**
* checkRepositoryState()
* Performs basic initial checks to ensure everything is in order
* @return true if ready for bootstrap creation; throws an exception if not
* All failure reasons are logged and included in the exception
* @throws DataException
*/
public boolean checkRepositoryState() throws DataException {
LOGGER.info("Checking repository state...");
final boolean isTopOnly = Settings.getInstance().isTopOnly();
final boolean archiveEnabled = Settings.getInstance().isArchiveEnabled();
// Make sure we have a repository instance
if (repository == null) {
throw new DataException("Repository instance required to check if we can create a bootstrap.");
}
// Require that a block archive has been built
if (!isTopOnly && !archiveEnabled) {
throw new DataException("Unable to create bootstrap because the block archive isn't enabled. " +
"Set {\"archiveEnabled\": true} in settings.json to fix.");
}
// Make sure that the block archiver is up to date
boolean upToDate = BlockArchiveWriter.isArchiverUpToDate(repository);
if (!upToDate) {
throw new DataException("Unable to create bootstrap because the block archive isn't fully built yet.");
}
// Ensure that this database contains the ATStatesHeightIndex which was missing in some cases
boolean hasAtStatesHeightIndex = repository.getATRepository().hasAtStatesHeightIndex();
if (!hasAtStatesHeightIndex) {
throw new DataException("Unable to create bootstrap due to missing ATStatesHeightIndex. A re-sync from genesis is needed.");
}
// Ensure we have synced NTP time
if (NTP.getTime() == null) {
throw new DataException("Unable to create bootstrap because the node hasn't synced its time yet.");
}
// Ensure the chain is synced
final BlockData chainTip = Controller.getInstance().getChainTip();
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null || chainTip.getTimestamp() < minLatestBlockTimestamp) {
throw new DataException("Unable to create bootstrap because the blockchain isn't fully synced.");
}
// FUTURE: ensure trim and prune settings are using default values
if (!isTopOnly) {
// We don't trim in top-only mode because we prune the blocks instead
// If we're not in top-only mode we should make sure that trimming is up to date
// Ensure that the online account signatures have been fully trimmed
final int accountsTrimStartHeight = repository.getBlockRepository().getOnlineAccountsSignaturesTrimHeight();
final long accountsUpperTrimmableTimestamp = NTP.getTime() - BlockChain.getInstance().getOnlineAccountSignaturesMaxLifetime();
final int accountsUpperTrimmableHeight = repository.getBlockRepository().getHeightFromTimestamp(accountsUpperTrimmableTimestamp);
final int accountsBlocksRemaining = accountsUpperTrimmableHeight - accountsTrimStartHeight;
if (accountsBlocksRemaining > MAXIMUM_UNTRIMMED_BLOCKS) {
throw new DataException(String.format("Blockchain is not fully trimmed. Please allow the node to run for longer, " +
"then try again. Blocks remaining (online accounts signatures): %d", accountsBlocksRemaining));
}
// Ensure that the AT states data has been fully trimmed
final int atTrimStartHeight = repository.getATRepository().getAtTrimHeight();
final long atUpperTrimmableTimestamp = chainTip.getTimestamp() - Settings.getInstance().getAtStatesMaxLifetime();
final int atUpperTrimmableHeight = repository.getBlockRepository().getHeightFromTimestamp(atUpperTrimmableTimestamp);
final int atBlocksRemaining = atUpperTrimmableHeight - atTrimStartHeight;
if (atBlocksRemaining > MAXIMUM_UNTRIMMED_BLOCKS) {
throw new DataException(String.format("Blockchain is not fully trimmed. Please allow the node to run " +
"for longer, then try again. Blocks remaining (AT states): %d", atBlocksRemaining));
}
}
// Ensure that blocks have been fully pruned
final int blockPruneStartHeight = repository.getBlockRepository().getBlockPruneHeight();
int blockUpperPrunableHeight = chainTip.getHeight() - Settings.getInstance().getPruneBlockLimit();
if (archiveEnabled) {
blockUpperPrunableHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight() - 1;
}
final int blocksPruneRemaining = blockUpperPrunableHeight - blockPruneStartHeight;
if (blocksPruneRemaining > MAXIMUM_UNPRUNED_BLOCKS) {
throw new DataException(String.format("Blockchain is not fully pruned. Please allow the node to run " +
"for longer, then try again. Blocks remaining: %d", blocksPruneRemaining));
}
// Ensure that AT states have been fully pruned
final int atPruneStartHeight = repository.getATRepository().getAtPruneHeight();
int atUpperPrunableHeight = chainTip.getHeight() - Settings.getInstance().getPruneBlockLimit();
if (archiveEnabled) {
atUpperPrunableHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight() - 1;
}
final int atPruneRemaining = atUpperPrunableHeight - atPruneStartHeight;
if (atPruneRemaining > MAXIMUM_UNPRUNED_BLOCKS) {
throw new DataException(String.format("Blockchain is not fully pruned. Please allow the node to run " +
"for longer, then try again. Blocks remaining (AT states): %d", atPruneRemaining));
}
LOGGER.info("Repository state checks passed");
return true;
}
/**
* validateBlockchain
* Performs quick validation of recent blocks in blockchain, prior to creating a bootstrap
* @return true if valid; throws an exception if not
* @throws DataException
*/
public boolean validateBlockchain() throws DataException {
LOGGER.info("Validating blockchain...");
try {
BlockChain.validate();
LOGGER.info("Blockchain is valid");
return true;
} catch (DataException e) {
throw new DataException(String.format("Blockchain validation failed: %s", e.getMessage()));
}
}
/**
* validateCompleteBlockchain
* Performs intensive validation of all blocks in blockchain
* @return true if valid, false if not
*/
public boolean validateCompleteBlockchain() {
LOGGER.info("Validating blockchain...");
try {
// Perform basic startup validation
BlockChain.validate();
// Perform more intensive full-chain validation
BlockChain.validateAllBlocks();
LOGGER.info("Blockchain is valid");
return true;
} catch (DataException e) {
LOGGER.info("Blockchain validation failed: {}", e.getMessage());
return false;
}
}
public String create() throws DataException, InterruptedException, IOException {
// Make sure we have a repository instance
if (repository == null) {
throw new DataException("Repository instance required in order to create a bootstrap");
}
LOGGER.info("Deleting temp directory if it exists...");
this.deleteAllTempDirectories();
LOGGER.info("Acquiring blockchain lock...");
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
blockchainLock.lockInterruptibly();
Path inputPath = null;
Path outputPath = null;
try {
LOGGER.info("Exporting local data...");
repository.exportNodeLocalData();
LOGGER.info("Deleting trade bot states...");
List<TradeBotData> allTradeBotData = repository.getCrossChainRepository().getAllTradeBotData();
for (TradeBotData tradeBotData : allTradeBotData) {
repository.getCrossChainRepository().delete(tradeBotData.getTradePrivateKey());
}
LOGGER.info("Deleting minting accounts...");
List<MintingAccountData> mintingAccounts = repository.getAccountRepository().getMintingAccounts();
for (MintingAccountData mintingAccount : mintingAccounts) {
repository.getAccountRepository().delete(mintingAccount.getPrivateKey());
}
repository.saveChanges();
LOGGER.info("Deleting peers list...");
repository.getNetworkRepository().deleteAllPeers();
repository.saveChanges();
LOGGER.info("Creating bootstrap...");
// Timeout if the database isn't ready for backing up after 10 seconds
long timeout = 10 * 1000L;
repository.backup(false, "bootstrap", timeout);
LOGGER.info("Moving files to output directory...");
inputPath = Paths.get(Settings.getInstance().getRepositoryPath(), "bootstrap");
outputPath = Paths.get(this.createTempDirectory().toString(), "bootstrap");
// Move the db backup to a "bootstrap" folder in the root directory
Files.move(inputPath, outputPath, REPLACE_EXISTING);
// If in archive mode, copy the archive folder to inside the bootstrap folder
if (!Settings.getInstance().isTopOnly() && Settings.getInstance().isArchiveEnabled()) {
FileUtils.copyDirectory(
Paths.get(Settings.getInstance().getRepositoryPath(), "archive").toFile(),
Paths.get(outputPath.toString(), "archive").toFile()
);
}
LOGGER.info("Preparing output path...");
Path compressedOutputPath = this.getBootstrapOutputPath();
try {
Files.delete(compressedOutputPath);
} catch (NoSuchFileException e) {
// Doesn't exist, so no need to delete
}
LOGGER.info("Compressing...");
SevenZ.compress(compressedOutputPath.toString(), outputPath.toFile());
LOGGER.info("Generating checksum file...");
String checksum = Crypto.digestHexString(compressedOutputPath.toFile(), 1024*1024);
Path checksumPath = Paths.get(String.format("%s.sha256", compressedOutputPath.toString()));
Files.writeString(checksumPath, checksum, StandardOpenOption.CREATE);
// Return the path to the compressed bootstrap file
LOGGER.info("Bootstrap creation complete. Output file: {}", compressedOutputPath.toAbsolutePath().toString());
return compressedOutputPath.toAbsolutePath().toString();
}
catch (TimeoutException e) {
throw new DataException(String.format("Unable to create bootstrap due to timeout: %s", e.getMessage()));
}
finally {
try {
LOGGER.info("Re-importing local data...");
Path exportPath = HSQLDBImportExport.getExportDirectory(false);
repository.importDataFromFile(Paths.get(exportPath.toString(), "TradeBotStates.json").toString());
repository.importDataFromFile(Paths.get(exportPath.toString(), "MintingAccounts.json").toString());
repository.saveChanges();
} catch (IOException e) {
LOGGER.info("Unable to re-import local data, but created bootstrap is still valid. {}", e);
}
LOGGER.info("Unlocking blockchain...");
blockchainLock.unlock();
// Cleanup
LOGGER.info("Cleaning up...");
Thread.sleep(5000L);
this.deleteAllTempDirectories();
}
}
public void startImport() throws InterruptedException {
while (!Controller.isStopping()) {
try (final Repository repository = RepositoryManager.getRepository()) {
this.repository = repository;
this.updateStatus("Starting import of bootstrap...");
this.doImport();
break;
} catch (DataException e) {
LOGGER.info("Bootstrap import failed", e);
this.updateStatus(String.format("Bootstrapping failed. Retrying in %d minutes...", retryMinutes));
Thread.sleep(retryMinutes * 60 * 1000L);
retryMinutes *= 2;
}
}
}
private void doImport() throws DataException {
Path path = null;
try {
Path tempDir = this.createTempDirectory();
String filename = String.format("%s%s", Settings.getInstance().getBootstrapFilenamePrefix(), this.getFilename());
path = Paths.get(tempDir.toString(), filename);
this.downloadToPath(path);
this.importFromPath(path);
} catch (InterruptedException | DataException | IOException e) {
throw new DataException("Unable to import bootstrap", e);
}
finally {
if (path != null) {
try {
Files.delete(path);
} catch (IOException e) {
// Temp folder will be cleaned up below, so ignore this failure
}
}
this.deleteAllTempDirectories();
}
}
private String getFilename() {
boolean isTopOnly = Settings.getInstance().isTopOnly();
boolean archiveEnabled = Settings.getInstance().isArchiveEnabled();
boolean isTestnet = Settings.getInstance().isTestNet();
String prefix = isTestnet ? "testnet-" : "";
if (isTopOnly) {
return prefix.concat("bootstrap-toponly.7z");
}
else if (archiveEnabled) {
return prefix.concat("bootstrap-archive.7z");
}
else {
return prefix.concat("bootstrap-full.7z");
}
}
private void downloadToPath(Path path) throws DataException {
String bootstrapHost = this.getRandomHost();
String bootstrapFilename = this.getFilename();
String bootstrapUrl = String.format("%s/%s", bootstrapHost, bootstrapFilename);
String type = Settings.getInstance().isTopOnly() ? "top-only" : "full node";
SplashFrame.getInstance().updateStatus(String.format("Downloading %s bootstrap...", type));
LOGGER.info(String.format("Downloading %s bootstrap from %s ...", type, bootstrapUrl));
// Delete an existing file if it exists
try {
Files.delete(path);
} catch (IOException e) {
// No need to do anything
}
// Get the total file size
URL url;
long fileSize;
try {
url = new URL(bootstrapUrl);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("HEAD");
connection.connect();
fileSize = connection.getContentLengthLong();
connection.disconnect();
} catch (MalformedURLException e) {
throw new DataException(String.format("Malformed URL when downloading bootstrap: %s", e.getMessage()));
} catch (IOException e) {
throw new DataException(String.format("Unable to get bootstrap file size: %s. " +
"Please check your internet connection.", e.getMessage()));
}
// Download the file and update the status with progress
try (BufferedInputStream in = new BufferedInputStream(url.openStream());
FileOutputStream fileOutputStream = new FileOutputStream(path.toFile())) {
byte[] buffer = new byte[1024 * 1024];
long downloaded = 0;
int bytesRead;
while ((bytesRead = in.read(buffer)) != -1) {
fileOutputStream.write(buffer, 0, bytesRead);
downloaded += bytesRead;
if (fileSize > 0) {
int progress = (int)((double)downloaded / (double)fileSize * 100);
SplashFrame.getInstance().updateStatus(String.format("Downloading %s bootstrap... (%d%%)", type, progress));
}
}
} catch (IOException e) {
throw new DataException(String.format("Unable to download bootstrap: %s", e.getMessage()));
}
}
public String getRandomHost() {
// Select a random host from bootstrapHosts
String[] hosts = Settings.getInstance().getBootstrapHosts();
int index = new SecureRandom().nextInt(hosts.length);
String bootstrapHost = hosts[index];
return bootstrapHost;
}
public void importFromPath(Path path) throws InterruptedException, DataException, IOException {
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
blockchainLock.lockInterruptibly();
try {
this.updateStatus("Stopping repository...");
// Close the repository while we are still able to
// Otherwise, the caller will run into difficulties when it tries to close it
repository.discardChanges();
repository.close();
// Now close the repository factory so that we can swap out the database files
RepositoryManager.closeRepositoryFactory();
this.updateStatus("Deleting existing repository...");
Path input = path.toAbsolutePath();
Path output = path.toAbsolutePath().getParent().toAbsolutePath();
Path inputPath = Paths.get(output.toString(), "bootstrap");
Path outputPath = Paths.get(Settings.getInstance().getRepositoryPath());
FileUtils.deleteDirectory(outputPath.toFile());
this.updateStatus("Extracting bootstrap...");
SevenZ.decompress(input.toString(), output.toFile());
if (!inputPath.toFile().exists()) {
throw new DataException("Extracted bootstrap doesn't exist");
}
// Move the "bootstrap" folder in place of the "db" folder
this.updateStatus("Moving files to output directory...");
Files.move(inputPath, outputPath);
this.updateStatus("Starting repository from bootstrap...");
}
finally {
RepositoryFactory repositoryFactory = new HSQLDBRepositoryFactory(Controller.getRepositoryUrl());
RepositoryManager.setRepositoryFactory(repositoryFactory);
blockchainLock.unlock();
}
}
private Path createTempDirectory() throws IOException {
Path initialPath = Paths.get(Settings.getInstance().getRepositoryPath()).toAbsolutePath().getParent();
String baseDir = Paths.get(initialPath.toString(), "tmp").toFile().getCanonicalPath();
String identifier = UUID.randomUUID().toString();
Path tempDir = Paths.get(baseDir, identifier);
Files.createDirectories(tempDir);
return tempDir;
}
private void deleteAllTempDirectories() {
Path initialPath = Paths.get(Settings.getInstance().getRepositoryPath()).toAbsolutePath().getParent();
Path path = Paths.get(initialPath.toString(), "tmp");
try {
FileUtils.deleteDirectory(path.toFile());
} catch (IOException e) {
LOGGER.info("Unable to delete temp directory path: {}", path.toString(), e);
}
}
public Path getBootstrapOutputPath() {
Path initialPath = Paths.get(Settings.getInstance().getRepositoryPath()).toAbsolutePath().getParent();
String compressedFilename = String.format("%s%s", Settings.getInstance().getBootstrapFilenamePrefix(), this.getFilename());
Path compressedOutputPath = Paths.get(initialPath.toString(), compressedFilename);
return compressedOutputPath;
}
private void updateStatus(String text) {
LOGGER.info(text);
SplashFrame.getInstance().updateStatus(text);
}
}
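
Since create() writes a .sha256 companion next to the compressed bootstrap, a downloader can verify the file before importing it. A hedged, JDK-only sketch of that check (the chunked digest mirrors the 1MiB buffering passed to Crypto.digestHexString above; the method and file names are illustrative):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

// Sketch: verify a downloaded bootstrap against its .sha256 companion file.
static boolean verifyBootstrap(Path bootstrap, Path checksumFile) throws Exception {
    MessageDigest digest = MessageDigest.getInstance("SHA-256");
    try (InputStream in = Files.newInputStream(bootstrap)) {
        byte[] buffer = new byte[1024 * 1024]; // 1MiB chunks, to keep memory use low
        int read;
        while ((read = in.read(buffer)) != -1) {
            digest.update(buffer, 0, read);
        }
    }
    StringBuilder hex = new StringBuilder();
    for (byte b : digest.digest()) {
        hex.append(String.format("%02x", b));
    }
    String expected = Files.readString(checksumFile).trim();
    return hex.toString().equalsIgnoreCase(expected);
}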

View File

@@ -1,5 +1,8 @@
package org.qortal.repository;
import java.io.IOException;
import java.util.concurrent.TimeoutException;
public interface Repository extends AutoCloseable {
public ATRepository getATRepository();
@@ -12,6 +15,8 @@ public interface Repository extends AutoCloseable {
public BlockRepository getBlockRepository();
public BlockArchiveRepository getBlockArchiveRepository();
public ChatRepository getChatRepository();
public CrossChainRepository getCrossChainRepository();
@@ -45,14 +50,16 @@ public interface Repository extends AutoCloseable {
public void setDebug(boolean debugState);
public void backup(boolean quick, String name, Long timeout) throws DataException, TimeoutException;
public void performPeriodicMaintenance(Long timeout) throws DataException, TimeoutException;
public void exportNodeLocalData() throws DataException;
public void importDataFromFile(String filename) throws DataException, IOException;
public void checkConsistency() throws DataException;
public static void attemptRecovery(String connectionUrl, String name) throws DataException {}
}

View File

@@ -1,8 +1,18 @@
package org.qortal.repository;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.gui.SplashFrame;
import org.qortal.repository.hsqldb.HSQLDBDatabaseArchiving;
import org.qortal.repository.hsqldb.HSQLDBDatabasePruning;
import org.qortal.repository.hsqldb.HSQLDBRepository;
import org.qortal.settings.Settings;
import java.sql.SQLException;
import java.util.concurrent.TimeoutException;
public abstract class RepositoryManager {
private static final Logger LOGGER = LogManager.getLogger(RepositoryManager.class);
private static RepositoryFactory repositoryFactory = null;
@@ -43,14 +53,60 @@ public abstract class RepositoryManager {
repositoryFactory = null;
}
public static void backup(boolean quick, String name, Long timeout) throws TimeoutException {
try (final Repository repository = getRepository()) {
repository.backup(quick, name, timeout);
} catch (DataException e) {
// Backup is best-effort so don't complain
}
}
public static boolean archive(Repository repository) {
// Bulk archive the database the first time we use archive mode
if (Settings.getInstance().isArchiveEnabled()) {
if (RepositoryManager.canArchiveOrPrune()) {
try {
return HSQLDBDatabaseArchiving.buildBlockArchive(repository, BlockArchiveWriter.DEFAULT_FILE_SIZE_TARGET);
} catch (DataException e) {
LOGGER.info("Unable to build block archive. The database may have been left in an inconsistent state.");
}
}
else {
LOGGER.info("Unable to build block archive due to missing ATStatesHeightIndex. Bootstrapping is recommended.");
LOGGER.info("To bootstrap, stop the core and delete the db folder, then start the core again.");
SplashFrame.getInstance().updateStatus("Missing index. Bootstrapping is recommended.");
}
}
return false;
}
public static boolean prune(Repository repository) {
// Bulk prune the database the first time we use top-only or block archive mode
if (Settings.getInstance().isTopOnly() ||
Settings.getInstance().isArchiveEnabled()) {
if (RepositoryManager.canArchiveOrPrune()) {
try {
boolean prunedATStates = HSQLDBDatabasePruning.pruneATStates((HSQLDBRepository) repository);
boolean prunedBlocks = HSQLDBDatabasePruning.pruneBlocks((HSQLDBRepository) repository);
// Perform repository maintenance to shrink the db size down
if (prunedATStates && prunedBlocks) {
HSQLDBDatabasePruning.performMaintenance(repository);
return true;
}
} catch (SQLException | DataException e) {
LOGGER.info("Unable to bulk prune AT states. The database may have been left in an inconsistent state.");
}
}
else {
LOGGER.info("Unable to prune blocks due to missing ATStatesHeightIndex. Bootstrapping is recommended.");
}
}
return false;
}
public static void setRequestedCheckpoint(Boolean quick) {
quickCheckpointRequested = quick;
}
@@ -77,4 +133,12 @@ public abstract class RepositoryManager {
return SQLException.class.isInstance(cause) && repositoryFactory.isDeadlockException((SQLException) cause);
}
public static boolean canArchiveOrPrune() {
try (final Repository repository = getRepository()) {
return repository.getATRepository().hasAtStatesHeightIndex();
} catch (DataException e) {
return false;
}
}
}
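
A hedged sketch of how a caller might drive the one-off bulk archive and prune passes above at startup (the ordering is an assumption; archiving must complete before pruning can rely on the archive height):

// Sketch: bulk archive, then prune, as might run once at startup.
try (final Repository repository = RepositoryManager.getRepository()) {
    if (RepositoryManager.archive(repository)) {
        // Archive built; full blocks below the archive height are now safe to prune
    }
    RepositoryManager.prune(repository);
} catch (DataException e) {
    // Both methods log their own failures, so this is best-effort
}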

View File

@@ -8,6 +8,7 @@ import java.util.Set;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.at.ATData;
import org.qortal.data.at.ATStateData;
import org.qortal.repository.ATRepository;
@@ -600,6 +601,44 @@ public class HSQLDBATRepository implements ATRepository {
return atStates;
}
@Override
public void rebuildLatestAtStates() throws DataException {
// latestATStatesLock is to prevent concurrent updates on LatestATStates
// that could result in one process using a partial or empty dataset
// because it was in the process of being rebuilt by another thread
synchronized (this.repository.latestATStatesLock) {
LOGGER.trace("Rebuilding latest AT states...");
// Rebuild cache of latest AT states that we can't trim
String deleteSql = "DELETE FROM LatestATStates";
try {
this.repository.executeCheckedUpdate(deleteSql);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to delete temporary latest AT states cache from repository", e);
}
String insertSql = "INSERT INTO LatestATStates ("
+ "SELECT AT_address, height FROM ATs "
+ "CROSS JOIN LATERAL("
+ "SELECT height FROM ATStates "
+ "WHERE ATStates.AT_address = ATs.AT_address "
+ "ORDER BY AT_address DESC, height DESC LIMIT 1"
+ ") "
+ ")";
try {
this.repository.executeCheckedUpdate(insertSql);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to populate temporary latest AT states cache in repository", e);
}
this.repository.saveChanges();
LOGGER.trace("Rebuilt latest AT states");
}
}
@Override
public int getAtTrimHeight() throws DataException {
String sql = "SELECT AT_trim_height FROM DatabaseInfo";
@@ -625,63 +664,153 @@ public class HSQLDBATRepository implements ATRepository {
this.repository.executeCheckedUpdate(updateSql, trimHeight);
this.repository.saveChanges();
} catch (SQLException e) {
this.repository.examineException(e);
throw new DataException("Unable to set AT state trim height in repository", e);
}
}
}
@Override
public void prepareForAtStateTrimming() throws DataException {
// Rebuild cache of latest AT states that we can't trim
String deleteSql = "DELETE FROM LatestATStates";
try {
this.repository.executeCheckedUpdate(deleteSql);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to delete temporary latest AT states cache from repository", e);
}
String insertSql = "INSERT INTO LatestATStates ("
+ "SELECT AT_address, height FROM ATs "
+ "CROSS JOIN LATERAL("
+ "SELECT height FROM ATStates "
+ "WHERE ATStates.AT_address = ATs.AT_address "
+ "ORDER BY AT_address DESC, height DESC LIMIT 1"
+ ") "
+ ")";
try {
this.repository.executeCheckedUpdate(insertSql);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to populate temporary latest AT states cache in repository", e);
}
}
@Override
public int trimAtStates(int minHeight, int maxHeight, int limit) throws DataException {
if (minHeight >= maxHeight)
return 0;
// latestATStatesLock is to prevent concurrent updates on LatestATStates
// that could result in one process using a partial or empty dataset
// because it was in the process of being rebuilt by another thread
synchronized (this.repository.latestATStatesLock) {
// We're often called so no need to trim all states in one go.
// Limit updates to reduce CPU and memory load.
String sql = "DELETE FROM ATStatesData "
+ "WHERE height BETWEEN ? AND ? "
+ "AND NOT EXISTS("
+ "SELECT TRUE FROM LatestATStates "
+ "WHERE LatestATStates.AT_address = ATStatesData.AT_address "
+ "AND LatestATStates.height = ATStatesData.height"
+ ") "
+ "LIMIT ?";
try {
int modifiedRows = this.repository.executeCheckedUpdate(sql, minHeight, maxHeight, limit);
this.repository.saveChanges();
return modifiedRows;
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to trim AT states in repository", e);
}
}
}
@Override
public int getAtPruneHeight() throws DataException {
String sql = "SELECT AT_prune_height FROM DatabaseInfo";
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
if (resultSet == null)
return 0;
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Unable to fetch AT state prune height from repository", e);
}
}
@Override
public void setAtPruneHeight(int pruneHeight) throws DataException {
// trimHeightsLock is to prevent concurrent update on DatabaseInfo
// that could result in "transaction rollback: serialization failure"
synchronized (this.repository.trimHeightsLock) {
String updateSql = "UPDATE DatabaseInfo SET AT_prune_height = ?";
try {
this.repository.executeCheckedUpdate(updateSql, pruneHeight);
this.repository.saveChanges();
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to set AT state prune height in repository", e);
}
}
}
@Override
public int pruneAtStates(int minHeight, int maxHeight) throws DataException {
// latestATStatesLock is to prevent concurrent updates on LatestATStates
// that could result in one process using a partial or empty dataset
// because it was in the process of being rebuilt by another thread
synchronized (this.repository.latestATStatesLock) {
int deletedCount = 0;
for (int height = minHeight; height <= maxHeight; height++) {
// Give up if we're stopping
if (Controller.isStopping()) {
return deletedCount;
}
// Get latest AT states for this height
List<String> atAddresses = new ArrayList<>();
String selectSql = "SELECT AT_address FROM LatestATStates WHERE height = ?";
try (ResultSet resultSet = this.repository.checkedExecute(selectSql, height)) {
if (resultSet != null) {
do {
String atAddress = resultSet.getString(1);
atAddresses.add(atAddress);
} while (resultSet.next());
}
} catch (SQLException e) {
throw new DataException("Unable to fetch latest AT states from repository", e);
}
List<ATStateData> atStates = this.getBlockATStatesAtHeight(height);
for (ATStateData atState : atStates) {
//LOGGER.info("Found atState {} at height {}", atState.getATAddress(), atState.getHeight());
// Give up if we're stopping
if (Controller.isStopping()) {
return deletedCount;
}
if (atAddresses.contains(atState.getATAddress())) {
// We don't want to delete this AT state because it is still active
LOGGER.trace("Skipping atState {} at height {}", atState.getATAddress(), atState.getHeight());
continue;
}
// Safe to delete everything else for this height
try {
this.repository.delete("ATStates", "AT_address = ? AND height = ?",
atState.getATAddress(), atState.getHeight());
deletedCount++;
} catch (SQLException e) {
throw new DataException("Unable to delete AT state data from repository", e);
}
}
}
this.repository.saveChanges();
return deletedCount;
}
}
@Override
public boolean hasAtStatesHeightIndex() throws DataException {
String sql = "SELECT INDEX_NAME FROM INFORMATION_SCHEMA.SYSTEM_INDEXINFO where INDEX_NAME='ATSTATESHEIGHTINDEX'";
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
return resultSet != null;
} catch (SQLException e) {
throw new DataException("Unable to check for ATStatesHeightIndex in repository", e);
}
}
@Override
public void save(ATStateData atStateData) throws DataException {
// We shouldn't ever save partial ATStateData

View File

@@ -904,6 +904,25 @@ public class HSQLDBAccountRepository implements AccountRepository {
}
}
@Override
public MintingAccountData getMintingAccount(byte[] mintingAccountKey) throws DataException {
try (ResultSet resultSet = this.repository.checkedExecute("SELECT minter_private_key, minter_public_key " +
"FROM MintingAccounts WHERE minter_private_key = ? OR minter_public_key = ?",
mintingAccountKey, mintingAccountKey)) {
if (resultSet == null)
return null;
byte[] minterPrivateKey = resultSet.getBytes(1);
byte[] minterPublicKey = resultSet.getBytes(2);
return new MintingAccountData(minterPrivateKey, minterPublicKey);
} catch (SQLException e) {
throw new DataException("Unable to fetch minting accounts from repository", e);
}
}
@Override
public void save(MintingAccountData mintingAccountData) throws DataException {
HSQLDBSaver saveHelper = new HSQLDBSaver("MintingAccounts");

View File

@@ -0,0 +1,296 @@
package org.qortal.repository.hsqldb;
import org.qortal.api.ApiError;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.model.BlockSignerSummary;
import org.qortal.block.Block;
import org.qortal.data.block.BlockArchiveData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.repository.BlockArchiveReader;
import org.qortal.repository.BlockArchiveRepository;
import org.qortal.repository.DataException;
import org.qortal.utils.Triple;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
public class HSQLDBBlockArchiveRepository implements BlockArchiveRepository {
protected HSQLDBRepository repository;
public HSQLDBBlockArchiveRepository(HSQLDBRepository repository) {
this.repository = repository;
}
@Override
public BlockData fromSignature(byte[] signature) throws DataException {
Triple blockInfo = BlockArchiveReader.getInstance().fetchBlockWithSignature(signature, this.repository);
if (blockInfo != null) {
return (BlockData) blockInfo.getA();
}
return null;
}
@Override
public int getHeightFromSignature(byte[] signature) throws DataException {
Integer height = BlockArchiveReader.getInstance().fetchHeightForSignature(signature, this.repository);
if (height == null || height == 0) {
return 0;
}
return height;
}
@Override
public BlockData fromHeight(int height) throws DataException {
Triple blockInfo = BlockArchiveReader.getInstance().fetchBlockAtHeight(height);
if (blockInfo != null) {
return (BlockData) blockInfo.getA();
}
return null;
}
@Override
public List<BlockData> fromRange(int startHeight, int endHeight) throws DataException {
List<BlockData> blocks = new ArrayList<>();
for (int height = startHeight; height < endHeight; height++) {
BlockData blockData = this.fromHeight(height);
if (blockData == null) {
return blocks;
}
blocks.add(blockData);
}
return blocks;
}
@Override
public BlockData fromReference(byte[] reference) throws DataException {
BlockData referenceBlock = this.repository.getBlockArchiveRepository().fromSignature(reference);
if (referenceBlock == null) {
// Try the main block repository. Needed for genesis block.
referenceBlock = this.repository.getBlockRepository().fromSignature(reference);
}
if (referenceBlock != null) {
int height = referenceBlock.getHeight();
if (height > 0) {
// Request the block at height + 1
Triple blockInfo = BlockArchiveReader.getInstance().fetchBlockAtHeight(height + 1);
if (blockInfo != null) {
return (BlockData) blockInfo.getA();
}
}
}
return null;
}
@Override
public int getHeightFromTimestamp(long timestamp) throws DataException {
String sql = "SELECT height FROM BlockArchive WHERE minted_when <= ? ORDER BY minted_when DESC, height DESC LIMIT 1";
try (ResultSet resultSet = this.repository.checkedExecute(sql, timestamp)) {
if (resultSet == null) {
return 0;
}
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Error fetching height from BlockArchive repository", e);
}
}
@Override
public List<BlockSummaryData> getBlockSummariesBySigner(byte[] signerPublicKey, Integer limit, Integer offset, Boolean reverse) throws DataException {
StringBuilder sql = new StringBuilder(512);
sql.append("SELECT signature, height, BlockArchive.minter FROM ");
// List of minter account's public key and reward-share public keys with minter's public key
sql.append("(SELECT * FROM (VALUES (CAST(? AS QortalPublicKey))) UNION (SELECT reward_share_public_key FROM RewardShares WHERE minter_public_key = ?)) AS PublicKeys (public_key) ");
// Match BlockArchive blocks signed with public key from above list
sql.append("JOIN BlockArchive ON BlockArchive.minter = public_key ");
sql.append("ORDER BY BlockArchive.height ");
if (reverse != null && reverse)
sql.append("DESC ");
HSQLDBRepository.limitOffsetSql(sql, limit, offset);
List<BlockSummaryData> blockSummaries = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(sql.toString(), signerPublicKey, signerPublicKey)) {
if (resultSet == null)
return blockSummaries;
do {
byte[] signature = resultSet.getBytes(1);
int height = resultSet.getInt(2);
byte[] blockMinterPublicKey = resultSet.getBytes(3);
// Fetch additional info from the archive itself
int onlineAccountsCount = 0;
BlockData blockData = this.fromSignature(signature);
if (blockData != null) {
onlineAccountsCount = blockData.getOnlineAccountsCount();
}
BlockSummaryData blockSummary = new BlockSummaryData(height, signature, blockMinterPublicKey, onlineAccountsCount);
blockSummaries.add(blockSummary);
} while (resultSet.next());
return blockSummaries;
} catch (SQLException e) {
throw new DataException("Unable to fetch minter's block summaries from repository", e);
}
}
@Override
public List<BlockSignerSummary> getBlockSigners(List<String> addresses, Integer limit, Integer offset, Boolean reverse) throws DataException {
String subquerySql = "SELECT minter, COUNT(signature) FROM (" +
"(SELECT minter, signature FROM Blocks) UNION ALL (SELECT minter, signature FROM BlockArchive)" +
") GROUP BY minter";
StringBuilder sql = new StringBuilder(1024);
sql.append("SELECT DISTINCT block_minter, n_blocks, minter_public_key, minter, recipient FROM (");
sql.append(subquerySql);
sql.append(") AS Minters (block_minter, n_blocks) LEFT OUTER JOIN RewardShares ON reward_share_public_key = block_minter ");
if (addresses != null && !addresses.isEmpty()) {
sql.append(" LEFT OUTER JOIN Accounts AS BlockMinterAccounts ON BlockMinterAccounts.public_key = block_minter ");
sql.append(" LEFT OUTER JOIN Accounts AS RewardShareMinterAccounts ON RewardShareMinterAccounts.public_key = minter_public_key ");
sql.append(" JOIN (VALUES ");
final int addressesSize = addresses.size();
for (int ai = 0; ai < addressesSize; ++ai) {
if (ai != 0)
sql.append(", ");
sql.append("(?)");
}
sql.append(") AS FilterAccounts (account) ");
sql.append(" ON FilterAccounts.account IN (recipient, BlockMinterAccounts.account, RewardShareMinterAccounts.account) ");
} else {
addresses = Collections.emptyList();
}
sql.append("ORDER BY n_blocks ");
if (reverse != null && reverse)
sql.append("DESC ");
HSQLDBRepository.limitOffsetSql(sql, limit, offset);
List<BlockSignerSummary> summaries = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(sql.toString(), addresses.toArray())) {
if (resultSet == null)
return summaries;
do {
byte[] blockMinterPublicKey = resultSet.getBytes(1);
int nBlocks = resultSet.getInt(2);
// May not be present if no reward-share:
byte[] mintingAccountPublicKey = resultSet.getBytes(3);
String minterAccount = resultSet.getString(4);
String recipientAccount = resultSet.getString(5);
BlockSignerSummary blockSignerSummary;
if (recipientAccount == null)
blockSignerSummary = new BlockSignerSummary(blockMinterPublicKey, nBlocks);
else
blockSignerSummary = new BlockSignerSummary(blockMinterPublicKey, nBlocks, mintingAccountPublicKey, minterAccount, recipientAccount);
summaries.add(blockSignerSummary);
} while (resultSet.next());
return summaries;
} catch (SQLException e) {
throw new DataException("Unable to fetch block minters from repository", e);
}
}
@Override
public int getBlockArchiveHeight() throws DataException {
String sql = "SELECT block_archive_height FROM DatabaseInfo";
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
if (resultSet == null)
return 0;
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Unable to fetch block archive height from repository", e);
}
}
@Override
public void setBlockArchiveHeight(int archiveHeight) throws DataException {
// trimHeightsLock is to prevent concurrent update on DatabaseInfo
// that could result in "transaction rollback: serialization failure"
synchronized (this.repository.trimHeightsLock) {
String updateSql = "UPDATE DatabaseInfo SET block_archive_height = ?";
try {
this.repository.executeCheckedUpdate(updateSql, archiveHeight);
this.repository.saveChanges();
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to set block archive height in repository", e);
}
}
}
@Override
public BlockArchiveData getBlockArchiveDataForSignature(byte[] signature) throws DataException {
String sql = "SELECT height, signature, minted_when, minter FROM BlockArchive WHERE signature = ? LIMIT 1";
try (ResultSet resultSet = this.repository.checkedExecute(sql, signature)) {
if (resultSet == null) {
return null;
}
int height = resultSet.getInt(1);
byte[] sig = resultSet.getBytes(2);
long timestamp = resultSet.getLong(3);
byte[] minterPublicKey = resultSet.getBytes(4);
return new BlockArchiveData(sig, height, timestamp, minterPublicKey);
} catch (SQLException e) {
throw new DataException("Error fetching height from BlockArchive repository", e);
}
}
@Override
public void save(BlockArchiveData blockArchiveData) throws DataException {
HSQLDBSaver saveHelper = new HSQLDBSaver("BlockArchive");
saveHelper.bind("signature", blockArchiveData.getSignature())
.bind("height", blockArchiveData.getHeight())
.bind("minted_when", blockArchiveData.getTimestamp())
.bind("minter", blockArchiveData.getMinterPublicKey());
try {
saveHelper.execute(this.repository);
} catch (SQLException e) {
throw new DataException("Unable to save BlockArchiveData into BlockArchive repository", e);
}
}
@Override
public void delete(BlockArchiveData blockArchiveData) throws DataException {
try {
this.repository.delete("BlockArchive",
"signature = ?", blockArchiveData.getSignature());
} catch (SQLException e) {
throw new DataException("Unable to delete BlockArchiveData from BlockArchive repository", e);
}
}
}
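
To illustrate getHeightFromTimestamp() above: the minted_when <= ? ... LIMIT 1 query selects the highest archived block minted at or before the passed timestamp, letting a caller map wall-clock time to chain height. A hedged sketch (repository wiring assumed):

// Sketch: find the archived chain height as of 24 hours ago.
long oneDayAgo = System.currentTimeMillis() - 24 * 60 * 60 * 1000L;
int height = repository.getBlockArchiveRepository().getHeightFromTimestamp(oneDayAgo);
if (height == 0) {
    // No archived block was minted at or before that time
}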

View File

@@ -10,6 +10,7 @@ import org.qortal.api.model.BlockSignerSummary;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.block.BlockTransactionData;
import org.qortal.data.block.BlockArchiveData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.BlockRepository;
import org.qortal.repository.DataException;
@@ -382,86 +383,6 @@ public class HSQLDBBlockRepository implements BlockRepository {
}
}
@Override
public List<BlockSummaryData> getBlockSummaries(Integer startHeight, Integer endHeight, Integer count) throws DataException {
StringBuilder sql = new StringBuilder(512);
List<Object> bindParams = new ArrayList<>();
sql.append("SELECT signature, height, minter, online_accounts_count, minted_when, transaction_count ");
/*
* start end count result
* 10 40 null blocks 10 to 39 (excludes end block, ignore count)
*
* null null null blocks 1 to 50 (assume count=50, maybe start=1)
* 30 null null blocks 30 to 79 (assume count=50)
* 30 null 10 blocks 30 to 39
*
* null null 50 last 50 blocks? so if max(blocks.height) is 200, then blocks 151 to 200
* null 200 null blocks 150 to 199 (excludes end block, assume count=50)
* null 200 10 blocks 190 to 199 (excludes end block)
*/
if (startHeight != null && endHeight != null) {
sql.append("FROM Blocks ");
sql.append("WHERE height BETWEEN ? AND ?");
bindParams.add(startHeight);
bindParams.add(Integer.valueOf(endHeight - 1));
} else if (endHeight != null || (startHeight == null && count != null)) {
// we are going to return blocks from the end of the chain
if (count == null)
count = 50;
if (endHeight == null) {
sql.append("FROM (SELECT height FROM Blocks ORDER BY height DESC LIMIT 1) AS MaxHeights (max_height) ");
sql.append("JOIN Blocks ON height BETWEEN (max_height - ? + 1) AND max_height ");
bindParams.add(count);
} else {
sql.append("FROM Blocks ");
sql.append("WHERE height BETWEEN ? AND ?");
bindParams.add(Integer.valueOf(endHeight - count));
bindParams.add(Integer.valueOf(endHeight - 1));
}
} else {
// we are going to return blocks from the start of the chain
if (startHeight == null)
startHeight = 1;
if (count == null)
count = 50;
sql.append("FROM Blocks ");
sql.append("WHERE height BETWEEN ? AND ?");
bindParams.add(startHeight);
bindParams.add(Integer.valueOf(startHeight + count - 1));
}
List<BlockSummaryData> blockSummaries = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(sql.toString(), bindParams.toArray())) {
if (resultSet == null)
return blockSummaries;
do {
byte[] signature = resultSet.getBytes(1);
int height = resultSet.getInt(2);
byte[] minterPublicKey = resultSet.getBytes(3);
int onlineAccountsCount = resultSet.getInt(4);
long timestamp = resultSet.getLong(5);
int transactionCount = resultSet.getInt(6);
BlockSummaryData blockSummary = new BlockSummaryData(height, signature, minterPublicKey, onlineAccountsCount,
timestamp, transactionCount);
blockSummaries.add(blockSummary);
} while (resultSet.next());
return blockSummaries;
} catch (SQLException e) {
throw new DataException("Unable to fetch height-ranged block summaries from repository", e);
}
}
@Override
public int getOnlineAccountsSignaturesTrimHeight() throws DataException {
String sql = "SELECT online_signatures_trim_height FROM DatabaseInfo";
@@ -509,6 +430,53 @@ public class HSQLDBBlockRepository implements BlockRepository {
}
}
@Override
public int getBlockPruneHeight() throws DataException {
String sql = "SELECT block_prune_height FROM DatabaseInfo";
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
if (resultSet == null)
return 0;
return resultSet.getInt(1);
} catch (SQLException e) {
throw new DataException("Unable to fetch block prune height from repository", e);
}
}
@Override
public void setBlockPruneHeight(int pruneHeight) throws DataException {
// trimHeightsLock is to prevent concurrent update on DatabaseInfo
// that could result in "transaction rollback: serialization failure"
synchronized (this.repository.trimHeightsLock) {
String updateSql = "UPDATE DatabaseInfo SET block_prune_height = ?";
try {
this.repository.executeCheckedUpdate(updateSql, pruneHeight);
this.repository.saveChanges();
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to set block prune height in repository", e);
}
}
}
@Override
public int pruneBlocks(int minHeight, int maxHeight) throws DataException {
// Don't prune the genesis block
if (minHeight <= 1) {
minHeight = 2;
}
try {
return this.repository.delete("Blocks", "height BETWEEN ? AND ?", minHeight, maxHeight);
} catch (SQLException e) {
throw new DataException("Unable to prune blocks from repository", e);
}
}
@Override
public BlockData getDetachedBlockSignature(int startHeight) throws DataException {
String sql = "SELECT " + BLOCK_DB_COLUMNS + " FROM Blocks "

View File

@@ -0,0 +1,88 @@
package org.qortal.repository.hsqldb;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.gui.SplashFrame;
import org.qortal.repository.BlockArchiveWriter;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.transform.TransformationException;
import java.io.IOException;
/**
*
* When switching to an archiving node, we need to archive most of the database contents.
* This involves copying its data into flat files.
* If we do this entirely as a background process, it is very slow and can interfere with syncing.
* However, if we take the approach of doing this in bulk, before starting up the rest of the
* processes, this makes it much faster and less invasive.
*
 * From that point, the original background archiving process will run, but can be dialled right down
 * so as not to interfere with syncing.
*
*/
public class HSQLDBDatabaseArchiving {
private static final Logger LOGGER = LogManager.getLogger(HSQLDBDatabaseArchiving.class);
public static boolean buildBlockArchive(Repository repository, long fileSizeTarget) throws DataException {
// Only build the archive if we haven't already got one that is up to date
boolean upToDate = BlockArchiveWriter.isArchiverUpToDate(repository);
if (upToDate) {
// Already archived
return false;
}
LOGGER.info("Building block archive - this process could take a while... (approx. 15 mins on high spec)");
SplashFrame.getInstance().updateStatus("Building block archive (takes 60+ mins)...");
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
int startHeight = 0;
while (!Controller.isStopping()) {
try {
BlockArchiveWriter writer = new BlockArchiveWriter(startHeight, maximumArchiveHeight, repository);
writer.setFileSizeTarget(fileSizeTarget);
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
switch (result) {
case OK:
// Increment block archive height
startHeight = writer.getLastWrittenHeight() + 1;
repository.getBlockArchiveRepository().setBlockArchiveHeight(startHeight);
repository.saveChanges();
break;
case STOPPING:
return false;
case NOT_ENOUGH_BLOCKS:
// We've reached the limit of the blocks we can archive
// Return from the whole method
return true;
case BLOCK_NOT_FOUND:
// We tried to archive a block that didn't exist. This is a major failure and likely means
// that a bootstrap or re-sync is needed. Return from the method
LOGGER.info("Error: block not found when building archive. If this error persists, " +
"a bootstrap or re-sync may be needed.");
return false;
}
} catch (IOException | TransformationException | InterruptedException e) {
LOGGER.info("Caught exception when creating block cache", e);
return false;
}
}
// If we got this far then something went wrong (most likely the app is stopping)
return false;
}
}
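
A sketch of wiring this bulk step into node startup, before syncing begins; the call site and the per-file size target are assumptions, not taken from this diff.

    try (final Repository repository = RepositoryManager.getRepository()) {
        long fileSizeTarget = 100 * 1024 * 1024L; // hypothetical ~100 MiB per archive file
        boolean built = HSQLDBDatabaseArchiving.buildBlockArchive(repository, fileSizeTarget);
        // Whether or not this completed, the regular background archiver takes over afterwards
    } catch (DataException e) {
        // Non-fatal here: the background archiving process can retry later
    }
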

View File

@@ -0,0 +1,332 @@
package org.qortal.repository.hsqldb;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.gui.SplashFrame;
import org.qortal.repository.BlockArchiveWriter;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.settings.Settings;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.concurrent.TimeoutException;
/**
*
* When switching from a full node to a pruning node, we need to delete most of the database contents.
* If we do this entirely as a background process, it is very slow and can interfere with syncing.
* However, if we take the approach of transferring only the necessary rows to a new table and then
* deleting the original table, this makes the process much faster. It was taking several days to
* delete the AT states in the background, but only a couple of minutes to copy them to a new table.
*
 * The trade-off is that we have to go through a form of "reshape" when starting the app for the first
 * time after enabling pruning mode. But given that this is an opt-in mode, I don't think it will be
 * a problem.
*
* Once the pruning is complete, it automatically performs a CHECKPOINT DEFRAG in order to
* shrink the database file size down to a fraction of what it was before.
*
 * From this point, the original background process will run, but can be dialled right down so as not
 * to interfere with syncing.
*
*/
public class HSQLDBDatabasePruning {
private static final Logger LOGGER = LogManager.getLogger(HSQLDBDatabasePruning.class);
public static boolean pruneATStates(HSQLDBRepository repository) throws SQLException, DataException {
// Only bulk prune AT states if we have never done so before
int pruneHeight = repository.getATRepository().getAtPruneHeight();
if (pruneHeight > 0) {
// Already pruned AT states
return false;
}
if (Settings.getInstance().isArchiveEnabled()) {
// Only proceed if we can see that the archiver has already finished
// This way, if the archiver failed for any reason, we can prune once it has had
// some opportunities to try again
boolean upToDate = BlockArchiveWriter.isArchiverUpToDate(repository);
if (!upToDate) {
return false;
}
}
LOGGER.info("Starting bulk prune of AT states - this process could take a while... " +
"(approx. 2 mins on high spec, or upwards of 30 mins in some cases)");
SplashFrame.getInstance().updateStatus("Pruning database (takes up to 30 mins)...");
// Create new AT-states table to hold smaller dataset
repository.executeCheckedUpdate("DROP TABLE IF EXISTS ATStatesNew");
repository.executeCheckedUpdate("CREATE TABLE ATStatesNew ("
+ "AT_address QortalAddress, height INTEGER NOT NULL, state_hash ATStateHash NOT NULL, "
+ "fees QortalAmount NOT NULL, is_initial BOOLEAN NOT NULL, sleep_until_message_timestamp BIGINT, "
+ "PRIMARY KEY (AT_address, height), "
+ "FOREIGN KEY (AT_address) REFERENCES ATs (AT_address) ON DELETE CASCADE)");
repository.executeCheckedUpdate("SET TABLE ATStatesNew NEW SPACE");
repository.executeCheckedUpdate("CHECKPOINT");
// Add a height index
LOGGER.info("Adding index to AT states table...");
repository.executeCheckedUpdate("CREATE INDEX IF NOT EXISTS ATStatesNewHeightIndex ON ATStatesNew (height)");
repository.executeCheckedUpdate("CHECKPOINT");
// Find our latest block
BlockData latestBlock = repository.getBlockRepository().getLastBlock();
if (latestBlock == null) {
LOGGER.info("Unable to determine blockchain height, necessary for bulk block pruning");
return false;
}
// Calculate some constants for later use
final int blockchainHeight = latestBlock.getHeight();
int maximumBlockToTrim = blockchainHeight - Settings.getInstance().getPruneBlockLimit();
if (Settings.getInstance().isArchiveEnabled()) {
// Archive mode - don't prune anything that hasn't been archived yet
maximumBlockToTrim = Math.min(maximumBlockToTrim, repository.getBlockArchiveRepository().getBlockArchiveHeight() - 1);
}
final int endHeight = blockchainHeight;
final int blockStep = 10000;
// It's essential that we rebuild the latest AT states here, as we are using this data in the next query.
// Failing to do this will result in important AT states being deleted, rendering the database unusable.
repository.getATRepository().rebuildLatestAtStates();
// Loop through all the LatestATStates and copy them to the new table
LOGGER.info("Copying AT states...");
for (int height = 0; height < endHeight; height += blockStep) {
final int batchEndHeight = height + blockStep - 1;
//LOGGER.info(String.format("Copying AT states between %d and %d...", height, batchEndHeight));
String sql = "SELECT height, AT_address FROM LatestATStates WHERE height BETWEEN ? AND ?";
try (ResultSet latestAtStatesResultSet = repository.checkedExecute(sql, height, batchEndHeight)) {
if (latestAtStatesResultSet != null) {
do {
int latestAtHeight = latestAtStatesResultSet.getInt(1);
String latestAtAddress = latestAtStatesResultSet.getString(2);
// Copy this latest ATState to the new table
//LOGGER.info(String.format("Copying AT %s at height %d...", latestAtAddress, latestAtHeight));
try {
String updateSql = "INSERT INTO ATStatesNew ("
+ "SELECT AT_address, height, state_hash, fees, is_initial, sleep_until_message_timestamp "
+ "FROM ATStates "
+ "WHERE height = ? AND AT_address = ?)";
repository.executeCheckedUpdate(updateSql, latestAtHeight, latestAtAddress);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to copy ATStates", e);
}
// If this batch includes blocks after the maximum block to trim, we will need to copy
// each of its AT states above maximumBlockToTrim as they are considered "recent". We
// need to do this for _all_ AT states in these blocks, regardless of their latest state.
if (batchEndHeight >= maximumBlockToTrim) {
// Now copy this AT's states for each recent block they are present in
for (int i = maximumBlockToTrim; i < endHeight; i++) {
if (latestAtHeight < i) {
// This AT finished before this block so there is nothing to copy
continue;
}
//LOGGER.info(String.format("Copying recent AT %s at height %d...", latestAtAddress, i));
try {
// Copy each LatestATState to the new table
String updateSql = "INSERT IGNORE INTO ATStatesNew ("
+ "SELECT AT_address, height, state_hash, fees, is_initial, sleep_until_message_timestamp "
+ "FROM ATStates "
+ "WHERE height = ? AND AT_address = ?)";
repository.executeCheckedUpdate(updateSql, i, latestAtAddress);
} catch (SQLException e) {
repository.examineException(e);
throw new DataException("Unable to copy ATStates", e);
}
}
}
repository.saveChanges();
} while (latestAtStatesResultSet.next());
}
} catch (SQLException e) {
throw new DataException("Unable to copy AT states", e);
}
}
// Finally, drop the original table and rename
LOGGER.info("Deleting old AT states...");
repository.executeCheckedUpdate("DROP TABLE ATStates");
repository.executeCheckedUpdate("ALTER TABLE ATStatesNew RENAME TO ATStates");
repository.executeCheckedUpdate("ALTER INDEX ATStatesNewHeightIndex RENAME TO ATStatesHeightIndex");
repository.executeCheckedUpdate("CHECKPOINT");
// Update the prune height
int nextPruneHeight = maximumBlockToTrim + 1;
repository.getATRepository().setAtPruneHeight(nextPruneHeight);
repository.saveChanges();
repository.executeCheckedUpdate("CHECKPOINT");
// Now prune/trim the ATStatesData, as this currently goes back over a month
return HSQLDBDatabasePruning.pruneATStateData(repository);
}
/*
* Bulk prune ATStatesData to catch up with the now pruned ATStates table
* This uses the existing AT States trimming code but with a much higher end block
*/
private static boolean pruneATStateData(Repository repository) throws DataException {
if (Settings.getInstance().isArchiveEnabled()) {
// Don't prune ATStatesData in archive mode
return true;
}
BlockData latestBlock = repository.getBlockRepository().getLastBlock();
if (latestBlock == null) {
LOGGER.info("Unable to determine blockchain height, necessary for bulk ATStatesData pruning");
return false;
}
final int blockchainHeight = latestBlock.getHeight();
int upperPrunableHeight = blockchainHeight - Settings.getInstance().getPruneBlockLimit();
// ATStateData is already trimmed - so carry on from where we left off in the past
int pruneStartHeight = repository.getATRepository().getAtTrimHeight();
LOGGER.info("Starting bulk prune of AT states data - this process could take a while... (approx. 3 mins on high spec)");
while (pruneStartHeight < upperPrunableHeight) {
// Prune all AT state data up until our latest minus pruneBlockLimit (or our archive height)
if (Controller.isStopping()) {
return false;
}
// Override batch size in the settings because this is a one-off process
final int batchSize = 1000;
final int rowLimitPerBatch = 50000;
int upperBatchHeight = pruneStartHeight + batchSize;
int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
LOGGER.trace(String.format("Pruning AT states data between %d and %d...", pruneStartHeight, upperPruneHeight));
int numATStatesPruned = repository.getATRepository().trimAtStates(pruneStartHeight, upperPruneHeight, rowLimitPerBatch);
repository.saveChanges();
if (numATStatesPruned > 0) {
LOGGER.trace(String.format("Pruned %d AT states data rows between blocks %d and %d",
numATStatesPruned, pruneStartHeight, upperPruneHeight));
} else {
repository.getATRepository().setAtTrimHeight(upperBatchHeight);
// No need to rebuild the latest AT states as we aren't currently synchronizing
repository.saveChanges();
LOGGER.debug(String.format("Bumping AT states trim height to %d", upperBatchHeight));
// Can we move onto next batch?
if (upperPrunableHeight > upperBatchHeight) {
pruneStartHeight = upperBatchHeight;
}
else {
// We've finished pruning
break;
}
}
}
return true;
}
public static boolean pruneBlocks(Repository repository) throws SQLException, DataException {
// Only bulk prune blocks if we have never done so before
int pruneHeight = repository.getBlockRepository().getBlockPruneHeight();
if (pruneHeight > 0) {
// Already pruned blocks
return false;
}
if (Settings.getInstance().isArchiveEnabled()) {
// Only proceed if we can see that the archiver has already finished
// This way, if the archiver failed for any reason, we can prune once it has had
// some opportunities to try again
boolean upToDate = BlockArchiveWriter.isArchiverUpToDate(repository);
if (!upToDate) {
return false;
}
}
BlockData latestBlock = repository.getBlockRepository().getLastBlock();
if (latestBlock == null) {
LOGGER.info("Unable to determine blockchain height, necessary for bulk block pruning");
return false;
}
final int blockchainHeight = latestBlock.getHeight();
int upperPrunableHeight = blockchainHeight - Settings.getInstance().getPruneBlockLimit();
int pruneStartHeight = 0;
if (Settings.getInstance().isArchiveEnabled()) {
// Archive mode - don't prune anything that hasn't been archived yet
upperPrunableHeight = Math.min(upperPrunableHeight, repository.getBlockArchiveRepository().getBlockArchiveHeight() - 1);
}
LOGGER.info("Starting bulk prune of blocks - this process could take a while... (approx. 5 mins on high spec)");
while (pruneStartHeight < upperPrunableHeight) {
// Prune all blocks up until our latest minus pruneBlockLimit
int upperBatchHeight = pruneStartHeight + Settings.getInstance().getBlockPruneBatchSize();
int upperPruneHeight = Math.min(upperBatchHeight, upperPrunableHeight);
LOGGER.info(String.format("Pruning blocks between %d and %d...", pruneStartHeight, upperPruneHeight));
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(pruneStartHeight, upperPruneHeight);
repository.saveChanges();
if (numBlocksPruned > 0) {
LOGGER.info(String.format("Pruned %d block%s between %d and %d",
numBlocksPruned, (numBlocksPruned != 1 ? "s" : ""),
pruneStartHeight, upperPruneHeight));
} else {
final int nextPruneHeight = upperPruneHeight + 1;
repository.getBlockRepository().setBlockPruneHeight(nextPruneHeight);
repository.saveChanges();
LOGGER.debug(String.format("Bumping block base prune height to %d", nextPruneHeight));
// Can we move onto next batch?
if (upperPrunableHeight > nextPruneHeight) {
pruneStartHeight = nextPruneHeight;
}
else {
// We've finished pruning
break;
}
}
}
return true;
}
public static void performMaintenance(Repository repository) throws SQLException, DataException {
try {
SplashFrame.getInstance().updateStatus("Performing maintenance...");
// Timeout if the database isn't ready for backing up after 5 minutes
// Nothing else should be using the db at this point, so a timeout shouldn't happen
long timeout = 5 * 60 * 1000L;
repository.performPeriodicMaintenance(timeout);
} catch (TimeoutException e) {
LOGGER.info("Attempt to perform maintenance failed due to timeout: {}", e.getMessage());
}
}
}
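
Taken together with the archiving class above, the implied "reshape" ordering on first startup is: archive (if enabled), then bulk prune, then the one-off maintenance pass. A sketch, with the call site and size target assumed:

    try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
        if (Settings.getInstance().isArchiveEnabled()) {
            HSQLDBDatabaseArchiving.buildBlockArchive(repository, 100 * 1024 * 1024L); // size target assumed
        }
        if (Settings.getInstance().isTopOnly()) {
            HSQLDBDatabasePruning.pruneATStates(repository); // also catches up ATStatesData when archiving is off
            HSQLDBDatabasePruning.pruneBlocks(repository);
            // One-off CHECKPOINT DEFRAG to shrink the database file once pruning completes
            HSQLDBDatabasePruning.performMaintenance(repository);
        }
    } catch (SQLException | DataException e) {
        // If a bulk step failed before its height was bumped, it is attempted again on next startup
    }
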

View File

@@ -9,7 +9,9 @@ import java.util.stream.Collectors;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.controller.tradebot.BitcoinACCTv1TradeBot;
import org.qortal.gui.SplashFrame;
public class HSQLDBDatabaseUpdates {
@@ -27,9 +29,14 @@ public class HSQLDBDatabaseUpdates {
public static boolean updateDatabase(Connection connection) throws SQLException {
final boolean wasPristine = fetchDatabaseVersion(connection) == 0;
SplashFrame.getInstance().updateStatus("Upgrading database, please wait...");
while (databaseUpdating(connection, wasPristine))
incrementDatabaseVersion(connection);
String text = String.format("Starting Qortal Core v%s...", Controller.getInstance().getVersionStringWithoutPrefix());
SplashFrame.getInstance().updateStatus(text);
return wasPristine;
}
@@ -867,6 +874,30 @@ public class HSQLDBDatabaseUpdates {
stmt.execute("CHECKPOINT");
break;
}
case 35:
// Support for pruning
stmt.execute("ALTER TABLE DatabaseInfo ADD AT_prune_height INT NOT NULL DEFAULT 0");
stmt.execute("ALTER TABLE DatabaseInfo ADD block_prune_height INT NOT NULL DEFAULT 0");
break;
case 36:
// Block archive support
stmt.execute("ALTER TABLE DatabaseInfo ADD block_archive_height INT NOT NULL DEFAULT 0");
// Block archive (lookup table to map signature to height)
// Actual data is stored in archive files outside of the database
stmt.execute("CREATE TABLE BlockArchive (signature BlockSignature, height INTEGER NOT NULL, "
+ "minted_when EpochMillis NOT NULL, minter QortalPublicKey NOT NULL, "
+ "PRIMARY KEY (signature))");
// For finding blocks by height.
stmt.execute("CREATE INDEX BlockArchiveHeightIndex ON BlockArchive (height)");
// For finding blocks by the account that minted them.
stmt.execute("CREATE INDEX BlockArchiveMinterIndex ON BlockArchive (minter)");
// For finding blocks by timestamp or finding height of latest block immediately before timestamp, etc.
stmt.execute("CREATE INDEX BlockArchiveTimestampHeightIndex ON BlockArchive (minted_when, height)");
// Use a separate table space as this table will be very large.
stmt.execute("SET TABLE BlockArchive NEW SPACE");
break;
default:
// nothing to do

View File

@@ -0,0 +1,298 @@
package org.qortal.repository.hsqldb;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import org.qortal.data.account.MintingAccountData;
import org.qortal.data.crosschain.TradeBotData;
import org.qortal.repository.Bootstrap;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.settings.Settings;
import org.qortal.utils.Base58;
import org.qortal.utils.Triple;
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Iterator;
import java.util.List;
public class HSQLDBImportExport {
private static final Logger LOGGER = LogManager.getLogger(Bootstrap.class);
public static void backupTradeBotStates(Repository repository) throws DataException {
HSQLDBImportExport.backupCurrentTradeBotStates(repository);
HSQLDBImportExport.backupArchivedTradeBotStates(repository);
LOGGER.info("Exported sensitive/node-local data: trade bot states");
}
public static void backupMintingAccounts(Repository repository) throws DataException {
HSQLDBImportExport.backupCurrentMintingAccounts(repository);
LOGGER.info("Exported sensitive/node-local data: minting accounts");
}
/* Trade bot states */
/**
* Backs up the trade bot states currently in the repository, without combining them with past ones
* @param repository
* @throws DataException
*/
private static void backupCurrentTradeBotStates(Repository repository) throws DataException {
try {
Path backupDirectory = HSQLDBImportExport.getExportDirectory(true);
// Load current trade bot data
List<TradeBotData> allTradeBotData = repository.getCrossChainRepository().getAllTradeBotData();
JSONArray currentTradeBotDataJson = new JSONArray();
for (TradeBotData tradeBotData : allTradeBotData) {
JSONObject tradeBotDataJson = tradeBotData.toJson();
currentTradeBotDataJson.put(tradeBotDataJson);
}
// Wrap current trade bot data in an object to indicate the type
JSONObject currentTradeBotDataJsonWrapper = new JSONObject();
currentTradeBotDataJsonWrapper.put("type", "tradeBotStates");
currentTradeBotDataJsonWrapper.put("dataset", "current");
currentTradeBotDataJsonWrapper.put("data", currentTradeBotDataJson);
// Write current trade bot data (just the ones currently in the database)
String fileName = Paths.get(backupDirectory.toString(), "TradeBotStates.json").toString();
try (FileWriter writer = new FileWriter(fileName)) {
writer.write(currentTradeBotDataJsonWrapper.toString(2));
}
} catch (DataException | IOException e) {
throw new DataException("Unable to export trade bot states from repository");
}
}
/**
* Backs up the trade bot states currently in the repository to a separate "archive" file,
* making sure to combine them with any unique states already present in the archive.
* @param repository
* @throws DataException
*/
private static void backupArchivedTradeBotStates(Repository repository) throws DataException {
try {
Path backupDirectory = HSQLDBImportExport.getExportDirectory(true);
// Load current trade bot data
List<TradeBotData> allTradeBotData = repository.getCrossChainRepository().getAllTradeBotData();
JSONArray allTradeBotDataJson = new JSONArray();
for (TradeBotData tradeBotData : allTradeBotData) {
JSONObject tradeBotDataJson = tradeBotData.toJson();
allTradeBotDataJson.put(tradeBotDataJson);
}
// We need to combine existing archived TradeBotStates data before overwriting
String fileName = Paths.get(backupDirectory.toString(), "TradeBotStatesArchive.json").toString();
File tradeBotStatesBackupFile = new File(fileName);
if (tradeBotStatesBackupFile.exists()) {
String jsonString = new String(Files.readAllBytes(Paths.get(fileName)));
Triple<String, String, JSONArray> parsedJSON = HSQLDBImportExport.parseJSONString(jsonString);
if (parsedJSON.getA() == null || parsedJSON.getC() == null) {
throw new DataException("Missing data when exporting archived trade bot states");
}
String type = parsedJSON.getA();
String dataset = parsedJSON.getB();
JSONArray data = parsedJSON.getC();
if (!type.equals("tradeBotStates") || !dataset.equals("archive")) {
throw new DataException("Format mismatch when exporting archived trade bot states");
}
Iterator<Object> iterator = data.iterator();
while(iterator.hasNext()) {
JSONObject existingTradeBotDataItem = (JSONObject)iterator.next();
String existingTradePrivateKey = (String) existingTradeBotDataItem.get("tradePrivateKey");
// Check if we already have an entry for this trade
boolean found = allTradeBotData.stream().anyMatch(tradeBotData -> Base58.encode(tradeBotData.getTradePrivateKey()).equals(existingTradePrivateKey));
if (!found)
// Add the data from the backup file to our "allTradeBotDataJson" array as it's not currently in the db
allTradeBotDataJson.put(existingTradeBotDataItem);
}
}
// Wrap all trade bot data in an object to indicate the type
JSONObject allTradeBotDataJsonWrapper = new JSONObject();
allTradeBotDataJsonWrapper.put("type", "tradeBotStates");
allTradeBotDataJsonWrapper.put("dataset", "archive");
allTradeBotDataJsonWrapper.put("data", allTradeBotDataJson);
// Write ALL trade bot data to archive (current plus states that are no longer in the database)
try (FileWriter writer = new FileWriter(fileName)) {
writer.write(allTradeBotDataJsonWrapper.toString(2));
}
} catch (DataException | IOException e) {
throw new DataException("Unable to export trade bot states from repository");
}
}
/* Minting accounts */
/**
* Backs up the minting accounts currently in the repository, without combining them with past ones
* @param repository
* @throws DataException
*/
private static void backupCurrentMintingAccounts(Repository repository) throws DataException {
try {
Path backupDirectory = HSQLDBImportExport.getExportDirectory(true);
// Load current minting account data
List<MintingAccountData> allMintingAccountData = repository.getAccountRepository().getMintingAccounts();
JSONArray currentMintingAccountJson = new JSONArray();
for (MintingAccountData mintingAccountData : allMintingAccountData) {
JSONObject mintingAccountDataJson = mintingAccountData.toJson();
currentMintingAccountJson.put(mintingAccountDataJson);
}
// Wrap current minting account data in an object to indicate the type
JSONObject currentMintingAccountDataJsonWrapper = new JSONObject();
currentMintingAccountDataJsonWrapper.put("type", "mintingAccounts");
currentMintingAccountDataJsonWrapper.put("dataset", "current");
currentMintingAccountDataJsonWrapper.put("data", currentMintingAccountJson);
// Write current minting account data (just the accounts currently in the database)
String fileName = Paths.get(backupDirectory.toString(), "MintingAccounts.json").toString();
try (FileWriter writer = new FileWriter(fileName)) {
writer.write(currentMintingAccountDataJsonWrapper.toString(2));
}
} catch (DataException | IOException e) {
throw new DataException("Unable to export minting accounts from repository");
}
}
/* Utils */
/**
* Imports data from supplied file
* Data type is loaded from the file itself, and if missing, TradeBotStates is assumed
*
* @param filename
* @param repository
* @throws DataException
* @throws IOException
*/
public static void importDataFromFile(String filename, Repository repository) throws DataException, IOException {
Path path = Paths.get(filename);
if (!path.toFile().exists()) {
throw new FileNotFoundException(String.format("File doesn't exist: %s", filename));
}
byte[] fileContents = Files.readAllBytes(path);
if (fileContents == null) {
throw new FileNotFoundException(String.format("Unable to read file contents: %s", filename));
}
LOGGER.info(String.format("Importing %s into repository ...", filename));
String jsonString = new String(fileContents);
Triple<String, String, JSONArray> parsedJSON = HSQLDBImportExport.parseJSONString(jsonString);
if (parsedJSON.getA() == null || parsedJSON.getC() == null) {
throw new DataException(String.format("Missing data when importing %s into repository", filename));
}
String type = parsedJSON.getA();
JSONArray data = parsedJSON.getC();
Iterator<Object> iterator = data.iterator();
while(iterator.hasNext()) {
JSONObject dataJsonObject = (JSONObject)iterator.next();
if (type.equals("tradeBotStates")) {
HSQLDBImportExport.importTradeBotDataJSON(dataJsonObject, repository);
}
else if (type.equals("mintingAccounts")) {
HSQLDBImportExport.importMintingAccountDataJSON(dataJsonObject, repository);
}
else {
throw new DataException(String.format("Unrecognized data type when importing %s into repository", filename));
}
}
LOGGER.info(String.format("Imported %s into repository from %s", type, filename));
}
private static void importTradeBotDataJSON(JSONObject tradeBotDataJson, Repository repository) throws DataException {
TradeBotData tradeBotData = TradeBotData.fromJson(tradeBotDataJson);
repository.getCrossChainRepository().save(tradeBotData);
}
private static void importMintingAccountDataJSON(JSONObject mintingAccountDataJson, Repository repository) throws DataException {
MintingAccountData mintingAccountData = MintingAccountData.fromJson(mintingAccountDataJson);
repository.getAccountRepository().save(mintingAccountData);
}
public static Path getExportDirectory(boolean createIfNotExists) throws DataException {
Path backupPath = Paths.get(Settings.getInstance().getExportPath());
if (createIfNotExists) {
// Create the qortal-backup folder if it doesn't exist
try {
Files.createDirectories(backupPath);
} catch (IOException e) {
LOGGER.info(String.format("Unable to create %s folder", backupPath.toString()));
throw new DataException(String.format("Unable to create %s folder", backupPath.toString()));
}
}
return backupPath;
}
/**
* Parses a JSON string and returns "data", "type", and "dataset" fields.
* In the case of legacy JSON files with no type, they are assumed to be TradeBotStates archives,
* as we had never implemented this for any other types.
*
* @param jsonString
* @return Triple<String, String, JSONArray> (type, dataset, data)
*/
private static Triple<String, String, JSONArray> parseJSONString(String jsonString) throws DataException {
String type = null;
String dataset = null;
JSONArray data = null;
try {
// Firstly try importing the new format
JSONObject jsonData = new JSONObject(jsonString);
if (jsonData != null && jsonData.getString("type") != null) {
type = jsonData.getString("type");
dataset = jsonData.getString("dataset");
data = jsonData.getJSONArray("data");
}
} catch (JSONException e) {
// Could be a legacy format which didn't contain a type or any other outer keys, so try importing that
// Treat these as TradeBotStates archives, given that this was the only type previously implemented
try {
type = "tradeBotStates";
dataset = "archive";
data = new JSONArray(jsonString);
} catch (JSONException e2) {
// Still failed, so give up
throw new DataException("Couldn't import JSON file");
}
}
return new Triple<>(type, dataset, data);
}
}
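
For reference, a sketch of the wrapper format this class writes and a round trip through it; the file path below is the default export location, and the caller-side saveChanges() is an assumption:

    // A TradeBotStates.json produced by backupCurrentTradeBotStates() looks roughly like:
    // {
    //   "type": "tradeBotStates",
    //   "dataset": "current",
    //   "data": [ { "tradePrivateKey": "...", ... } ]
    // }
    // Legacy files with no wrapper are treated as tradeBotStates archives, as noted above.
    try (final Repository repository = RepositoryManager.getRepository()) {
        HSQLDBImportExport.backupTradeBotStates(repository);
        HSQLDBImportExport.importDataFromFile("qortal-backup/TradeBotStates.json", repository);
        repository.saveChanges();
    } catch (DataException | IOException e) {
        // Export or import failed
    }
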

View File

@@ -2,7 +2,6 @@ package org.qortal.repository.hsqldb;
import java.awt.TrayIcon.MessageType;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.math.BigDecimal;
import java.nio.file.Files;
@@ -17,39 +16,20 @@ import java.sql.SQLException;
import java.sql.Savepoint;
import java.sql.Statement;
import java.util.*;
import java.util.concurrent.TimeoutException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Stream;
import org.json.JSONArray;
import org.json.JSONObject;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.crypto.Crypto;
import org.qortal.data.crosschain.TradeBotData;
import org.qortal.globalization.Translator;
import org.qortal.gui.SysTray;
import org.qortal.repository.ATRepository;
import org.qortal.repository.AccountRepository;
import org.qortal.repository.ArbitraryRepository;
import org.qortal.repository.AssetRepository;
import org.qortal.repository.BlockRepository;
import org.qortal.repository.ChatRepository;
import org.qortal.repository.CrossChainRepository;
import org.qortal.repository.DataException;
import org.qortal.repository.GroupRepository;
import org.qortal.repository.MessageRepository;
import org.qortal.repository.NameRepository;
import org.qortal.repository.NetworkRepository;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.repository.TransactionRepository;
import org.qortal.repository.VotingRepository;
import org.qortal.repository.*;
import org.qortal.repository.hsqldb.transaction.HSQLDBTransactionRepository;
import org.qortal.settings.Settings;
import org.qortal.utils.Base58;
public class HSQLDBRepository implements Repository {
@@ -69,12 +49,14 @@ public class HSQLDBRepository implements Repository {
protected final Map<String, PreparedStatement> preparedStatementCache = new HashMap<>();
// We want the same object corresponding to the actual DB
protected final Object trimHeightsLock = RepositoryManager.getRepositoryFactory();
protected final Object latestATStatesLock = RepositoryManager.getRepositoryFactory();
private final ATRepository atRepository = new HSQLDBATRepository(this);
private final AccountRepository accountRepository = new HSQLDBAccountRepository(this);
private final ArbitraryRepository arbitraryRepository = new HSQLDBArbitraryRepository(this);
private final AssetRepository assetRepository = new HSQLDBAssetRepository(this);
private final BlockRepository blockRepository = new HSQLDBBlockRepository(this);
private final BlockArchiveRepository blockArchiveRepository = new HSQLDBBlockArchiveRepository(this);
private final ChatRepository chatRepository = new HSQLDBChatRepository(this);
private final CrossChainRepository crossChainRepository = new HSQLDBCrossChainRepository(this);
private final GroupRepository groupRepository = new HSQLDBGroupRepository(this);
@@ -142,6 +124,11 @@ public class HSQLDBRepository implements Repository {
return this.blockRepository;
}
@Override
public BlockArchiveRepository getBlockArchiveRepository() {
return this.blockArchiveRepository;
}
@Override
public ChatRepository getChatRepository() {
return this.chatRepository;
@@ -281,7 +268,7 @@ public class HSQLDBRepository implements Repository {
public void close() throws DataException {
// Already closed? No need to do anything but maybe report double-call
if (this.connection == null) {
LOGGER.warn("HSQLDBRepository.close() called when repository already closed", new Exception("Repository already closed"));
LOGGER.warn("HSQLDBRepository.close() called when repository already closed. This is expected when bootstrapping.");
return;
}
@@ -393,133 +380,104 @@ public class HSQLDBRepository implements Repository {
}
@Override
public void backup(boolean quick) throws DataException {
if (!quick)
// First perform a CHECKPOINT
public void backup(boolean quick, String name, Long timeout) throws DataException, TimeoutException {
synchronized (CHECKPOINT_LOCK) {
// We can only perform a CHECKPOINT if no other HSQLDB session is mid-transaction,
// otherwise the CHECKPOINT blocks for COMMITs and other threads can't open HSQLDB sessions
// due to HSQLDB blocking until CHECKPOINT finishes - i.e. deadlock.
// Since we don't want to give up too easily, it's best to wait until the other transaction
// count reaches zero, and then continue.
this.blockUntilNoOtherTransactions(timeout);
if (!quick)
// First perform a CHECKPOINT
try (Statement stmt = this.connection.createStatement()) {
LOGGER.info("Performing maintenance - this will take a while...");
stmt.execute("CHECKPOINT");
stmt.execute("CHECKPOINT DEFRAG");
LOGGER.info("Maintenance completed");
} catch (SQLException e) {
throw new DataException("Unable to prepare repository for backup");
}
// Clean out any previous backup
try {
String connectionUrl = this.connection.getMetaData().getURL();
String dbPathname = getDbPathname(connectionUrl);
if (dbPathname == null)
throw new DataException("Unable to locate repository for backup?");
// Doesn't really make sense to backup an in-memory database...
if (dbPathname.equals("mem")) {
LOGGER.debug("Ignoring request to backup in-memory repository!");
return;
}
String backupUrl = buildBackupUrl(dbPathname, name);
String backupPathname = getDbPathname(backupUrl);
if (backupPathname == null)
throw new DataException("Unable to determine location for repository backup?");
Path backupDirPath = Paths.get(backupPathname).getParent();
String backupDirPathname = backupDirPath.toString();
try (Stream<Path> paths = Files.walk(backupDirPath)) {
paths.sorted(Comparator.reverseOrder())
.map(Path::toFile)
.filter(file -> file.getPath().startsWith(backupDirPathname))
.forEach(File::delete);
}
} catch (NoSuchFileException e) {
// Nothing to remove
} catch (SQLException | IOException e) {
throw new DataException("Unable to remove previous repository backup");
}
// Actually create backup
try (Statement stmt = this.connection.createStatement()) {
stmt.execute("CHECKPOINT DEFRAG");
LOGGER.info("Backing up repository...");
stmt.execute(String.format("BACKUP DATABASE TO '%s/' BLOCKING AS FILES", name));
LOGGER.info("Backup completed");
} catch (SQLException e) {
throw new DataException("Unable to prepare repository for backup");
throw new DataException("Unable to backup repository");
}
// Clean out any previous backup
try {
String connectionUrl = this.connection.getMetaData().getURL();
String dbPathname = getDbPathname(connectionUrl);
if (dbPathname == null)
throw new DataException("Unable to locate repository for backup?");
// Doesn't really make sense to backup an in-memory database...
if (dbPathname.equals("mem")) {
LOGGER.debug("Ignoring request to backup in-memory repository!");
return;
}
String backupUrl = buildBackupUrl(dbPathname);
String backupPathname = getDbPathname(backupUrl);
if (backupPathname == null)
throw new DataException("Unable to determine location for repository backup?");
Path backupDirPath = Paths.get(backupPathname).getParent();
String backupDirPathname = backupDirPath.toString();
try (Stream<Path> paths = Files.walk(backupDirPath)) {
paths.sorted(Comparator.reverseOrder())
.map(Path::toFile)
.filter(file -> file.getPath().startsWith(backupDirPathname))
.forEach(File::delete);
}
} catch (NoSuchFileException e) {
// Nothing to remove
} catch (SQLException | IOException e) {
throw new DataException("Unable to remove previous repository backup");
}
// Actually create backup
try (Statement stmt = this.connection.createStatement()) {
stmt.execute("BACKUP DATABASE TO 'backup/' BLOCKING AS FILES");
} catch (SQLException e) {
throw new DataException("Unable to backup repository");
}
}
@Override
public void performPeriodicMaintenance() throws DataException {
// Defrag DB - takes a while!
try (Statement stmt = this.connection.createStatement()) {
LOGGER.info("performing maintenance - this will take a while");
stmt.execute("CHECKPOINT");
stmt.execute("CHECKPOINT DEFRAG");
LOGGER.info("maintenance completed");
} catch (SQLException e) {
throw new DataException("Unable to defrag repository");
public void performPeriodicMaintenance(Long timeout) throws DataException, TimeoutException {
synchronized (CHECKPOINT_LOCK) {
// We can only perform a CHECKPOINT if no other HSQLDB session is mid-transaction,
// otherwise the CHECKPOINT blocks for COMMITs and other threads can't open HSQLDB sessions
// due to HSQLDB blocking until CHECKPOINT finishes - i.e. deadlock.
// Since we don't want to give up too easily, it's best to wait until the other transaction
// count reaches zero, and then continue.
this.blockUntilNoOtherTransactions(timeout);
// Defrag DB - takes a while!
try (Statement stmt = this.connection.createStatement()) {
LOGGER.info("performing maintenance - this will take a while");
stmt.execute("CHECKPOINT");
stmt.execute("CHECKPOINT DEFRAG");
LOGGER.info("maintenance completed");
} catch (SQLException e) {
throw new DataException("Unable to defrag repository");
}
}
}
@Override
public void exportNodeLocalData() throws DataException {
// Create the qortal-backup folder if it doesn't exist
Path backupPath = Paths.get("qortal-backup");
try {
Files.createDirectories(backupPath);
} catch (IOException e) {
LOGGER.info("Unable to create backup folder");
throw new DataException("Unable to create backup folder");
}
try {
// Load trade bot data
List<TradeBotData> allTradeBotData = this.getCrossChainRepository().getAllTradeBotData();
JSONArray allTradeBotDataJson = new JSONArray();
for (TradeBotData tradeBotData : allTradeBotData) {
JSONObject tradeBotDataJson = tradeBotData.toJson();
allTradeBotDataJson.put(tradeBotDataJson);
}
// We need to combine existing TradeBotStates data before overwriting
String fileName = "qortal-backup/TradeBotStates.json";
File tradeBotStatesBackupFile = new File(fileName);
if (tradeBotStatesBackupFile.exists()) {
String jsonString = new String(Files.readAllBytes(Paths.get(fileName)));
JSONArray allExistingTradeBotData = new JSONArray(jsonString);
Iterator<Object> iterator = allExistingTradeBotData.iterator();
while(iterator.hasNext()) {
JSONObject existingTradeBotData = (JSONObject)iterator.next();
String existingTradePrivateKey = (String) existingTradeBotData.get("tradePrivateKey");
// Check if we already have an entry for this trade
boolean found = allTradeBotData.stream().anyMatch(tradeBotData -> Base58.encode(tradeBotData.getTradePrivateKey()).equals(existingTradePrivateKey));
if (found == false)
// We need to add this to our list
allTradeBotDataJson.put(existingTradeBotData);
}
}
FileWriter writer = new FileWriter(fileName);
writer.write(allTradeBotDataJson.toString());
writer.close();
LOGGER.info("Exported sensitive/node-local data: trade bot states");
} catch (DataException | IOException e) {
throw new DataException("Unable to export trade bot states from repository");
}
HSQLDBImportExport.backupTradeBotStates(this);
HSQLDBImportExport.backupMintingAccounts(this);
}
@Override
public void importDataFromFile(String filename) throws DataException {
LOGGER.info(() -> String.format("Importing data into repository from %s", filename));
try {
String jsonString = new String(Files.readAllBytes(Paths.get(filename)));
JSONArray tradeBotDataToImport = new JSONArray(jsonString);
Iterator<Object> iterator = tradeBotDataToImport.iterator();
while(iterator.hasNext()) {
JSONObject tradeBotDataJson = (JSONObject)iterator.next();
TradeBotData tradeBotData = TradeBotData.fromJson(tradeBotDataJson);
this.getCrossChainRepository().save(tradeBotData);
}
} catch (IOException e) {
throw new DataException("Unable to import sensitive/node-local trade bot states to repository: " + e.getMessage());
}
LOGGER.info(() -> String.format("Imported trade bot states into repository from %s", filename));
public void importDataFromFile(String filename) throws DataException, IOException {
HSQLDBImportExport.importDataFromFile(filename, this);
}
@Override
@@ -541,22 +499,22 @@ public class HSQLDBRepository implements Repository {
return matcher.group(2);
}
private static String buildBackupUrl(String dbPathname) {
private static String buildBackupUrl(String dbPathname, String backupName) {
Path oldRepoPath = Paths.get(dbPathname);
Path oldRepoDirPath = oldRepoPath.getParent();
Path oldRepoFilePath = oldRepoPath.getFileName();
// Try to open backup. We need to remove "create=true" and insert "backup" dir before final filename.
String backupUrlTemplate = "jdbc:hsqldb:file:%s%sbackup%s%s;create=false;hsqldb.full_log_replay=true";
return String.format(backupUrlTemplate, oldRepoDirPath.toString(), File.separator, File.separator, oldRepoFilePath.toString());
String backupUrlTemplate = "jdbc:hsqldb:file:%s%s%s%s%s;create=false;hsqldb.full_log_replay=true";
return String.format(backupUrlTemplate, oldRepoDirPath.toString(), File.separator, backupName, File.separator, oldRepoFilePath.toString());
}
/* package */ static void attemptRecovery(String connectionUrl) throws DataException {
/* package */ static void attemptRecovery(String connectionUrl, String name) throws DataException {
String dbPathname = getDbPathname(connectionUrl);
if (dbPathname == null)
throw new DataException("Unable to locate repository for backup?");
String backupUrl = buildBackupUrl(dbPathname);
String backupUrl = buildBackupUrl(dbPathname, name);
Path oldRepoDirPath = Paths.get(dbPathname).getParent();
// Attempt connection to backup to see if it is viable
@@ -1059,4 +1017,51 @@ public class HSQLDBRepository implements Repository {
return DEADLOCK_ERROR_CODE.equals(e.getErrorCode());
}
private int otherTransactionsCount() throws DataException {
// We can only perform a CHECKPOINT if no other HSQLDB session is mid-transaction,
// otherwise the CHECKPOINT blocks for COMMITs and other threads can't open HSQLDB sessions
// due to HSQLDB blocking until CHECKPOINT finishes - i.e. deadlock
String sql = "SELECT COUNT(*) "
+ "FROM Information_schema.system_sessions "
+ "WHERE transaction = TRUE AND session_id != ?";
try {
PreparedStatement pstmt = this.cachePreparedStatement(sql);
pstmt.setLong(1, this.sessionId);
if (!pstmt.execute())
throw new DataException("Unable to check repository session status");
try (ResultSet resultSet = pstmt.getResultSet()) {
if (resultSet == null || !resultSet.next())
// Failed to even find HSQLDB session info!
throw new DataException("No results when checking repository session status");
int transactionCount = resultSet.getInt(1);
return transactionCount;
}
} catch (SQLException e) {
throw new DataException("Unable to check repository session status", e);
}
}
private void blockUntilNoOtherTransactions(Long timeout) throws DataException, TimeoutException {
try {
long startTime = System.currentTimeMillis();
while (this.otherTransactionsCount() > 0) {
// Wait and try again
LOGGER.debug("Waiting for repository...");
Thread.sleep(1000L);
if (timeout != null) {
if (System.currentTimeMillis() - startTime >= timeout) {
throw new TimeoutException("Timed out waiting for repository to become available");
}
}
}
} catch (InterruptedException e) {
throw new DataException("Interrupted before repository became available");
}
}
}
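
A sketch of a caller using the new timeout-aware backup; the 60-second timeout is an example value:

    try (final Repository repository = RepositoryManager.getRepository()) {
        // quick=true skips the initial CHECKPOINT/DEFRAG; either way we first wait, up to the
        // timeout, for other HSQLDB sessions' transactions to finish (avoiding the deadlock above)
        repository.backup(true, "backup", 60 * 1000L);
    } catch (TimeoutException e) {
        // Another session stayed mid-transaction for the whole window; retry later
    } catch (DataException e) {
        // The backup itself failed
    }
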

View File

@@ -54,7 +54,7 @@ public class HSQLDBRepositoryFactory implements RepositoryFactory {
throw new DataException("Unable to read repository: " + e.getMessage(), e);
// Attempt recovery?
HSQLDBRepository.attemptRecovery(connectionUrl);
HSQLDBRepository.attemptRecovery(connectionUrl, "backup");
}
this.connectionPool = new HSQLDBPool(Settings.getInstance().getRepositoryConnectionPoolSize());

View File

@@ -5,6 +5,7 @@ import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;
import java.util.List;
import java.util.Locale;
import javax.xml.bind.JAXBContext;
@@ -89,6 +90,12 @@ public class Settings {
private long repositoryBackupInterval = 0; // ms
/** Whether to show a notification when we backup repository. */
private boolean showBackupNotification = false;
/** Minimum time between repository maintenance attempts (ms) */
private long repositoryMaintenanceMinInterval = 7 * 24 * 60 * 60 * 1000L; // 7 days (ms) default
/** Maximum time between repository maintenance attempts (ms) (0 if disabled). */
private long repositoryMaintenanceMaxInterval = 30 * 24 * 60 * 60 * 1000L; // 30 days (ms) default
/** Whether to show a notification when we run scheduled maintenance. */
private boolean showMaintenanceNotification = false;
/** How long between repository checkpoints (ms). */
private long repositoryCheckpointInterval = 60 * 60 * 1000L; // 1 hour (ms) default
/** Whether to show a notification when we perform repository 'checkpoint'. */
@@ -112,6 +119,36 @@ public class Settings {
* This has a significant effect on execution time. */
private int onlineSignaturesTrimBatchSize = 100; // blocks
/** Whether we should prune old data to reduce database size
* This prevents the node from being able to serve older blocks */
private boolean topOnly = false;
/** The amount of recent blocks we should keep when pruning */
private int pruneBlockLimit = 1450;
/** How often to attempt AT state pruning (ms). */
private long atStatesPruneInterval = 3219L; // milliseconds
/** Block height range to scan for prunable AT states.<br>
* This has a significant effect on execution time. */
private int atStatesPruneBatchSize = 25; // blocks
/** How often to attempt block pruning (ms). */
private long blockPruneInterval = 3219L; // milliseconds
/** Block height range to scan for prunable blocks.<br>
* This has a significant effect on execution time. */
private int blockPruneBatchSize = 10000; // blocks
/** Whether we should archive old data to reduce the database size */
private boolean archiveEnabled = true;
/** How often to attempt archiving (ms). */
private long archiveInterval = 7171L; // milliseconds
/** Whether to automatically bootstrap instead of syncing from genesis */
private boolean bootstrap = true;
// Peer-to-peer related
private boolean isTestNet = false;
/** Port number for inbound peer-to-peer connections. */
@@ -157,6 +194,19 @@ public class Settings {
private String repositoryPath = "db";
/** Repository connection pool size. Needs to be a bit bigger than maxNetworkThreadPoolSize */
private int repositoryConnectionPoolSize = 100;
private List<String> fixedNetwork;
// Export/import
private String exportPath = "qortal-backup";
// Bootstrap
private String bootstrapFilenamePrefix = "";
// Bootstrap sources
private String[] bootstrapHosts = new String[] {
"http://bootstrap.qortal.org",
"http://cinfu1.crowetic.com"
};
// Auto-update sources
private String[] autoUpdateRepos = new String[] {
@@ -476,6 +526,14 @@ public class Settings {
return this.repositoryConnectionPoolSize;
}
public String getExportPath() {
return this.exportPath;
}
public String getBootstrapFilenamePrefix() {
return this.bootstrapFilenamePrefix;
}
public boolean isAutoUpdateEnabled() {
return this.autoUpdateEnabled;
}
@@ -484,6 +542,10 @@ public class Settings {
return this.autoUpdateRepos;
}
public String[] getBootstrapHosts() {
return this.bootstrapHosts;
}
public String getListsPath() {
return this.listsPath;
}
@@ -504,6 +566,18 @@ public class Settings {
return this.showBackupNotification;
}
public long getRepositoryMaintenanceMinInterval() {
return this.repositoryMaintenanceMinInterval;
}
public long getRepositoryMaintenanceMaxInterval() {
return this.repositoryMaintenanceMaxInterval;
}
public boolean getShowMaintenanceNotification() {
return this.showMaintenanceNotification;
}
public long getRepositoryCheckpointInterval() {
return this.repositoryCheckpointInterval;
}
@@ -512,6 +586,10 @@ public class Settings {
return this.showCheckpointNotification;
}
public List<String> getFixedNetwork() {
return fixedNetwork;
}
public long getAtStatesMaxLifetime() {
return this.atStatesMaxLifetime;
}
@@ -536,4 +614,45 @@ public class Settings {
return this.onlineSignaturesTrimBatchSize;
}
public boolean isTopOnly() {
return this.topOnly;
}
public int getPruneBlockLimit() {
return this.pruneBlockLimit;
}
public long getAtStatesPruneInterval() {
return this.atStatesPruneInterval;
}
public int getAtStatesPruneBatchSize() {
return this.atStatesPruneBatchSize;
}
public long getBlockPruneInterval() {
return this.blockPruneInterval;
}
public int getBlockPruneBatchSize() {
return this.blockPruneBatchSize;
}
public boolean isArchiveEnabled() {
if (this.topOnly) {
return false;
}
return this.archiveEnabled;
}
public long getArchiveInterval() {
return this.archiveInterval;
}
public boolean getBootstrap() {
return this.bootstrap;
}
}
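
To illustrate the new fields, a hypothetical settings.json fragment (field names come from the code above; the one-to-one JSON mapping and the values are assumptions):

    {
      "topOnly": true,
      "pruneBlockLimit": 1450,
      "bootstrap": true,
      "bootstrapFilenamePrefix": "",
      "archiveEnabled": true,
      "exportPath": "qortal-backup"
    }

Note that isArchiveEnabled() returns false whenever topOnly is true, so a top-only node never archives regardless of the archiveEnabled value above.
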

View File

@@ -0,0 +1,78 @@
package org.qortal.utils;
import org.qortal.data.at.ATStateData;
import org.qortal.data.block.BlockData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.BlockArchiveReader;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import java.util.List;
public class BlockArchiveUtils {
/**
* importFromArchive
* <p>
* Reads the requested block range from the archive
* and imports the BlockData and AT state data hashes
* This can be used to convert a block archive back
* into the HSQLDB, in order to make it SQL-compatible
* again.
* <p>
* Note: calls discardChanges() and saveChanges(), so
* make sure that you commit any existing repository
* changes before calling this method.
*
* @param startHeight The earliest block to import
* @param endHeight The latest block to import
* @param repository A clean repository session
* @throws DataException
*/
public static void importFromArchive(int startHeight, int endHeight, Repository repository) throws DataException {
repository.discardChanges();
final int requestedRange = endHeight + 1 - startHeight;
List<Triple<BlockData, List<TransactionData>, List<ATStateData>>> blockInfoList =
BlockArchiveReader.getInstance().fetchBlocksFromRange(startHeight, endHeight);
// Ensure that we have received all of the requested blocks
if (blockInfoList == null || blockInfoList.isEmpty()) {
throw new IllegalStateException("No blocks found when importing from archive");
}
if (blockInfoList.size() != requestedRange) {
throw new IllegalStateException("Non matching block count when importing from archive");
}
Triple<BlockData, List<TransactionData>, List<ATStateData>> firstBlock = blockInfoList.get(0);
if (firstBlock == null || firstBlock.getA().getHeight() != startHeight) {
throw new IllegalStateException("Non matching first block when importing from archive");
}
if (blockInfoList.size() > 0) {
Triple<BlockData, List<TransactionData>, List<ATStateData>> lastBlock =
blockInfoList.get(blockInfoList.size() - 1);
if (lastBlock == null || lastBlock.getA().getHeight() != endHeight) {
throw new IllegalStateException("Non matching last block when importing from archive");
}
}
// Everything seems okay, so go ahead with the import
for (Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo : blockInfoList) {
try {
// Save block
repository.getBlockRepository().save(blockInfo.getA());
// Save AT state data hashes
for (ATStateData atStateData : blockInfo.getC()) {
atStateData.setHeight(blockInfo.getA().getHeight());
repository.getATRepository().save(atStateData);
}
} catch (DataException e) {
repository.discardChanges();
throw new IllegalStateException("Unable to import blocks from archive");
}
}
repository.saveChanges();
}
}
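
A minimal usage sketch (height range and call site assumed), honouring the note above that outstanding changes must be committed first:

    try (final Repository repository = RepositoryManager.getRepository()) {
        repository.saveChanges(); // commit anything outstanding, per the Javadoc note
        // Convert archived blocks 1000-1999 back into HSQLDB rows; on failure the
        // method discards its partial changes and throws IllegalStateException
        BlockArchiveUtils.importFromArchive(1000, 1999, repository);
    } catch (DataException e) {
        // Repository session problem
    }
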

View File

@@ -0,0 +1,85 @@
//
// Code originally written by memorynotfound
// https://memorynotfound.com/java-7z-seven-zip-example-compress-decompress-file/
// Modified Sept 2021 by Qortal Core dev team
//
package org.qortal.utils;
import org.apache.commons.compress.archivers.sevenz.SevenZArchiveEntry;
import org.apache.commons.compress.archivers.sevenz.SevenZFile;
import org.apache.commons.compress.archivers.sevenz.SevenZOutputFile;
import org.qortal.gui.SplashFrame;
import java.io.*;
public class SevenZ {
private SevenZ() {
}
public static void compress(String outputPath, File... files) throws IOException {
try (SevenZOutputFile out = new SevenZOutputFile(new File(outputPath))){
for (File file : files){
addToArchiveCompression(out, file, ".");
}
}
}
public static void decompress(String in, File destination) throws IOException {
// try-with-resources ensures the archive and each output stream are closed, even on error
try (SevenZFile sevenZFile = new SevenZFile(new File(in))) {
SevenZArchiveEntry entry;
while ((entry = sevenZFile.getNextEntry()) != null) {
if (entry.isDirectory()) {
continue;
}
File curfile = new File(destination, entry.getName());
File parent = curfile.getParentFile();
if (!parent.exists()) {
parent.mkdirs();
}
long fileSize = entry.getSize();
try (FileOutputStream out = new FileOutputStream(curfile)) {
byte[] b = new byte[1024 * 1024];
int count;
long extracted = 0;
while ((count = sevenZFile.read(b)) > 0) {
out.write(b, 0, count);
extracted += count;
// Report extraction progress on the splash screen
int progress = (int)((double)extracted / (double)fileSize * 100);
SplashFrame.getInstance().updateStatus(String.format("Extracting %s... (%d%%)", curfile.getName(), progress));
}
}
}
}
}
private static void addToArchiveCompression(SevenZOutputFile out, File file, String dir) throws IOException {
String name = dir + File.separator + file.getName();
if (file.isFile()){
SevenZArchiveEntry entry = out.createArchiveEntry(file, name);
out.putArchiveEntry(entry);
try (FileInputStream in = new FileInputStream(file)) {
byte[] b = new byte[8192];
int count;
while ((count = in.read(b)) > 0) {
out.write(b, 0, count);
}
}
out.closeArchiveEntry();
} else if (file.isDirectory()) {
File[] children = file.listFiles();
if (children != null){
for (File child : children){
addToArchiveCompression(out, child, name);
}
}
} else {
System.out.println(file.getName() + " is not supported");
}
}
}
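
A minimal usage sketch of the two entry points above; the archive and directory names here are hypothetical:

// Compress a directory tree into a single .7z file
SevenZ.compress("bootstrap.7z", new File("db"));
// Later, extract the archive into a destination directory
SevenZ.decompress("bootstrap.7z", new File("restored-db"));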

View File

@@ -21,6 +21,8 @@ CREATING_BACKUP_OF_DB_FILES = Creating backup of database files...
DB_BACKUP = Database Backup
DB_MAINTENANCE = Database Maintenance
DB_CHECKPOINT = Database Checkpoint
EXIT = Exit
@@ -33,8 +35,10 @@ OPEN_UI = Open UI
PERFORMING_DB_CHECKPOINT = Saving uncommitted database changes...
PERFORMING_DB_MAINTENANCE = Performing scheduled maintenance...
SYNCHRONIZE_CLOCK = Synchronize clock
SYNCHRONIZING_BLOCKCHAIN = Synchronizing
SYNCHRONIZING_CLOCK = Synchronizing clock

View File

@@ -21,6 +21,8 @@ CREATING_BACKUP_OF_DB_FILES = Luodaan varmuuskopio tietokannan tiedostoista...
DB_BACKUP = Tietokannan varmuuskopio
DB_MAINTENANCE = Database Maintenance
DB_CHECKPOINT = Tietokannan varmistuspiste
EXIT = Pois
@@ -33,8 +35,10 @@ OPEN_UI = Avaa UI
PERFORMING_DB_CHECKPOINT = Tallentaa kommittoidut tietokantamuutokset...
PERFORMING_DB_MAINTENANCE = Performing scheduled maintenance...
SYNCHRONIZE_CLOCK = Synkronisoi kello
SYNCHRONIZING_BLOCKCHAIN = Synkronisoi
SYNCHRONIZING_CLOCK = Synkronisoi kelloa

View File

@@ -23,6 +23,8 @@ CREATING_BACKUP_OF_DB_FILES = Adatbázis fájlok biztonsági mentésének létre
DB_BACKUP = Adatbázis biztonsági mentése
DB_MAINTENANCE = Database Maintenance
DB_CHECKPOINT = Adatbázis-ellenőrzőpont
EXIT = Kilépés
@@ -35,8 +37,10 @@ OPEN_UI = Felhasználói eszköz megnyitása
PERFORMING_DB_CHECKPOINT = Mentetlen adatbázis-módosítások mentése...
PERFORMING_DB_MAINTENANCE = Performing scheduled maintenance...
SYNCHRONIZE_CLOCK = Óra-szinkronizálás megkezdése
SYNCHRONIZING_BLOCKCHAIN = Szinkronizálás
SYNCHRONIZING_CLOCK = Óra-szinkronizálás folyamatban

View File

@@ -22,6 +22,8 @@ CREATING_BACKUP_OF_DB_FILES = Creazione di backup dei file di database...
DB_BACKUP = Backup del database
DB_MAINTENANCE = Database Maintenance
DB_CHECKPOINT = Punto di controllo del database
EXIT = Uscita
@@ -34,8 +36,10 @@ OPEN_UI = Apri UI
PERFORMING_DB_CHECKPOINT = Salvataggio delle modifiche al database non salvate...
PERFORMING_DB_MAINTENANCE = Performing scheduled maintenance...
SYNCHRONIZE_CLOCK = Sincronizza orologio
SYNCHRONIZING_BLOCKCHAIN = Sincronizzando
SYNCHRONIZING_CLOCK = Sincronizzando orologio

View File

@@ -33,8 +33,10 @@ OPEN_UI = Open UI
PERFORMING_DB_CHECKPOINT = Nieuwe veranderingen aan database worden opgeslagen...
PERFORMING_DB_MAINTENANCE = Performing scheduled maintenance...
SYNCHRONIZE_CLOCK = Synchronizeer klok
SYNCHRONIZING_BLOCKCHAIN = Aan het synchronizeren
SYNCHRONIZING_CLOCK = Klok wordt gesynchronizeerd

View File

@@ -21,6 +21,8 @@ CREATING_BACKUP_OF_DB_FILES = Создание резервной копии ф
DB_BACKUP = Резервное копирование базы данных
DB_MAINTENANCE = Database Maintenance
EXIT = Выход
MINTING_DISABLED = Чеканка отключена
@@ -31,8 +33,10 @@ OPEN_UI = Открыть пользовательский интерфейс
PERFORMING_DB_CHECKPOINT = Saving uncommitted database changes...
PERFORMING_DB_MAINTENANCE = Performing scheduled maintenance...
SYNCHRONIZE_CLOCK = Синхронизировать время
SYNCHRONIZING_BLOCKCHAIN = Синхронизация цепи
SYNCHRONIZING_CLOCK = Проверка времени

View File

@@ -33,8 +33,10 @@ OPEN_UI = 开启Qortal界面
PERFORMING_DB_CHECKPOINT = Saving uncommitted database changes...
PERFORMING_DB_MAINTENANCE = Performing scheduled maintenance...
SYNCHRONIZE_CLOCK = 同步时钟
SYNCHRONIZING_BLOCKCHAIN = 正在同步区块链
SYNCHRONIZING_CLOCK = 正在同步时钟

View File

@@ -0,0 +1,705 @@
package org.qortal.test;
import org.apache.commons.io.FileUtils;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.controller.BlockMinter;
import org.qortal.data.at.ATStateData;
import org.qortal.data.block.BlockData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.*;
import org.qortal.repository.hsqldb.HSQLDBDatabaseArchiving;
import org.qortal.repository.hsqldb.HSQLDBDatabasePruning;
import org.qortal.repository.hsqldb.HSQLDBRepository;
import org.qortal.settings.Settings;
import org.qortal.test.common.AtUtils;
import org.qortal.test.common.BlockUtils;
import org.qortal.test.common.Common;
import org.qortal.transaction.DeployAtTransaction;
import org.qortal.transaction.Transaction;
import org.qortal.transform.TransformationException;
import org.qortal.utils.BlockArchiveUtils;
import org.qortal.utils.NTP;
import org.qortal.utils.Triple;
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.sql.SQLException;
import java.util.List;
import static org.junit.Assert.*;
public class BlockArchiveTests extends Common {
@Before
public void beforeTest() throws DataException {
Common.useSettings("test-settings-v2-block-archive.json");
NTP.setFixedOffset(Settings.getInstance().getTestNtpOffset());
this.deleteArchiveDirectory();
}
@After
public void afterTest() throws DataException {
this.deleteArchiveDirectory();
}
@Test
public void testWriter() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
}
// 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
assertEquals(900, maximumArchiveHeight);
// Write blocks 2-900 to the archive
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
assertEquals(900 - 1, writer.getWrittenCount());
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(writer.getWrittenCount());
repository.saveChanges();
assertEquals(900 - 1, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
}
}
@Test
public void testWriterAndReader() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
}
// 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
assertEquals(900, maximumArchiveHeight);
// Write blocks 2-900 to the archive
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
assertEquals(900 - 1, writer.getWrittenCount());
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(writer.getWrittenCount());
repository.saveChanges();
assertEquals(900 - 1, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
// Read block 2 from the archive
BlockArchiveReader reader = BlockArchiveReader.getInstance();
Triple<BlockData, List<TransactionData>, List<ATStateData>> block2Info = reader.fetchBlockAtHeight(2);
BlockData block2ArchiveData = block2Info.getA();
// Read block 2 from the repository
BlockData block2RepositoryData = repository.getBlockRepository().fromHeight(2);
// Ensure the values match
assertEquals(block2ArchiveData.getHeight(), block2RepositoryData.getHeight());
assertArrayEquals(block2ArchiveData.getSignature(), block2RepositoryData.getSignature());
// Test some values in the archive
assertEquals(1, block2ArchiveData.getOnlineAccountsCount());
// Read block 900 from the archive
Triple<BlockData, List<TransactionData>, List<ATStateData>> block900Info = reader.fetchBlockAtHeight(900);
BlockData block900ArchiveData = block900Info.getA();
// Read block 900 from the repository
BlockData block900RepositoryData = repository.getBlockRepository().fromHeight(900);
// Ensure the values match
assertEquals(block900ArchiveData.getHeight(), block900RepositoryData.getHeight());
assertArrayEquals(block900ArchiveData.getSignature(), block900RepositoryData.getSignature());
// Test some values in the archive
assertEquals(1, block900ArchiveData.getOnlineAccountsCount());
}
}
@Test
public void testArchivedAtStates() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Deploy an AT so that we have AT state data
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
}
// 9 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(10);
repository.getATRepository().setAtTrimHeight(10);
// Check the max archive height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
assertEquals(9, maximumArchiveHeight);
// Write blocks 2-9 to the archive
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
assertEquals(9 - 1, writer.getWrittenCount());
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(writer.getWrittenCount());
repository.saveChanges();
assertEquals(9 - 1, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
// Check blocks 2-9
for (Integer testHeight = 2; testHeight <= 9; testHeight++) {
// Read a block from the archive
BlockArchiveReader reader = BlockArchiveReader.getInstance();
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = reader.fetchBlockAtHeight(testHeight);
BlockData archivedBlockData = blockInfo.getA();
ATStateData archivedAtStateData = blockInfo.getC().isEmpty() ? null : blockInfo.getC().get(0);
List<TransactionData> archivedTransactions = blockInfo.getB();
// Read the same block from the repository
BlockData repositoryBlockData = repository.getBlockRepository().fromHeight(testHeight);
ATStateData repositoryAtStateData = repository.getATRepository().getATStateAtHeight(atAddress, testHeight);
// Ensure the repository has full AT state data
assertNotNull(repositoryAtStateData.getStateHash());
assertNotNull(repositoryAtStateData.getStateData());
// Check the archived AT state
if (testHeight == 2) {
// Block 2 won't have an AT state hash because it's initial (and has the DEPLOY_AT in the same block)
assertNull(archivedAtStateData);
assertEquals(1, archivedTransactions.size());
assertEquals(Transaction.TransactionType.DEPLOY_AT, archivedTransactions.get(0).getType());
}
else {
// For blocks 3+, ensure the archive has the AT state hashes, but not the state data
assertNotNull(archivedAtStateData.getStateHash());
assertNull(archivedAtStateData.getStateData());
// They also shouldn't have any transactions
assertTrue(archivedTransactions.isEmpty());
}
// Also check the online accounts count and height
assertEquals(1, archivedBlockData.getOnlineAccountsCount());
assertEquals(testHeight, archivedBlockData.getHeight());
// Ensure the values match
assertEquals(archivedBlockData.getHeight(), repositoryBlockData.getHeight());
assertArrayEquals(archivedBlockData.getSignature(), repositoryBlockData.getSignature());
assertEquals(archivedBlockData.getOnlineAccountsCount(), repositoryBlockData.getOnlineAccountsCount());
assertArrayEquals(archivedBlockData.getMinterSignature(), repositoryBlockData.getMinterSignature());
assertEquals(archivedBlockData.getATCount(), repositoryBlockData.getATCount());
assertArrayEquals(archivedBlockData.getReference(), repositoryBlockData.getReference());
assertEquals(archivedBlockData.getTimestamp(), repositoryBlockData.getTimestamp());
assertEquals(archivedBlockData.getATFees(), repositoryBlockData.getATFees());
assertEquals(archivedBlockData.getTotalFees(), repositoryBlockData.getTotalFees());
assertEquals(archivedBlockData.getTransactionCount(), repositoryBlockData.getTransactionCount());
assertArrayEquals(archivedBlockData.getTransactionsSignature(), repositoryBlockData.getTransactionsSignature());
if (testHeight != 2) {
assertArrayEquals(archivedAtStateData.getStateHash(), repositoryAtStateData.getStateHash());
}
}
// Check block 10 (unarchived)
BlockArchiveReader reader = BlockArchiveReader.getInstance();
Triple<BlockData, List<TransactionData>, List<ATStateData>> blockInfo = reader.fetchBlockAtHeight(10);
assertNull(blockInfo);
}
}
@Test
public void testArchiveAndPrune() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Deploy an AT so that we have AT state data
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
}
// Assume 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
assertEquals(900, maximumArchiveHeight);
// Write blocks 2-900 to the archive
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
assertEquals(900 - 1, writer.getWrittenCount());
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(901);
repository.saveChanges();
assertEquals(901, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
// Ensure the SQL repository contains blocks 2 and 900...
assertNotNull(repository.getBlockRepository().fromHeight(2));
assertNotNull(repository.getBlockRepository().fromHeight(900));
// Prune all the archived blocks
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(0, 900);
assertEquals(900-1, numBlocksPruned);
repository.getBlockRepository().setBlockPruneHeight(901);
// Prune the AT states for the archived blocks
repository.getATRepository().rebuildLatestAtStates();
int numATStatesPruned = repository.getATRepository().pruneAtStates(0, 900);
assertEquals(900-1, numATStatesPruned);
repository.getATRepository().setAtPruneHeight(901);
// Now ensure the SQL repository is missing blocks 2 and 900...
assertNull(repository.getBlockRepository().fromHeight(2));
assertNull(repository.getBlockRepository().fromHeight(900));
// ... but it's not missing blocks 1 and 901 (we don't prune the genesis block)
assertNotNull(repository.getBlockRepository().fromHeight(1));
assertNotNull(repository.getBlockRepository().fromHeight(901));
// Validate the latest block height in the repository
assertEquals(1002, (int) repository.getBlockRepository().getLastBlock().getHeight());
}
}
@Test
public void testBulkArchiveAndPrune() throws DataException, SQLException {
try (final Repository repository = RepositoryManager.getRepository()) {
HSQLDBRepository hsqldb = (HSQLDBRepository) repository;
// Deploy an AT so that we have AT state data
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
}
// Assume 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
assertEquals(900, maximumArchiveHeight);
// Check the current archive height
assertEquals(0, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Write blocks 2-900 to the archive (using bulk method)
int fileSizeTarget = 425000; // Pre-calculated size of 900 blocks
assertTrue(HSQLDBDatabaseArchiving.buildBlockArchive(repository, fileSizeTarget));
// Ensure the block archive height has increased
assertEquals(901, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Ensure the SQL repository contains blocks 2 and 900...
assertNotNull(repository.getBlockRepository().fromHeight(2));
assertNotNull(repository.getBlockRepository().fromHeight(900));
// Check the current prune heights
assertEquals(0, repository.getBlockRepository().getBlockPruneHeight());
assertEquals(0, repository.getATRepository().getAtPruneHeight());
// Prior to archiving or pruning, ensure blocks 2 to 1002 and their AT states are available in the db
for (int i=2; i<=1002; i++) {
assertNotNull(repository.getBlockRepository().fromHeight(i));
List<ATStateData> atStates = repository.getATRepository().getBlockATStatesAtHeight(i);
assertNotNull(atStates);
assertEquals(1, atStates.size());
}
// Prune all the archived blocks and AT states (using bulk method)
assertTrue(HSQLDBDatabasePruning.pruneBlocks(hsqldb));
assertTrue(HSQLDBDatabasePruning.pruneATStates(hsqldb));
// Ensure the current prune heights have increased
assertEquals(901, repository.getBlockRepository().getBlockPruneHeight());
assertEquals(901, repository.getATRepository().getAtPruneHeight());
// Now ensure the SQL repository is missing blocks 2 and 900...
assertNull(repository.getBlockRepository().fromHeight(2));
assertNull(repository.getBlockRepository().fromHeight(900));
// ... but it's not missing blocks 1 and 901 (we don't prune the genesis block)
assertNotNull(repository.getBlockRepository().fromHeight(1));
assertNotNull(repository.getBlockRepository().fromHeight(901));
// Validate the latest block height in the repository
assertEquals(1002, (int) repository.getBlockRepository().getLastBlock().getHeight());
// Ensure blocks 2-900 are all available in the archive
for (int i=2; i<=900; i++) {
assertNotNull(repository.getBlockArchiveRepository().fromHeight(i));
}
// Ensure blocks 2-900 are NOT available in the db
for (int i=2; i<=900; i++) {
assertNull(repository.getBlockRepository().fromHeight(i));
}
// Ensure blocks 901 to 1002 and their AT states are available in the db
for (int i=901; i<=1002; i++) {
assertNotNull(repository.getBlockRepository().fromHeight(i));
List<ATStateData> atStates = repository.getATRepository().getBlockATStatesAtHeight(i);
assertNotNull(atStates);
assertEquals(1, atStates.size());
}
// Ensure blocks 901 to 1002 are not available in the archive
for (int i=901; i<=1002; i++) {
assertNull(repository.getBlockArchiveRepository().fromHeight(i));
}
}
}
@Test
public void testBulkArchiveAndPruneMultipleFiles() throws DataException, SQLException {
try (final Repository repository = RepositoryManager.getRepository()) {
HSQLDBRepository hsqldb = (HSQLDBRepository) repository;
// Deploy an AT so that we have AT state data
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
}
// Assume 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
assertEquals(900, maximumArchiveHeight);
// Check the current archive height
assertEquals(0, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Write blocks 2-900 to the archive (using bulk method)
int fileSizeTarget = 42000; // Pre-calculated size of approx 90 blocks
assertTrue(HSQLDBDatabaseArchiving.buildBlockArchive(repository, fileSizeTarget));
// Ensure 10 archive files have been created
Path archivePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive");
assertEquals(10, new File(archivePath.toString()).list().length);
// Check the files exist
assertTrue(Files.exists(Paths.get(archivePath.toString(), "2-90.dat")));
assertTrue(Files.exists(Paths.get(archivePath.toString(), "91-179.dat")));
assertTrue(Files.exists(Paths.get(archivePath.toString(), "180-268.dat")));
assertTrue(Files.exists(Paths.get(archivePath.toString(), "269-357.dat")));
assertTrue(Files.exists(Paths.get(archivePath.toString(), "358-446.dat")));
assertTrue(Files.exists(Paths.get(archivePath.toString(), "447-535.dat")));
assertTrue(Files.exists(Paths.get(archivePath.toString(), "536-624.dat")));
assertTrue(Files.exists(Paths.get(archivePath.toString(), "625-713.dat")));
assertTrue(Files.exists(Paths.get(archivePath.toString(), "714-802.dat")));
assertTrue(Files.exists(Paths.get(archivePath.toString(), "803-891.dat")));
// Ensure the block archive height has increased
// It won't be as high as 901, because blocks 892-900 were too small to reach the file size
// target of the 11th file
assertEquals(892, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Ensure the SQL repository contains blocks 2 and 891...
assertNotNull(repository.getBlockRepository().fromHeight(2));
assertNotNull(repository.getBlockRepository().fromHeight(891));
// Check the current prune heights
assertEquals(0, repository.getBlockRepository().getBlockPruneHeight());
assertEquals(0, repository.getATRepository().getAtPruneHeight());
// Prior to archiving or pruning, ensure blocks 2 to 1002 and their AT states are available in the db
for (int i=2; i<=1002; i++) {
assertNotNull(repository.getBlockRepository().fromHeight(i));
List<ATStateData> atStates = repository.getATRepository().getBlockATStatesAtHeight(i);
assertNotNull(atStates);
assertEquals(1, atStates.size());
}
// Prune all the archived blocks and AT states (using bulk method)
assertTrue(HSQLDBDatabasePruning.pruneBlocks(hsqldb));
assertTrue(HSQLDBDatabasePruning.pruneATStates(hsqldb));
// Ensure the current prune heights have increased
assertEquals(892, repository.getBlockRepository().getBlockPruneHeight());
assertEquals(892, repository.getATRepository().getAtPruneHeight());
// Now ensure the SQL repository is missing blocks 2 and 891...
assertNull(repository.getBlockRepository().fromHeight(2));
assertNull(repository.getBlockRepository().fromHeight(891));
// ... but it's not missing blocks 1 and 892 (we don't prune the genesis block)
assertNotNull(repository.getBlockRepository().fromHeight(1));
assertNotNull(repository.getBlockRepository().fromHeight(892));
// Validate the latest block height in the repository
assertEquals(1002, (int) repository.getBlockRepository().getLastBlock().getHeight());
// Ensure blocks 2-891 are all available in the archive
for (int i=2; i<=891; i++) {
assertNotNull(repository.getBlockArchiveRepository().fromHeight(i));
}
// Ensure blocks 2-891 are NOT available in the db
for (int i=2; i<=891; i++) {
assertNull(repository.getBlockRepository().fromHeight(i));
}
// Ensure blocks 892 to 1002 and their AT states are available in the db
for (int i=892; i<=1002; i++) {
assertNotNull(repository.getBlockRepository().fromHeight(i));
List<ATStateData> atStates = repository.getATRepository().getBlockATStatesAtHeight(i);
assertNotNull(atStates);
assertEquals(1, atStates.size());
}
// Ensure blocks 892 to 1002 are not available in the archive
for (int i=892; i<=1002; i++) {
assertNull(repository.getBlockArchiveRepository().fromHeight(i));
}
}
}
@Test
public void testTrimArchivePruneAndOrphan() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Deploy an AT so that we have AT state data
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
}
// Make sure that block 500 has full AT state data and data hash
List<ATStateData> block500AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(500);
ATStateData atStatesData = repository.getATRepository().getATStateAtHeight(block500AtStatesData.get(0).getATAddress(), 500);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
// Trim the first 500 blocks
repository.getBlockRepository().trimOldOnlineAccountsSignatures(0, 500);
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(501);
repository.getATRepository().trimAtStates(0, 500, 1000);
repository.getATRepository().setAtTrimHeight(501);
// Now block 500 should only have the AT state data hash
block500AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(500);
atStatesData = repository.getATRepository().getATStateAtHeight(block500AtStatesData.get(0).getATAddress(), 500);
assertNotNull(atStatesData.getStateHash());
assertNull(atStatesData.getStateData());
// ... but block 501 should have the full data
List<ATStateData> block501AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(501);
atStatesData = repository.getATRepository().getATStateAtHeight(block501AtStatesData.get(0).getATAddress(), 501);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
assertEquals(500, maximumArchiveHeight);
BlockData block3DataPreArchive = repository.getBlockRepository().fromHeight(3);
// Write blocks 2-500 to the archive
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
assertEquals(500 - 1, writer.getWrittenCount()); // -1 for the genesis block
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(writer.getWrittenCount());
repository.saveChanges();
assertEquals(500 - 1, repository.getBlockArchiveRepository().getBlockArchiveHeight());
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
// Ensure the SQL repository contains blocks 2 and 500...
assertNotNull(repository.getBlockRepository().fromHeight(2));
assertNotNull(repository.getBlockRepository().fromHeight(500));
// Prune all the archived blocks
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(0, 500);
assertEquals(500-1, numBlocksPruned);
repository.getBlockRepository().setBlockPruneHeight(501);
// Prune the AT states for the archived blocks
repository.getATRepository().rebuildLatestAtStates();
int numATStatesPruned = repository.getATRepository().pruneAtStates(2, 500);
assertEquals(499, numATStatesPruned);
repository.getATRepository().setAtPruneHeight(501);
// Now ensure the SQL repository is missing blocks 2 and 500...
assertNull(repository.getBlockRepository().fromHeight(2));
assertNull(repository.getBlockRepository().fromHeight(500));
// ... but it's not missing blocks 1 and 501 (we don't prune the genesis block)
assertNotNull(repository.getBlockRepository().fromHeight(1));
assertNotNull(repository.getBlockRepository().fromHeight(501));
// Validate the latest block height in the repository
assertEquals(1002, (int) repository.getBlockRepository().getLastBlock().getHeight());
// Now orphan some unarchived blocks.
BlockUtils.orphanBlocks(repository, 500);
assertEquals(502, (int) repository.getBlockRepository().getLastBlock().getHeight());
// We're close to the lower limit of the SQL database now, so
// we need to import some blocks from the archive
BlockArchiveUtils.importFromArchive(401, 500, repository);
// Ensure the SQL repository now contains block 401 but not 400...
assertNotNull(repository.getBlockRepository().fromHeight(401));
assertNull(repository.getBlockRepository().fromHeight(400));
// Import the remaining 399 blocks
BlockArchiveUtils.importFromArchive(2, 400, repository);
// Verify that block 3 matches the original
BlockData block3DataPostArchive = repository.getBlockRepository().fromHeight(3);
assertArrayEquals(block3DataPreArchive.getSignature(), block3DataPostArchive.getSignature());
assertEquals(block3DataPreArchive.getHeight(), block3DataPostArchive.getHeight());
// Orphan 1 more block, which should be the last one that can be orphaned
BlockUtils.orphanBlocks(repository, 1);
// Orphan another block, which should fail
Exception exception = null;
try {
BlockUtils.orphanBlocks(repository, 1);
} catch (DataException e) {
exception = e;
}
// Ensure that a DataException is thrown because no more AT state data is available
assertNotNull(exception);
assertEquals(DataException.class, exception.getClass());
// FUTURE: we may be able to retain unique AT states when trimming, to avoid this exception
// and allow orphaning back through blocks with trimmed AT states.
}
}
/**
* Many nodes are missing an ATStatesHeightIndex due to an earlier bug.
* In these cases we disable archiving and pruning, as this index is an
* essential component of these processes.
*/
@Test
public void testMissingAtStatesHeightIndex() throws DataException, SQLException {
try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
// Firstly check that we're able to prune or archive when the index exists
assertTrue(repository.getATRepository().hasAtStatesHeightIndex());
assertTrue(RepositoryManager.canArchiveOrPrune());
// Delete the index
repository.prepareStatement("DROP INDEX ATSTATESHEIGHTINDEX").execute();
// Ensure that we're unable to prune or archive when the index doesn't exist
assertFalse(repository.getATRepository().hasAtStatesHeightIndex());
assertFalse(RepositoryManager.canArchiveOrPrune());
}
}
private void deleteArchiveDirectory() {
// Delete the archive directory if it exists
Path archivePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive").toAbsolutePath();
try {
FileUtils.deleteDirectory(archivePath.toFile());
} catch (IOException e) {
// Not a problem - the directory may not exist
}
}
}
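
The tests above all follow the same archive-then-prune ordering. Condensed into one sequence, and using only the repository calls exercised in these tests, the pipeline looks roughly like this (a sketch, not production code):

try (final Repository repository = RepositoryManager.getRepository()) {
// Blocks can only be archived up to one below the first untrimmed height
final int maxArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
BlockArchiveWriter writer = new BlockArchiveWriter(0, maxArchiveHeight, repository);
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
if (result == BlockArchiveWriter.BlockArchiveWriteResult.OK) {
repository.getBlockArchiveRepository().setBlockArchiveHeight(maxArchiveHeight + 1);
repository.saveChanges();
// Prune only what has been safely written to the archive
repository.getBlockRepository().pruneBlocks(0, maxArchiveHeight);
repository.getBlockRepository().setBlockPruneHeight(maxArchiveHeight + 1);
repository.getATRepository().rebuildLatestAtStates();
repository.getATRepository().pruneAtStates(0, maxArchiveHeight);
repository.getATRepository().setAtPruneHeight(maxArchiveHeight + 1);
}
}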

View File

@@ -0,0 +1,245 @@
package org.qortal.test;
import org.apache.commons.io.FileUtils;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.controller.BlockMinter;
import org.qortal.controller.Controller;
import org.qortal.data.block.BlockData;
import org.qortal.repository.*;
import org.qortal.settings.Settings;
import org.qortal.test.common.AtUtils;
import org.qortal.test.common.Common;
import org.qortal.transform.TransformationException;
import org.qortal.utils.NTP;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import static org.junit.Assert.*;
public class BootstrapTests extends Common {
@Before
public void beforeTest() throws DataException, IOException {
Common.useSettingsAndDb(Common.testSettingsFilename, false);
NTP.setFixedOffset(Settings.getInstance().getTestNtpOffset());
this.deleteBootstraps();
}
@After
public void afterTest() throws DataException, IOException {
this.deleteBootstraps();
this.deleteExportDirectory();
}
@Test
public void testCheckRepositoryState() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
this.buildDummyBlockchain(repository);
Bootstrap bootstrap = new Bootstrap(repository);
assertTrue(bootstrap.checkRepositoryState());
}
}
@Test
public void testValidateBlockchain() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
this.buildDummyBlockchain(repository);
Bootstrap bootstrap = new Bootstrap(repository);
assertTrue(bootstrap.validateBlockchain());
}
}
@Test
public void testCreateAndImportBootstrap() throws DataException, InterruptedException, TransformationException, IOException {
Path archivePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive", "2-900.dat");
BlockData block1000;
byte[] originalArchiveContents;
try (final Repository repository = RepositoryManager.getRepository()) {
this.buildDummyBlockchain(repository);
Bootstrap bootstrap = new Bootstrap(repository);
Path bootstrapPath = bootstrap.getBootstrapOutputPath();
// Ensure the compressed bootstrap doesn't exist
assertFalse(Files.exists(bootstrapPath));
// Create bootstrap
bootstrap.create();
// Ensure the compressed bootstrap exists
assertTrue(Files.exists(bootstrapPath));
// Ensure the original block archive file exists
assertTrue(Files.exists(archivePath));
originalArchiveContents = Files.readAllBytes(archivePath);
// Ensure block 1000 exists in the repository
block1000 = repository.getBlockRepository().fromHeight(1000);
assertNotNull(block1000);
// Ensure we can retrieve block 10 from the archive
assertNotNull(repository.getBlockArchiveRepository().fromHeight(10));
// Now delete block 1000
repository.getBlockRepository().delete(block1000);
assertNull(repository.getBlockRepository().fromHeight(1000));
// Overwrite the archive with dummy data, and verify it
try (PrintWriter out = new PrintWriter(archivePath.toFile())) {
out.println("testdata");
}
String newline = System.getProperty("line.separator");
assertEquals("testdata", Files.readString(archivePath).replace(newline, ""));
// Ensure we can no longer retrieve block 10 from the archive
assertNull(repository.getBlockArchiveRepository().fromHeight(10));
// Import the bootstrap back in
bootstrap.importFromPath(bootstrapPath);
}
// We need a new connection because we have switched to a new repository
try (final Repository repository = RepositoryManager.getRepository()) {
// Ensure the block archive file exists
assertTrue(Files.exists(archivePath));
// and that its contents match the original
assertArrayEquals(originalArchiveContents, Files.readAllBytes(archivePath));
// Make sure that block 1000 exists again
BlockData newBlock1000 = repository.getBlockRepository().fromHeight(1000);
assertNotNull(newBlock1000);
// and ensure that the signatures match
assertArrayEquals(block1000.getSignature(), newBlock1000.getSignature());
// Ensure we can retrieve block 10 from the archive
assertNotNull(repository.getBlockArchiveRepository().fromHeight(10));
}
}
private void buildDummyBlockchain(Repository repository) throws DataException, InterruptedException, TransformationException, IOException {
// Alice self share online
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Deploy an AT so that we have AT state data
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint some blocks so that we are able to archive them later
for (int i = 0; i < 1000; i++)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Assume 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
// Write blocks 2-900 to the archive
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(901);
// Prune all the archived blocks
repository.getBlockRepository().pruneBlocks(0, 900);
repository.getBlockRepository().setBlockPruneHeight(901);
// Prune the AT states for the archived blocks
repository.getATRepository().rebuildLatestAtStates();
repository.getATRepository().pruneAtStates(0, 900);
repository.getATRepository().setAtPruneHeight(901);
// Refill cache, used by Controller.getMinimumLatestBlockTimestamp() and other methods
Controller.getInstance().refillLatestBlocksCache();
repository.saveChanges();
}
@Test
public void testGetRandomHost() {
String[] bootstrapHosts = Settings.getInstance().getBootstrapHosts();
List<String> uniqueHosts = new ArrayList<>();
for (int i=0; i<1000; i++) {
Bootstrap bootstrap = new Bootstrap();
String randomHost = bootstrap.getRandomHost();
assertNotNull(randomHost);
if (!uniqueHosts.contains(randomHost)){
uniqueHosts.add(randomHost);
}
}
// Ensure we have more than one bootstrap host in the settings
assertTrue(Arrays.asList(bootstrapHosts).size() > 1);
// Ensure that all have been given the opportunity to be used
assertEquals(uniqueHosts.size(), Arrays.asList(bootstrapHosts).size());
}
private void deleteBootstraps() throws IOException {
String[] bootstrapNames = { "bootstrap-archive.7z", "bootstrap-toponly.7z", "bootstrap-full.7z" };
for (String bootstrapName : bootstrapNames) {
try {
Path path = Paths.get(String.format("%s%s", Settings.getInstance().getBootstrapFilenamePrefix(), bootstrapName));
Files.delete(path);
} catch (NoSuchFileException e) {
// Nothing to delete
}
}
}
private void deleteExportDirectory() {
// Delete the export directory if it exists
Path archivePath = Paths.get(Settings.getInstance().getExportPath());
try {
FileUtils.deleteDirectory(archivePath.toFile());
} catch (IOException e) {
// Not a problem - the directory may not exist
}
}
}

View File

@@ -10,7 +10,12 @@ import org.qortal.utils.Base58;
import static org.junit.Assert.*;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.security.SecureRandom;
import java.util.Random;
import org.bouncycastle.crypto.agreement.X25519Agreement;
import org.bouncycastle.crypto.params.Ed25519PrivateKeyParameters;
@@ -40,6 +45,37 @@ public class CryptoTests extends Common {
assertArrayEquals(expected, digest);
}
@Test
public void testFileDigest() throws IOException {
byte[] input = HashCode.fromString("00").asBytes();
Path tempPath = Files.createTempFile("", ".tmp");
Files.write(tempPath, input, StandardOpenOption.CREATE);
byte[] digest = Crypto.digest(tempPath.toFile());
byte[] expected = HashCode.fromString("6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d").asBytes();
assertArrayEquals(expected, digest);
Files.delete(tempPath);
}
@Test
public void testFileDigestWithRandomData() throws IOException {
byte[] input = new byte[128];
new Random().nextBytes(input);
Path tempPath = Files.createTempFile("", ".tmp");
Files.write(tempPath, input, StandardOpenOption.CREATE);
byte[] fileDigest = Crypto.digest(tempPath.toFile());
byte[] memoryDigest = Crypto.digest(input);
assertArrayEquals(fileDigest, memoryDigest);
Files.delete(tempPath);
}
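// The equality asserted above holds because a SHA-256 file digest can be
// computed incrementally, reading the file in small chunks so memory usage
// stays constant regardless of file size. A sketch of such a helper for
// illustration only (this is not the Crypto implementation itself):
private static byte[] chunkedFileDigest(java.io.File file) throws IOException {
try (java.io.InputStream in = new java.io.BufferedInputStream(new java.io.FileInputStream(file))) {
java.security.MessageDigest sha256 = java.security.MessageDigest.getInstance("SHA-256");
byte[] buffer = new byte[8192];
int count;
while ((count = in.read(buffer)) > 0) {
sha256.update(buffer, 0, count);
}
return sha256.digest();
} catch (java.security.NoSuchAlgorithmException e) {
throw new IOException("SHA-256 unavailable", e);
}
}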
@Test
public void testPublicKeyToAddress() {
byte[] publicKey = HashCode.fromString("775ada64a48a30b3bfc4f1db16bca512d4088704975a62bde78781ce0cba90d6").asBytes();

View File

@@ -0,0 +1,390 @@
package org.qortal.test;
import org.apache.commons.io.FileUtils;
import org.bitcoinj.core.Address;
import org.bitcoinj.core.AddressFormatException;
import org.bitcoinj.core.ECKey;
import org.json.JSONArray;
import org.json.JSONObject;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PublicKeyAccount;
import org.qortal.controller.tradebot.LitecoinACCTv1TradeBot;
import org.qortal.controller.tradebot.TradeBot;
import org.qortal.crosschain.Litecoin;
import org.qortal.crosschain.LitecoinACCTv1;
import org.qortal.crosschain.SupportedBlockchain;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.MintingAccountData;
import org.qortal.data.crosschain.TradeBotData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.repository.hsqldb.HSQLDBImportExport;
import org.qortal.settings.Settings;
import org.qortal.test.common.Common;
import org.qortal.utils.NTP;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import static org.junit.Assert.*;
public class ImportExportTests extends Common {
@Before
public void beforeTest() throws DataException {
Common.useDefaultSettings();
this.deleteExportDirectory();
}
@After
public void afterTest() throws DataException {
this.deleteExportDirectory();
}
@Test
public void testExportAndImportTradeBotStates() throws DataException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Ensure no trade bots exist
assertTrue(repository.getCrossChainRepository().getAllTradeBotData().isEmpty());
// Create some trade bots
List<TradeBotData> tradeBots = new ArrayList<>();
for (int i=0; i<10; i++) {
TradeBotData tradeBotData = this.createTradeBotData(repository);
repository.getCrossChainRepository().save(tradeBotData);
tradeBots.add(tradeBotData);
}
// Ensure they have been added
assertEquals(10, repository.getCrossChainRepository().getAllTradeBotData().size());
// Export them
HSQLDBImportExport.backupTradeBotStates(repository);
// Delete them from the repository
for (TradeBotData tradeBotData : tradeBots) {
repository.getCrossChainRepository().delete(tradeBotData.getTradePrivateKey());
}
// Ensure they have been deleted
assertTrue(repository.getCrossChainRepository().getAllTradeBotData().isEmpty());
// Import them
Path exportPath = HSQLDBImportExport.getExportDirectory(false);
Path filePath = Paths.get(exportPath.toString(), "TradeBotStates.json");
HSQLDBImportExport.importDataFromFile(filePath.toString(), repository);
// Ensure they have been imported
assertEquals(10, repository.getCrossChainRepository().getAllTradeBotData().size());
// Ensure all the data matches
for (TradeBotData tradeBotData : tradeBots) {
byte[] tradePrivateKey = tradeBotData.getTradePrivateKey();
TradeBotData repositoryTradeBotData = repository.getCrossChainRepository().getTradeBotData(tradePrivateKey);
assertNotNull(repositoryTradeBotData);
assertEquals(tradeBotData.toJson().toString(), repositoryTradeBotData.toJson().toString());
}
repository.saveChanges();
}
}
@Test
public void testExportAndImportCurrentTradeBotStates() throws DataException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Ensure no trade bots exist
assertTrue(repository.getCrossChainRepository().getAllTradeBotData().isEmpty());
// Create some trade bots
List<TradeBotData> tradeBots = new ArrayList<>();
for (int i=0; i<10; i++) {
TradeBotData tradeBotData = this.createTradeBotData(repository);
repository.getCrossChainRepository().save(tradeBotData);
tradeBots.add(tradeBotData);
}
// Ensure they have been added
assertEquals(10, repository.getCrossChainRepository().getAllTradeBotData().size());
// Export them
HSQLDBImportExport.backupTradeBotStates(repository);
// Delete them from the repository
for (TradeBotData tradeBotData : tradeBots) {
repository.getCrossChainRepository().delete(tradeBotData.getTradePrivateKey());
}
// Ensure they have been deleted
assertTrue(repository.getCrossChainRepository().getAllTradeBotData().isEmpty());
// Add some more trade bots
List<TradeBotData> additionalTradeBots = new ArrayList<>();
for (int i=0; i<5; i++) {
TradeBotData tradeBotData = this.createTradeBotData(repository);
repository.getCrossChainRepository().save(tradeBotData);
additionalTradeBots.add(tradeBotData);
}
// Export again
HSQLDBImportExport.backupTradeBotStates(repository);
// Import current states only
Path exportPath = HSQLDBImportExport.getExportDirectory(false);
Path filePath = Paths.get(exportPath.toString(), "TradeBotStates.json");
HSQLDBImportExport.importDataFromFile(filePath.toString(), repository);
// Ensure they have been imported
assertEquals(5, repository.getCrossChainRepository().getAllTradeBotData().size());
// Ensure that only the additional trade bots have been imported and that the data matches
for (TradeBotData tradeBotData : additionalTradeBots) {
byte[] tradePrivateKey = tradeBotData.getTradePrivateKey();
TradeBotData repositoryTradeBotData = repository.getCrossChainRepository().getTradeBotData(tradePrivateKey);
assertNotNull(repositoryTradeBotData);
assertEquals(tradeBotData.toJson().toString(), repositoryTradeBotData.toJson().toString());
}
// None of the original trade bots should exist in the repository
for (TradeBotData tradeBotData : tradeBots) {
byte[] tradePrivateKey = tradeBotData.getTradePrivateKey();
TradeBotData repositoryTradeBotData = repository.getCrossChainRepository().getTradeBotData(tradePrivateKey);
assertNull(repositoryTradeBotData);
}
repository.saveChanges();
}
}
@Test
public void testExportAndImportAllTradeBotStates() throws DataException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Ensure no trade bots exist
assertTrue(repository.getCrossChainRepository().getAllTradeBotData().isEmpty());
// Create some trade bots
List<TradeBotData> tradeBots = new ArrayList<>();
for (int i=0; i<10; i++) {
TradeBotData tradeBotData = this.createTradeBotData(repository);
repository.getCrossChainRepository().save(tradeBotData);
tradeBots.add(tradeBotData);
}
// Ensure they have been added
assertEquals(10, repository.getCrossChainRepository().getAllTradeBotData().size());
// Export them
HSQLDBImportExport.backupTradeBotStates(repository);
// Delete them from the repository
for (TradeBotData tradeBotData : tradeBots) {
repository.getCrossChainRepository().delete(tradeBotData.getTradePrivateKey());
}
// Ensure they have been deleted
assertTrue(repository.getCrossChainRepository().getAllTradeBotData().isEmpty());
// Add some more trade bots
List<TradeBotData> additionalTradeBots = new ArrayList<>();
for (int i=0; i<5; i++) {
TradeBotData tradeBotData = this.createTradeBotData(repository);
repository.getCrossChainRepository().save(tradeBotData);
additionalTradeBots.add(tradeBotData);
}
// Export again
HSQLDBImportExport.backupTradeBotStates(repository);
// Import all states from the archive
Path exportPath = HSQLDBImportExport.getExportDirectory(false);
Path filePath = Paths.get(exportPath.toString(), "TradeBotStatesArchive.json");
HSQLDBImportExport.importDataFromFile(filePath.toString(), repository);
// Ensure they have been imported
assertEquals(15, repository.getCrossChainRepository().getAllTradeBotData().size());
// Ensure that all known trade bots have been imported and that the data matches
tradeBots.addAll(additionalTradeBots);
for (TradeBotData tradeBotData : tradeBots) {
byte[] tradePrivateKey = tradeBotData.getTradePrivateKey();
TradeBotData repositoryTradeBotData = repository.getCrossChainRepository().getTradeBotData(tradePrivateKey);
assertNotNull(repositoryTradeBotData);
assertEquals(tradeBotData.toJson().toString(), repositoryTradeBotData.toJson().toString());
}
repository.saveChanges();
}
}
@Test
public void testExportAndImportLegacyTradeBotStates() throws DataException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Create some trade bots, but don't save them in the repository
List<TradeBotData> tradeBots = new ArrayList<>();
for (int i=0; i<10; i++) {
TradeBotData tradeBotData = this.createTradeBotData(repository);
tradeBots.add(tradeBotData);
}
// Create a legacy format TradeBotStates.json backup file
this.exportLegacyTradeBotStatesJson(tradeBots);
// Ensure no trade bots exist in repository
assertTrue(repository.getCrossChainRepository().getAllTradeBotData().isEmpty());
// Import the legacy format file
Path exportPath = HSQLDBImportExport.getExportDirectory(false);
Path filePath = Paths.get(exportPath.toString(), "TradeBotStates.json");
HSQLDBImportExport.importDataFromFile(filePath.toString(), repository);
// Ensure they have been imported
assertEquals(10, repository.getCrossChainRepository().getAllTradeBotData().size());
for (TradeBotData tradeBotData : tradeBots) {
byte[] tradePrivateKey = tradeBotData.getTradePrivateKey();
TradeBotData repositoryTradeBotData = repository.getCrossChainRepository().getTradeBotData(tradePrivateKey);
assertNotNull(repositoryTradeBotData);
assertEquals(tradeBotData.toJson().toString(), repositoryTradeBotData.toJson().toString());
}
repository.saveChanges();
}
}
@Test
public void testExportAndImportMintingAccountData() throws DataException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Ensure no minting accounts exist
assertTrue(repository.getAccountRepository().getMintingAccounts().isEmpty());
// Create some minting accounts
List<MintingAccountData> mintingAccounts = new ArrayList<>();
for (int i=0; i<10; i++) {
MintingAccountData mintingAccountData = this.createMintingAccountData();
repository.getAccountRepository().save(mintingAccountData);
mintingAccounts.add(mintingAccountData);
}
// Ensure they have been added
assertEquals(10, repository.getAccountRepository().getMintingAccounts().size());
// Export them
HSQLDBImportExport.backupMintingAccounts(repository);
// Delete them from the repository
for (MintingAccountData mintingAccountData : mintingAccounts) {
repository.getAccountRepository().delete(mintingAccountData.getPrivateKey());
}
// Ensure they have been deleted
assertTrue(repository.getAccountRepository().getMintingAccounts().isEmpty());
// Import them
Path exportPath = HSQLDBImportExport.getExportDirectory(false);
Path filePath = Paths.get(exportPath.toString(), "MintingAccounts.json");
HSQLDBImportExport.importDataFromFile(filePath.toString(), repository);
// Ensure they have been imported
assertEquals(10, repository.getAccountRepository().getMintingAccounts().size());
// Ensure all the data matches
for (MintingAccountData mintingAccountData : mintingAccounts) {
byte[] privateKey = mintingAccountData.getPrivateKey();
MintingAccountData repositoryMintingAccountData = repository.getAccountRepository().getMintingAccount(privateKey);
assertNotNull(repositoryMintingAccountData);
assertEquals(mintingAccountData.toJson().toString(), repositoryMintingAccountData.toJson().toString());
}
repository.saveChanges();
}
}
private TradeBotData createTradeBotData(Repository repository) throws DataException {
byte[] tradePrivateKey = TradeBot.generateTradePrivateKey();
byte[] tradeNativePublicKey = TradeBot.deriveTradeNativePublicKey(tradePrivateKey);
byte[] tradeNativePublicKeyHash = Crypto.hash160(tradeNativePublicKey);
String tradeNativeAddress = Crypto.toAddress(tradeNativePublicKey);
byte[] tradeForeignPublicKey = TradeBot.deriveTradeForeignPublicKey(tradePrivateKey);
byte[] tradeForeignPublicKeyHash = Crypto.hash160(tradeForeignPublicKey);
String receivingAddress = "2N8WCg52ULCtDSMjkgVTm5mtPdCsUptkHWE";
// Convert Litecoin receiving address into public key hash (we only support P2PKH at this time)
Address litecoinReceivingAddress;
try {
litecoinReceivingAddress = Address.fromString(Litecoin.getInstance().getNetworkParameters(), receivingAddress);
} catch (AddressFormatException e) {
throw new DataException("Unsupported Litecoin receiving address: " + receivingAddress);
}
byte[] litecoinReceivingAccountInfo = litecoinReceivingAddress.getHash();
byte[] creatorPublicKey = new byte[32];
PublicKeyAccount creator = new PublicKeyAccount(repository, creatorPublicKey);
long timestamp = NTP.getTime();
String atAddress = "AT_ADDRESS";
long foreignAmount = 1234;
long qortAmount = 5678;
TradeBotData tradeBotData = new TradeBotData(tradePrivateKey, LitecoinACCTv1.NAME,
LitecoinACCTv1TradeBot.State.BOB_WAITING_FOR_AT_CONFIRM.name(), LitecoinACCTv1TradeBot.State.BOB_WAITING_FOR_AT_CONFIRM.value,
creator.getAddress(), atAddress, timestamp, qortAmount,
tradeNativePublicKey, tradeNativePublicKeyHash, tradeNativeAddress,
null, null,
SupportedBlockchain.LITECOIN.name(),
tradeForeignPublicKey, tradeForeignPublicKeyHash,
foreignAmount, null, null, null, litecoinReceivingAccountInfo);
return tradeBotData;
}
private MintingAccountData createMintingAccountData() {
// These don't need to be valid keys - just random 32-byte values for the purposes of testing
byte[] privateKey = new ECKey().getPrivKeyBytes();
byte[] publicKey = new ECKey().getPrivKeyBytes(); // deliberately just another random 32-byte value, not a derived public key
return new MintingAccountData(privateKey, publicKey);
}
private void exportLegacyTradeBotStatesJson(List<TradeBotData> allTradeBotData) throws IOException, DataException {
JSONArray allTradeBotDataJson = new JSONArray();
for (TradeBotData tradeBotData : allTradeBotData) {
JSONObject tradeBotDataJson = tradeBotData.toJson();
allTradeBotDataJson.put(tradeBotDataJson);
}
Path backupDirectory = HSQLDBImportExport.getExportDirectory(true);
String fileName = Paths.get(backupDirectory.toString(), "TradeBotStates.json").toString();
try (FileWriter writer = new FileWriter(fileName)) {
writer.write(allTradeBotDataJson.toString());
}
}
private void deleteExportDirectory() {
// Delete the export directory if it exists
Path exportPath = Paths.get(Settings.getInstance().getExportPath());
try {
FileUtils.deleteDirectory(exportPath.toFile());
} catch (IOException e) {
// Ignore - the directory may not exist
}
}
}
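For reference, the export/import round trip these tests exercise reduces to the short sequence below - a sketch reusing only the HSQLDBImportExport calls shown above (the filename and the boolean passed to getExportDirectory() are taken from the tests; error handling is omitted):

    try (final Repository repository = RepositoryManager.getRepository()) {
        // Write MintingAccounts.json into the export directory
        HSQLDBImportExport.backupMintingAccounts(repository);

        // Resolve the same directory and import the file back into the repository
        Path exportPath = HSQLDBImportExport.getExportDirectory(false);
        Path filePath = Paths.get(exportPath.toString(), "MintingAccounts.json");
        HSQLDBImportExport.importDataFromFile(filePath.toString(), repository);

        repository.saveChanges();
    }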

View File

@@ -0,0 +1,91 @@
package org.qortal.test;
import org.junit.Before;
import org.junit.Test;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.controller.BlockMinter;
import org.qortal.data.at.ATStateData;
import org.qortal.data.block.BlockData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.test.common.AtUtils;
import org.qortal.test.common.Common;
import java.util.ArrayList;
import java.util.List;
import static org.junit.Assert.*;
public class PruneTests extends Common {
@Before
public void beforeTest() throws DataException {
Common.useDefaultSettings();
}
@Test
public void testPruning() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
// Alice self share online
List<PrivateKeyAccount> mintingAndOnlineAccounts = new ArrayList<>();
PrivateKeyAccount aliceSelfShare = Common.getTestAccount(repository, "alice-reward-share");
mintingAndOnlineAccounts.add(aliceSelfShare);
// Deploy an AT so that we have AT state data
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint some blocks
for (int i = 2; i <= 10; i++)
BlockMinter.mintTestingBlock(repository, mintingAndOnlineAccounts.toArray(new PrivateKeyAccount[0]));
// Make sure that all blocks have full AT state data and a state hash
for (Integer i=2; i <= 10; i++) {
BlockData blockData = repository.getBlockRepository().fromHeight(i);
assertNotNull(blockData.getSignature());
assertEquals(i, blockData.getHeight());
List<ATStateData> atStatesDataList = repository.getATRepository().getBlockATStatesAtHeight(i);
assertNotNull(atStatesDataList);
assertFalse(atStatesDataList.isEmpty());
ATStateData atStatesData = repository.getATRepository().getATStateAtHeight(atStatesDataList.get(0).getATAddress(), i);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
}
// Prune blocks 2-5
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(0, 5);
assertEquals(4, numBlocksPruned);
repository.getBlockRepository().setBlockPruneHeight(6);
// Prune AT states for blocks 2-5
int numATStatesPruned = repository.getATRepository().pruneAtStates(0, 5);
assertEquals(4, numATStatesPruned);
repository.getATRepository().setAtPruneHeight(6);
// Make sure that blocks 2-5 are now missing block data and AT states data
for (Integer i=2; i <= 5; i++) {
BlockData blockData = repository.getBlockRepository().fromHeight(i);
assertNull(blockData);
List<ATStateData> atStatesDataList = repository.getATRepository().getBlockATStatesAtHeight(i);
assertTrue(atStatesDataList.isEmpty());
}
// ... but blocks 6-10 have block data and full AT states data
for (Integer i=6; i <= 10; i++) {
BlockData blockData = repository.getBlockRepository().fromHeight(i);
assertNotNull(blockData.getSignature());
List<ATStateData> atStatesDataList = repository.getATRepository().getBlockATStatesAtHeight(i);
assertNotNull(atStatesDataList);
assertFalse(atStatesDataList.isEmpty());
ATStateData atStatesData = repository.getATRepository().getATStateAtHeight(atStatesDataList.get(0).getATAddress(), i);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
}
}
}
}
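A note on the arithmetic asserted above: pruning the range 0-5 deletes four blocks, which implies the genesis block at height 1 is retained and only heights 2-5 are removed. Distilled into a sketch (same repository calls as the test; the genesis-retention rule is inferred from the assertions rather than shown here):

    int numBlocksPruned = repository.getBlockRepository().pruneBlocks(0, 5);
    assertEquals(4, numBlocksPruned); // heights 2, 3, 4, 5 - genesis at height 1 evidently survives
    repository.getBlockRepository().setBlockPruneHeight(6); // record that block data below height 6 is gone

    int numATStatesPruned = repository.getATRepository().pruneAtStates(0, 5);
    assertEquals(4, numATStatesPruned); // AT states follow the same boundary
    repository.getATRepository().setAtPruneHeight(6);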

View File

@@ -3,9 +3,12 @@ package org.qortal.test;
import org.junit.Before;
import org.junit.Test;
import org.qortal.account.Account;
import org.qortal.account.PublicKeyAccount;
import org.qortal.asset.Asset;
import org.qortal.crosschain.BitcoinACCTv1;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.AccountBalanceData;
import org.qortal.data.account.AccountData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
@@ -22,13 +25,8 @@ import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
- import java.util.concurrent.CountDownLatch;
- import java.util.concurrent.ExecutionException;
- import java.util.concurrent.ExecutorService;
- import java.util.concurrent.Executors;
- import java.util.concurrent.Future;
- import java.util.concurrent.ScheduledExecutorService;
- import java.util.concurrent.TimeUnit;
+ import java.util.Random;
+ import java.util.concurrent.*;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
@@ -440,6 +438,119 @@ public class RepositoryTests extends Common {
}
}
@Test
public void testDefrag() throws DataException, TimeoutException {
try (final HSQLDBRepository hsqldb = (HSQLDBRepository) RepositoryManager.getRepository()) {
this.populateWithRandomData(hsqldb);
hsqldb.performPeriodicMaintenance(10 * 1000L);
}
}
@Test
public void testDefragOnDisk() throws DataException, TimeoutException {
Common.useSettingsAndDb(testSettingsFilename, false);
try (final HSQLDBRepository hsqldb = (HSQLDBRepository) RepositoryManager.getRepository()) {
this.populateWithRandomData(hsqldb);
hsqldb.performPeriodicMaintenance(10 * 1000L);
}
}
@Test
public void testMultipleDefrags() throws DataException, TimeoutException {
// Populate the database with random data, then defrag it several times
try (final HSQLDBRepository hsqldb = (HSQLDBRepository) RepositoryManager.getRepository()) {
this.populateWithRandomData(hsqldb);
for (int i = 0; i < 10; i++) {
hsqldb.performPeriodicMaintenance(10 * 1000L);
}
}
}
@Test
public void testMultipleDefragsOnDisk() throws DataException, TimeoutException {
Common.useSettingsAndDb(testSettingsFilename, false);
// Populate the database with random data, then defrag it several times
try (final HSQLDBRepository hsqldb = (HSQLDBRepository) RepositoryManager.getRepository()) {
this.populateWithRandomData(hsqldb);
for (int i = 0; i < 10; i++) {
hsqldb.performPeriodicMaintenance(10 * 1000L);
}
}
}
@Test
public void testMultipleDefragsWithDifferentData() throws DataException, TimeoutException {
for (int i=0; i<10; i++) {
try (final HSQLDBRepository hsqldb = (HSQLDBRepository) RepositoryManager.getRepository()) {
this.populateWithRandomData(hsqldb);
hsqldb.performPeriodicMaintenance(10 * 1000L);
}
}
}
@Test
public void testMultipleDefragsOnDiskWithDifferentData() throws DataException, TimeoutException {
Common.useSettingsAndDb(testSettingsFilename, false);
for (int i=0; i<10; i++) {
try (final HSQLDBRepository hsqldb = (HSQLDBRepository) RepositoryManager.getRepository()) {
this.populateWithRandomData(hsqldb);
hsqldb.performPeriodicMaintenance(10 * 1000L);
}
}
}
private void populateWithRandomData(HSQLDBRepository repository) throws DataException {
Random random = new Random();
System.out.println("Creating random accounts...");
// Generate some random accounts
List<Account> accounts = new ArrayList<>();
for (int ai = 0; ai < 20; ++ai) {
byte[] publicKey = new byte[32];
random.nextBytes(publicKey);
PublicKeyAccount account = new PublicKeyAccount(repository, publicKey);
accounts.add(account);
AccountData accountData = new AccountData(account.getAddress());
repository.getAccountRepository().ensureAccount(accountData);
}
repository.saveChanges();
System.out.println("Creating random balances...");
// Fill with lots of random balances
for (int i = 0; i < 100000; ++i) {
Account account = accounts.get(random.nextInt(accounts.size()));
int assetId = random.nextInt(2);
long balance = random.nextInt(100000);
AccountBalanceData accountBalanceData = new AccountBalanceData(account.getAddress(), assetId, balance);
repository.getAccountRepository().save(accountBalanceData);
// Maybe mint a block to change height
if (i > 0 && (i % 1000) == 0)
BlockUtils.mintBlock(repository);
}
repository.saveChanges();
}
public static void hsqldbSleep(int millis) throws SQLException {
System.out.println(String.format("HSQLDB sleep() thread ID: %s", Thread.currentThread().getId()));

View File

@@ -21,6 +21,7 @@ import org.qortal.group.Group;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.test.common.AtUtils;
import org.qortal.test.common.BlockUtils;
import org.qortal.test.common.Common;
import org.qortal.test.common.TransactionUtils;
@@ -35,13 +36,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetATStateAtHeightWithData() throws DataException {
- byte[] creationBytes = buildSimpleAT();
+ byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
- DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
+ DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -58,13 +59,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetATStateAtHeightWithoutData() throws DataException {
- byte[] creationBytes = buildSimpleAT();
+ byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
- DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
+ DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -75,7 +76,7 @@ public class AtRepositoryTests extends Common {
Integer testHeight = maxHeight - 2;
// Trim AT state data
- repository.getATRepository().prepareForAtStateTrimming();
+ repository.getATRepository().rebuildLatestAtStates();
repository.getATRepository().trimAtStates(2, maxHeight, 1000);
ATStateData atStateData = repository.getATRepository().getATStateAtHeight(atAddress, testHeight);
@@ -87,13 +88,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetLatestATStateWithData() throws DataException {
- byte[] creationBytes = buildSimpleAT();
+ byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
- DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
+ DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -111,13 +112,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetLatestATStatePostTrimming() throws DataException {
- byte[] creationBytes = buildSimpleAT();
+ byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
- DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
+ DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -129,7 +130,7 @@ public class AtRepositoryTests extends Common {
Integer testHeight = blockchainHeight;
// Trim AT state data
- repository.getATRepository().prepareForAtStateTrimming();
+ repository.getATRepository().rebuildLatestAtStates();
// COMMIT to check latest AT states persist / TEMPORARY table interaction
repository.saveChanges();
@@ -144,14 +145,66 @@ public class AtRepositoryTests extends Common {
}
@Test
- public void testGetMatchingFinalATStatesWithoutDataValue() throws DataException {
- byte[] creationBytes = buildSimpleAT();
+ public void testOrphanTrimmedATStates() throws DataException {
+ byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
- DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
+ DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
for (int i = 0; i < 10; ++i)
BlockUtils.mintBlock(repository);
int blockchainHeight = repository.getBlockRepository().getBlockchainHeight();
int maxTrimHeight = blockchainHeight - 4;
Integer testHeight = maxTrimHeight + 1;
// Trim AT state data
repository.getATRepository().rebuildLatestAtStates();
repository.saveChanges();
repository.getATRepository().trimAtStates(2, maxTrimHeight, 1000);
// Orphan 3 blocks
// This leaves one more untrimmed block, so the latest AT state should be available
BlockUtils.orphanBlocks(repository, 3);
ATStateData atStateData = repository.getATRepository().getLatestATState(atAddress);
assertEquals(testHeight, atStateData.getHeight());
// We should always have the latest AT state data available
assertNotNull(atStateData.getStateData());
// Orphan 1 more block
Exception exception = null;
try {
BlockUtils.orphanBlocks(repository, 1);
} catch (DataException e) {
exception = e;
}
// Ensure that a DataException is thrown because there is no more AT state data available
assertNotNull(exception);
assertEquals(DataException.class, exception.getClass());
assertEquals(String.format("Can't find previous AT state data for %s", atAddress), exception.getMessage());
// FUTURE: we may be able to retain unique AT states when trimming, to avoid this exception
// and allow orphaning back through blocks with trimmed AT states.
}
}
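// Summary of the trim/orphan boundary exercised above: trimAtStates() removes per-block AT state
// data up to maxTrimHeight while evidently retaining each AT's latest state, so orphaning is safe
// back to the first untrimmed block (maxTrimHeight + 1); orphaning one block further throws
// DataException("Can't find previous AT state data for ...") as asserted.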
@Test
public void testGetMatchingFinalATStatesWithoutDataValue() throws DataException {
byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -191,13 +244,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetMatchingFinalATStatesWithDataValue() throws DataException {
- byte[] creationBytes = buildSimpleAT();
+ byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
- DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
+ DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -237,13 +290,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetBlockATStatesAtHeightWithData() throws DataException {
- byte[] creationBytes = buildSimpleAT();
+ byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
- doDeploy(repository, deployer, creationBytes, fundingAmount);
+ AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint a few blocks
for (int i = 0; i < 10; ++i)
@@ -264,13 +317,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testGetBlockATStatesAtHeightWithoutData() throws DataException {
- byte[] creationBytes = buildSimpleAT();
+ byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
- doDeploy(repository, deployer, creationBytes, fundingAmount);
+ AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint a few blocks
for (int i = 0; i < 10; ++i)
@@ -280,7 +333,7 @@ public class AtRepositoryTests extends Common {
Integer testHeight = maxHeight - 2;
// Trim AT state data
- repository.getATRepository().prepareForAtStateTrimming();
+ repository.getATRepository().rebuildLatestAtStates();
repository.getATRepository().trimAtStates(2, maxHeight, 1000);
List<ATStateData> atStates = repository.getATRepository().getBlockATStatesAtHeight(testHeight);
@@ -297,13 +350,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testSaveATStateWithData() throws DataException {
- byte[] creationBytes = buildSimpleAT();
+ byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
- DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
+ DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -328,13 +381,13 @@ public class AtRepositoryTests extends Common {
@Test
public void testSaveATStateWithoutData() throws DataException {
- byte[] creationBytes = buildSimpleAT();
+ byte[] creationBytes = AtUtils.buildSimpleAT();
try (final Repository repository = RepositoryManager.getRepository()) {
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
long fundingAmount = 1_00000000L;
- DeployAtTransaction deployAtTransaction = doDeploy(repository, deployer, creationBytes, fundingAmount);
+ DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
// Mint a few blocks
@@ -364,67 +417,4 @@ public class AtRepositoryTests extends Common {
assertNull(atStateData.getStateData());
}
}
- private byte[] buildSimpleAT() {
- // Pretend we use 4 values in data segment
- int addrCounter = 4;
- // Data segment
- ByteBuffer dataByteBuffer = ByteBuffer.allocate(addrCounter * MachineState.VALUE_SIZE);
- ByteBuffer codeByteBuffer = ByteBuffer.allocate(512);
- // Two-pass version
- for (int pass = 0; pass < 2; ++pass) {
- codeByteBuffer.clear();
- try {
- // Stop and wait for next block
- codeByteBuffer.put(OpCode.STP_IMD.compile());
- } catch (CompilationException e) {
- throw new IllegalStateException("Unable to compile AT?", e);
- }
- }
- codeByteBuffer.flip();
- byte[] codeBytes = new byte[codeByteBuffer.limit()];
- codeByteBuffer.get(codeBytes);
- final short ciyamAtVersion = 2;
- final short numCallStackPages = 0;
- final short numUserStackPages = 0;
- final long minActivationAmount = 0L;
- return MachineState.toCreationBytes(ciyamAtVersion, codeBytes, dataByteBuffer.array(), numCallStackPages, numUserStackPages, minActivationAmount);
- }
- private DeployAtTransaction doDeploy(Repository repository, PrivateKeyAccount deployer, byte[] creationBytes, long fundingAmount) throws DataException {
- long txTimestamp = System.currentTimeMillis();
- byte[] lastReference = deployer.getLastReference();
- if (lastReference == null) {
- System.err.println(String.format("Qortal account %s has no last reference", deployer.getAddress()));
- System.exit(2);
- }
- Long fee = null;
- String name = "Test AT";
- String description = "Test AT";
- String atType = "Test";
- String tags = "TEST";
- BaseTransactionData baseTransactionData = new BaseTransactionData(txTimestamp, Group.NO_GROUP, lastReference, deployer.getPublicKey(), fee, null);
- TransactionData deployAtTransactionData = new DeployAtTransactionData(baseTransactionData, name, description, atType, tags, creationBytes, fundingAmount, Asset.QORT);
- DeployAtTransaction deployAtTransaction = new DeployAtTransaction(repository, deployAtTransactionData);
- fee = deployAtTransaction.calcRecommendedFee();
- deployAtTransactionData.setFee(fee);
- TransactionUtils.signAndMint(repository, deployAtTransactionData, deployer);
- return deployAtTransaction;
- }
}

View File

@@ -0,0 +1,81 @@
package org.qortal.test.common;
import org.ciyam.at.CompilationException;
import org.ciyam.at.MachineState;
import org.ciyam.at.OpCode;
import org.qortal.account.PrivateKeyAccount;
import org.qortal.asset.Asset;
import org.qortal.data.transaction.BaseTransactionData;
import org.qortal.data.transaction.DeployAtTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.group.Group;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.transaction.DeployAtTransaction;
import java.nio.ByteBuffer;
public class AtUtils {
public static byte[] buildSimpleAT() {
// Pretend we use 4 values in data segment
int addrCounter = 4;
// Data segment
ByteBuffer dataByteBuffer = ByteBuffer.allocate(addrCounter * MachineState.VALUE_SIZE);
ByteBuffer codeByteBuffer = ByteBuffer.allocate(512);
// Two-pass version
for (int pass = 0; pass < 2; ++pass) {
codeByteBuffer.clear();
try {
// Stop and wait for next block
codeByteBuffer.put(OpCode.STP_IMD.compile());
} catch (CompilationException e) {
throw new IllegalStateException("Unable to compile AT?", e);
}
}
codeByteBuffer.flip();
byte[] codeBytes = new byte[codeByteBuffer.limit()];
codeByteBuffer.get(codeBytes);
final short ciyamAtVersion = 2;
final short numCallStackPages = 0;
final short numUserStackPages = 0;
final long minActivationAmount = 0L;
return MachineState.toCreationBytes(ciyamAtVersion, codeBytes, dataByteBuffer.array(), numCallStackPages, numUserStackPages, minActivationAmount);
}
public static DeployAtTransaction doDeployAT(Repository repository, PrivateKeyAccount deployer, byte[] creationBytes, long fundingAmount) throws DataException {
long txTimestamp = System.currentTimeMillis();
byte[] lastReference = deployer.getLastReference();
if (lastReference == null) {
System.err.println(String.format("Qortal account %s has no last reference", deployer.getAddress()));
System.exit(2);
}
Long fee = null;
String name = "Test AT";
String description = "Test AT";
String atType = "Test";
String tags = "TEST";
BaseTransactionData baseTransactionData = new BaseTransactionData(txTimestamp, Group.NO_GROUP, lastReference, deployer.getPublicKey(), fee, null);
TransactionData deployAtTransactionData = new DeployAtTransactionData(baseTransactionData, name, description, atType, tags, creationBytes, fundingAmount, Asset.QORT);
DeployAtTransaction deployAtTransaction = new DeployAtTransaction(repository, deployAtTransactionData);
fee = deployAtTransaction.calcRecommendedFee();
deployAtTransactionData.setFee(fee);
TransactionUtils.signAndMint(repository, deployAtTransactionData, deployer);
return deployAtTransaction;
}
}
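Typical usage of these helpers, as seen in the refactored tests above - a sketch assuming a Common-based test where the "alice" test account exists and has a last reference:

    byte[] creationBytes = AtUtils.buildSimpleAT();
    try (final Repository repository = RepositoryManager.getRepository()) {
        PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
        long fundingAmount = 1_00000000L; // 1 QORT, assuming 8 decimal places
        DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
        String atAddress = deployAtTransaction.getATAccount().getAddress(); // address of the newly-deployed AT
    }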

View File

@@ -2,8 +2,11 @@ package org.qortal.test.common;
import static org.junit.Assert.*;
import java.io.IOException;
import java.math.BigDecimal;
import java.net.URL;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.Security;
import java.util.ArrayList;
import java.util.Collections;
@@ -15,6 +18,7 @@ import java.util.function.Function;
import java.util.function.Predicate;
import java.util.stream.Collectors;
import org.apache.commons.io.FileUtils;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
@@ -46,9 +50,15 @@ public class Common {
private static final Logger LOGGER = LogManager.getLogger(Common.class);
- public static final String testConnectionUrl = "jdbc:hsqldb:mem:testdb";
- // For debugging, use this instead to write DB to disk for examination:
- // public static final String testConnectionUrl = "jdbc:hsqldb:file:testdb/blockchain;create=true";
+ public static final String testConnectionUrlMemory = "jdbc:hsqldb:mem:testdb";
+ public static final String testConnectionUrlDisk = "jdbc:hsqldb:file:%s/blockchain;create=true";
// For debugging, use testConnectionUrlDisk instead of memory, to write DB to disk for examination.
// This can be achieved using `Common.useSettingsAndDb(Common.testSettingsFilename, false);`
// where `false` specifies to use a repository on disk rather than one in memory.
// Make sure to also comment out `Common.deleteTestRepository();` in closeRepository() below, so that
// the files remain after the test finishes.
public static final String testSettingsFilename = "test-settings-v2.json";
@@ -100,7 +110,7 @@ public class Common {
return testAccountsByName.values().stream().map(account -> new TestAccount(repository, account)).collect(Collectors.toList());
}
- public static void useSettings(String settingsFilename) throws DataException {
+ public static void useSettingsAndDb(String settingsFilename, boolean dbInMemory) throws DataException {
closeRepository();
// Load/check settings, which potentially sets up blockchain config, etc.
@@ -109,11 +119,15 @@ public class Common {
assertNotNull("Test settings JSON file not found", testSettingsUrl);
Settings.fileInstance(testSettingsUrl.getPath());
- setRepository();
+ setRepository(dbInMemory);
resetBlockchain();
}
public static void useSettings(String settingsFilename) throws DataException {
Common.useSettingsAndDb(settingsFilename, true);
}
public static void useDefaultSettings() throws DataException {
useSettings(testSettingsFilename);
NTP.setFixedOffset(Settings.getInstance().getTestNtpOffset());
@@ -186,15 +200,33 @@ public class Common {
assertTrue(String.format("Non-genesis %s remains", typeName), remainingClone.isEmpty());
}
- @BeforeClass
- public static void setRepository() throws DataException {
- RepositoryFactory repositoryFactory = new HSQLDBRepositoryFactory(testConnectionUrl);
+ public static void setRepository(boolean inMemory) throws DataException {
+ String connectionUrlDisk = String.format(testConnectionUrlDisk, Settings.getInstance().getRepositoryPath());
+ String connectionUrl = inMemory ? testConnectionUrlMemory : connectionUrlDisk;
+ RepositoryFactory repositoryFactory = new HSQLDBRepositoryFactory(connectionUrl);
RepositoryManager.setRepositoryFactory(repositoryFactory);
}
public static void deleteTestRepository() throws DataException {
// Delete repository directory if exists
Path repositoryPath = Paths.get(Settings.getInstance().getRepositoryPath());
try {
FileUtils.deleteDirectory(repositoryPath.toFile());
} catch (IOException e) {
throw new DataException(String.format("Unable to delete test repository: %s", e.getMessage()));
}
}
@BeforeClass
public static void setRepositoryInMemory() throws DataException {
Common.deleteTestRepository();
Common.setRepository(true);
}
@AfterClass
public static void closeRepository() throws DataException {
RepositoryManager.closeRepositoryFactory();
Common.deleteTestRepository(); // Comment out this line if you need to inspect the database after running a test
}
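Putting the debugging comment above into practice, a test needing post-mortem database inspection would be set up like this (a sketch using only the methods shown above):

    // Use an on-disk HSQLDB repository instead of the default in-memory one
    Common.useSettingsAndDb(Common.testSettingsFilename, false); // false = on disk
    // ... exercise the code under test ...
    // Also comment out Common.deleteTestRepository() in closeRepository() above,
    // otherwise the database files are deleted when the test finishes.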
// Test assertions

View File

@@ -1,9 +1,13 @@
{
"repositoryPath": "testdb",
"bitcoinNet": "REGTEST",
"litecoinNet": "REGTEST",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0
"minPeers": 0,
"pruneBlockLimit": 100
}

View File

@@ -0,0 +1,13 @@
{
"bitcoinNet": "TEST3",
"litecoinNet": "TEST3",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0,
"pruneBlockLimit": 100,
"repositoryPath": "dbtest"
}

View File

@@ -1,7 +1,11 @@
{
"repositoryPath": "testdb",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-founder-rewards.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0
"minPeers": 0,
"pruneBlockLimit": 100
}

View File

@@ -1,7 +1,11 @@
{
"repositoryPath": "testdb",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-leftover-reward.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0
"minPeers": 0,
"pruneBlockLimit": 100
}

View File

@@ -1,7 +1,11 @@
{
"repositoryPath": "testdb",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-minting.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0
"minPeers": 0,
"pruneBlockLimit": 100
}

View File

@@ -1,7 +1,11 @@
{
"repositoryPath": "testdb",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-qora-holder-extremes.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0
"minPeers": 0,
"pruneBlockLimit": 100
}

View File

@@ -1,7 +1,11 @@
{
"repositoryPath": "testdb",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-qora-holder.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0
"minPeers": 0,
"pruneBlockLimit": 100
}

View File

@@ -1,7 +1,11 @@
{
"repositoryPath": "testdb",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-reward-levels.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0
"minPeers": 0,
"pruneBlockLimit": 100
}

View File

@@ -1,7 +1,11 @@
{
"repositoryPath": "testdb",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2-reward-scaling.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0
"minPeers": 0,
"pruneBlockLimit": 100
}

View File

@@ -1,9 +1,14 @@
{
"repositoryPath": "testdb",
"bitcoinNet": "TEST3",
"litecoinNet": "TEST3",
"restrictedApi": false,
"blockchainConfig": "src/test/resources/test-chain-v2.json",
"exportPath": "qortal-backup-test",
"bootstrap": false,
"wipeUnconfirmedOnStart": false,
"testNtpOffset": 0,
"minPeers": 0
"minPeers": 0,
"pruneBlockLimit": 100,
"bootstrapFilenamePrefix": "test-"
}